instance_id (string, length 17–39) | repo (string, 8 classes) | issue_id (string, length 14–34) | pr_id (string, length 14–34) | linking_methods (list, length 1–3) | base_commit (string, length 40) | merge_commit (string, length 0–40, nullable) | hints_text (list, length 0–106) | resolved_comments (list, length 0–119) | created_at (timestamp[ns, tz=UTC]) | labeled_as (list, length 0–7) | problem_title (string, length 7–174) | problem_statement (string, length 0–55.4k) | gold_files (list, length 0–10) | gold_files_postpatch (list, length 1–10) | test_files (list, length 0–60) | gold_patch (string, length 220–5.83M) | test_patch (string, length 386–194k, nullable) | split_random (string, 3 classes) | split_time (string, 3 classes) | issue_start_time (timestamp[ns]) | issue_created_at (timestamp[ns, tz=UTC]) | issue_by_user (string, length 3–21) | split_repo (string, 3 classes) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
netty/netty/3945_3948 | netty/netty | netty/netty/3945 | netty/netty/3948 | [
"timestamp(timedelta=11.0, similarity=0.9110794132522602)"
] | a7f83aa23ebe14669119aadb2bbd3012f94fdf64 | 2ca6e201d9d4aac2929e67710d76f28a725214c0 | [
"@blucas makes sense. We love PRs as you know ;)\n",
"+1 to PRs :)\n",
"Let me fix this.\n",
"Fixed by https://github.com/netty/netty/pull/3948\n"
] | [] | 2015-07-07T17:33:17Z | [
"defect"
] | Http2ConnectionHandler breaks channelReadComplete pipeline notification | Netty Version: 5.0.0-Alpha3-SNAPSHOT
Hi guys,
`Http2ConnectionHandler` overrides `channelReadComplete(...)` [here](https://github.com/netty/netty/blob/master/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java#L421) but fails to pass notification further down the pipeline by calling `super.channelReadComplete(...)` or `ctx.fireChannelReadComplete(...)`. I assume it should call `super.channelReadComplete(...)`, what are your thoughts?
/cc @Scottmitch @nmittler @louiscryan as you guys know more about this :)
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java"
] | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java"
] | [] | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java
index 844cef3c5ba..36073ed1bcc 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java
@@ -455,7 +455,11 @@ public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise)
public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
// Trigger flush after read on the assumption that flush is cheap if there is nothing to write and that
// for flow-control the read may release window that causes data to be written that can now be flushed.
- flush(ctx);
+ try {
+ flush(ctx);
+ } finally {
+ super.channelReadComplete(ctx);
+ }
}
/**
| null | train | train | 2015-07-07T22:50:23 | 2015-07-07T09:15:14Z | blucas | val |
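The gold patch in the row above wraps `flush(ctx)` in a `try`/`finally` so that `super.channelReadComplete(ctx)` always fires, even if the flush throws. The following is a minimal, Netty-free sketch of that pattern; the `Downstream` and `FlushingHandler` types are stand-ins invented for illustration, not Netty API.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class ReadCompletePropagation {
    interface Downstream {
        void fireChannelReadComplete();
    }

    static class FlushingHandler {
        private final Runnable flush;

        FlushingHandler(Runnable flush) {
            this.flush = flush;
        }

        // Mirrors the patched channelReadComplete: the downstream
        // notification must run even when the flush throws.
        void channelReadComplete(Downstream next) {
            try {
                flush.run();
            } finally {
                next.fireChannelReadComplete();
            }
        }
    }

    // Returns true if the downstream notification fired despite a failing flush.
    public static boolean propagatesOnFailure() {
        AtomicBoolean notified = new AtomicBoolean(false);
        FlushingHandler handler = new FlushingHandler(() -> {
            throw new IllegalStateException("flush failed");
        });
        try {
            handler.channelReadComplete(() -> notified.set(true));
        } catch (IllegalStateException expected) {
            // the flush failure still surfaces after the notification fires
        }
        return notified.get();
    }
}
```

Note that the `finally` block also means an exception from the flush is not swallowed: it propagates to the caller after the downstream event has been delivered.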
netty/netty/3921_3951 | netty/netty | netty/netty/3921 | netty/netty/3951 | [
"timestamp(timedelta=78683.0, similarity=0.8417185682114167)"
] | 95b7de3a9f2260ac7a5ce41b878fe3c07b9d8200 | 8b7a8986499551052ab61470d990ab004eaa05a9 | [
"I misread the code, I thought it was adding the handlers after registering with the event loop, but it's actually adding the handlers before registering with the event loop, which is clearly not allowed.\n",
"@jroper can you show me the code you use ?\n",
"Sorry, I just noticed... the problem is that my `handl... | [
"Can you do this with real javadocs style ?\n\n``` java\n/**\n * ...\n * ...\n */\n```\n",
"same as above \n",
"also could you open an openjdk issue and link it ?\n",
"@normanmaurer Sure but what does this buy us?\n",
"Add CRLF after `**` \n",
"Add CRLF after `**`\n",
"That you can click the link ;)\n\n... | 2015-07-07T18:51:37Z | [
"defect"
] | EmbeddedChannel constructor throws IllegalStateException | On Netty 4.0.29, when I try to construct an `EmbeddedChannel`, I get an `IllegalStateException`:
```
java.lang.IllegalStateException: channel not registered to an event loop
at io.netty.channel.AbstractChannel.eventLoop(AbstractChannel.java:111)
at io.netty.channel.AbstractChannelHandlerContext.executor(AbstractChannelHandlerContext.java:103)
at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:222)
at io.netty.channel.DefaultChannelPipeline.fireExceptionCaught(DefaultChannelPipeline.java:834)
at io.netty.channel.DefaultChannelPipeline.callHandlerAdded0(DefaultChannelPipeline.java:504)
at io.netty.channel.DefaultChannelPipeline.callHandlerAdded(DefaultChannelPipeline.java:482)
at io.netty.channel.DefaultChannelPipeline.addLast0(DefaultChannelPipeline.java:146)
at io.netty.channel.DefaultChannelPipeline.addLast(DefaultChannelPipeline.java:129)
at io.netty.channel.DefaultChannelPipeline.addLast(DefaultChannelPipeline.java:257)
at io.netty.channel.DefaultChannelPipeline.addLast(DefaultChannelPipeline.java:244)
at io.netty.channel.embedded.EmbeddedChannel.<init>(EmbeddedChannel.java:79)
```
I'm not sure what the problem is, it first registers the event loop, this is usually an asynchronous operation so adding the handler straight after would usually be a race condition, but the implementation of the `EmbeddedEventLoop` register looks like it should be synchronous, so it looks like it should work with no problems.
| [
"transport/src/main/java/io/netty/channel/nio/AbstractNioChannel.java",
"transport/src/main/java/io/netty/channel/socket/nio/NioSocketChannel.java"
] | [
"transport/src/main/java/io/netty/channel/nio/AbstractNioChannel.java",
"transport/src/main/java/io/netty/channel/socket/nio/NioSocketChannel.java"
] | [] | diff --git a/transport/src/main/java/io/netty/channel/nio/AbstractNioChannel.java b/transport/src/main/java/io/netty/channel/nio/AbstractNioChannel.java
index ba68408c23c..df6232e2a8b 100644
--- a/transport/src/main/java/io/netty/channel/nio/AbstractNioChannel.java
+++ b/transport/src/main/java/io/netty/channel/nio/AbstractNioChannel.java
@@ -51,7 +51,7 @@ public abstract class AbstractNioChannel extends AbstractChannel {
private static final InternalLogger logger =
InternalLoggerFactory.getInstance(AbstractNioChannel.class);
- private static final ClosedChannelException CLOSED_CHANNEL_EXCEPTION = new ClosedChannelException();
+ protected static final ClosedChannelException CLOSED_CHANNEL_EXCEPTION = new ClosedChannelException();
static {
CLOSED_CHANNEL_EXCEPTION.setStackTrace(EmptyArrays.EMPTY_STACK_TRACE);
diff --git a/transport/src/main/java/io/netty/channel/socket/nio/NioSocketChannel.java b/transport/src/main/java/io/netty/channel/socket/nio/NioSocketChannel.java
index 5d7aa121ba3..b706d453fad 100644
--- a/transport/src/main/java/io/netty/channel/socket/nio/NioSocketChannel.java
+++ b/transport/src/main/java/io/netty/channel/socket/nio/NioSocketChannel.java
@@ -284,7 +284,21 @@ protected void doWrite(ChannelOutboundBuffer in) throws Exception {
// Only one ByteBuf so use non-gathering write
ByteBuffer nioBuffer = nioBuffers[0];
for (int i = config().getWriteSpinCount() - 1; i >= 0; i --) {
- final int localWrittenBytes = ch.write(nioBuffer);
+ final int localWrittenBytes;
+ try {
+ localWrittenBytes = ch.write(nioBuffer);
+ } catch (IllegalArgumentException e) {
+ /**
+ * There appears to be a JDK bug that when a write operation occurs on a direct buffer,
+ * and the associated FD is closed, the write operation is done and the position of the
+ * ByteBuffer is incremented using a "large" number from the write on an invalid FD which
+ * exceeds the limit of the ByteBuffer and throws an IllegalArgumentException. The higher
+ * levels of Netty expect an IOException so we must re-throw. See
+ * <a href="http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/8u40-b25/
+ * sun/nio/ch/IOUtil.java#96">IOUtil.java</a>.
+ */
+ throw CLOSED_CHANNEL_EXCEPTION;
+ }
if (localWrittenBytes == 0) {
setOpWrite = true;
break;
@@ -299,7 +313,21 @@ protected void doWrite(ChannelOutboundBuffer in) throws Exception {
break;
default:
for (int i = config().getWriteSpinCount() - 1; i >= 0; i --) {
- final long localWrittenBytes = ch.write(nioBuffers, 0, nioBufferCnt);
+ final long localWrittenBytes;
+ try {
+ localWrittenBytes = ch.write(nioBuffers, 0, nioBufferCnt);
+ } catch (IllegalArgumentException e) {
+ /**
+ * There appears to be a JDK bug that when a write operation occurs on a direct buffer,
+ * and the associated FD is closed, the write operation is done and the position of the
+ * ByteBuffer is incremented using a "large" number from the write on an invalid FD which
+ * exceeds the limit of the ByteBuffer and throws an IllegalArgumentException. The higher
+ * levels of Netty expect an IOException so we must re-throw. See
+ * <a href="http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/8u40-b25/
+ * sun/nio/ch/IOUtil.java#96">IOUtil.java</a>.
+ */
+ throw CLOSED_CHANNEL_EXCEPTION;
+ }
if (localWrittenBytes == 0) {
setOpWrite = true;
break;
| null | train | train | 2015-07-07T10:00:56 | 2015-06-29T08:00:52Z | jroper | val |
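The fix in the row above translates an `IllegalArgumentException` thrown by the JDK write path (observed when the file descriptor is closed mid-write) into the `ClosedChannelException` that Netty's higher layers expect. A self-contained sketch of that translation, with `BufferWriter` as an invented stand-in for the channel's write method:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedChannelException;

public class WriteExceptionTranslation {
    // Functional stand-in for a channel write; invented for illustration.
    interface BufferWriter {
        int write(ByteBuffer buf) throws IOException;
    }

    // Rethrow the JDK's IllegalArgumentException as the IOException subtype
    // that callers are prepared to handle, as in the patch above.
    static int safeWrite(BufferWriter ch, ByteBuffer buf) throws IOException {
        try {
            return ch.write(buf);
        } catch (IllegalArgumentException e) {
            throw new ClosedChannelException();
        }
    }

    // Simulates the buggy path: the underlying write blows up with
    // IllegalArgumentException, and the caller sees ClosedChannelException.
    public static boolean translates() {
        BufferWriter broken = buf -> {
            throw new IllegalArgumentException("position > limit");
        };
        try {
            safeWrite(broken, ByteBuffer.allocate(8));
            return false;
        } catch (ClosedChannelException expected) {
            return true;
        } catch (IOException other) {
            return false;
        }
    }
}
```

Since `ClosedChannelException` extends `IOException`, existing `catch (IOException)` handlers upstream keep working without change.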
netty/netty/3946_3975 | netty/netty | netty/netty/3946 | netty/netty/3975 | [
"timestamp(timedelta=1325.0, similarity=0.8820260019713634)"
] | 3d6819623efc2f634009094ac8cf6a362fd3c2c1 | a38b9761c9c81f51945d70400f99e5ba687b91d1 | [
"/cc @Scottmitch \n\nLooks like `HttpToHttp2ConnectionHandler` currently requires `FullHttpMessage`. I don't recall whether or not there was a reason that it couldn't take `HttpMessage` or `HttpContent`. Contributions are always welcome :) \n",
"I think the main driver was HTTP generally has no proper framing a... | [
"Should probably set `release = false` just before the write.\n",
"Just to aid in readability, can you separate these blocks out into separate methods (e.g. `writeHttpMessage` and `writeHttpContent`)?\n",
"I think we should be able to simplify the logic a bit. Could we do something like this:\n\n``` java\nHttp... | 2015-07-11T11:14:15Z | [] | HttpToHttp2ConnectionHandler does not support converting from Http(Message|Content) to Http2 frames | Is there a reason why `HttpToHttp2ConnectionHandler` does not support converting from `Http(Message|Content)` to Http2 frames? Is there something that the handler requires that it cannot retrieve? Or some data missing from `Http(Message|Content)` that is required?
My use case is that I have an `HttpContentCompressor` further down the pipeline, which ends up creating `HttpMessage` and `HttpContent` objects and sending them to `HttpToHttp2ConnectionHandler` which is then not being converted to Http2 frames.
For now I have extended `HttpToHttp2ConnectionHandler` and overridden the `write(...)` to support `Http(Message|Content)`, but I think something like this should be in Netty directly.
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/HttpToHttp2ConnectionHandler.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/HttpUtil.java"
] | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/HttpToHttp2ConnectionHandler.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/HttpUtil.java"
] | [
"codec-http2/src/test/java/io/netty/handler/codec/http2/HttpToHttp2ConnectionHandlerTest.java"
] | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/HttpToHttp2ConnectionHandler.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/HttpToHttp2ConnectionHandler.java
index ca3676617d8..6bb45045dcf 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/HttpToHttp2ConnectionHandler.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/HttpToHttp2ConnectionHandler.java
@@ -15,11 +15,16 @@
package io.netty.handler.codec.http2;
+import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelPromise;
import io.netty.handler.codec.http.FullHttpMessage;
+import io.netty.handler.codec.http.HttpContent;
import io.netty.handler.codec.http.HttpHeaders;
+import io.netty.handler.codec.http.HttpMessage;
+import io.netty.handler.codec.http.LastHttpContent;
import io.netty.handler.codec.http2.Http2CodecUtil.SimpleChannelPromiseAggregator;
+import io.netty.util.ReferenceCountUtil;
/**
* Translates HTTP/1.x object writes into HTTP/2 frames.
@@ -27,6 +32,9 @@
* See {@link InboundHttp2ToHttpAdapter} to get translation from HTTP/2 frames to HTTP/1.x objects.
*/
public class HttpToHttp2ConnectionHandler extends Http2ConnectionHandler {
+
+ private int currentStreamId;
+
public HttpToHttp2ConnectionHandler(boolean server, Http2FrameListener listener) {
super(server, listener);
}
@@ -57,45 +65,64 @@ private int getStreamId(HttpHeaders httpHeaders) throws Exception {
}
/**
- * Handles conversion of a {@link FullHttpMessage} to HTTP/2 frames.
+ * Handles conversion of {@link HttpMessage} and {@link HttpContent} to HTTP/2 frames.
*/
@Override
public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) {
- if (msg instanceof FullHttpMessage) {
- FullHttpMessage httpMsg = (FullHttpMessage) msg;
- boolean hasData = httpMsg.content().isReadable();
- boolean httpMsgNeedRelease = true;
- SimpleChannelPromiseAggregator promiseAggregator = null;
- try {
+
+ if (!(msg instanceof HttpMessage || msg instanceof HttpContent)) {
+ ctx.write(msg, promise);
+ return;
+ }
+
+ boolean release = true;
+ SimpleChannelPromiseAggregator promiseAggregator =
+ new SimpleChannelPromiseAggregator(promise, ctx.channel(), ctx.executor());
+ try {
+ Http2ConnectionEncoder encoder = encoder();
+ boolean endStream = false;
+ if (msg instanceof HttpMessage) {
+ final HttpMessage httpMsg = (HttpMessage) msg;
+
// Provide the user the opportunity to specify the streamId
- int streamId = getStreamId(httpMsg.headers());
+ currentStreamId = getStreamId(httpMsg.headers());
// Convert and write the headers.
Http2Headers http2Headers = HttpUtil.toHttp2Headers(httpMsg);
- Http2ConnectionEncoder encoder = encoder();
-
- if (hasData) {
- promiseAggregator = new SimpleChannelPromiseAggregator(promise, ctx.channel(), ctx.executor());
- encoder.writeHeaders(ctx, streamId, http2Headers, 0, false, promiseAggregator.newPromise());
- httpMsgNeedRelease = false;
- encoder.writeData(ctx, streamId, httpMsg.content(), 0, true, promiseAggregator.newPromise());
- promiseAggregator.doneAllocatingPromises();
- } else {
- encoder.writeHeaders(ctx, streamId, http2Headers, 0, true, promise);
- }
- } catch (Throwable t) {
- if (promiseAggregator == null) {
- promise.tryFailure(t);
- } else {
- promiseAggregator.setFailure(t);
+ endStream = msg instanceof FullHttpMessage && !((FullHttpMessage) msg).content().isReadable();
+ encoder.writeHeaders(ctx, currentStreamId, http2Headers, 0, endStream, promiseAggregator.newPromise());
+ }
+
+ if (!endStream && msg instanceof HttpContent) {
+ boolean isLastContent = false;
+ Http2Headers trailers = EmptyHttp2Headers.INSTANCE;
+ if (msg instanceof LastHttpContent) {
+ isLastContent = true;
+
+ // Convert any trailing headers.
+ final LastHttpContent lastContent = (LastHttpContent) msg;
+ trailers = HttpUtil.toHttp2Headers(lastContent.trailingHeaders());
}
- } finally {
- if (httpMsgNeedRelease) {
- httpMsg.release();
+
+ // Write the data
+ final ByteBuf content = ((HttpContent) msg).content();
+ endStream = isLastContent && trailers.isEmpty();
+ release = false;
+ encoder.writeData(ctx, currentStreamId, content, 0, endStream, promiseAggregator.newPromise());
+
+ if (!trailers.isEmpty()) {
+ // Write trailing headers.
+ encoder.writeHeaders(ctx, currentStreamId, trailers, 0, true, promiseAggregator.newPromise());
}
}
- } else {
- ctx.write(msg, promise);
+
+ promiseAggregator.doneAllocatingPromises();
+ } catch (Throwable t) {
+ promiseAggregator.setFailure(t);
+ } finally {
+ if (release) {
+ ReferenceCountUtil.release(msg);
+ }
}
}
}
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/HttpUtil.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/HttpUtil.java
index 066f3032a8b..61c7bbb9dbc 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/HttpUtil.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/HttpUtil.java
@@ -29,6 +29,7 @@
import io.netty.handler.codec.http.HttpHeaderUtil;
import io.netty.handler.codec.http.HttpHeaderValues;
import io.netty.handler.codec.http.HttpHeaders;
+import io.netty.handler.codec.http.HttpMessage;
import io.netty.handler.codec.http.HttpMethod;
import io.netty.handler.codec.http.HttpRequest;
import io.netty.handler.codec.http.HttpResponse;
@@ -265,7 +266,7 @@ public static void addHttp2ToHttpHeaders(int streamId, Http2Headers sourceHeader
/**
* Converts the given HTTP/1.x headers into HTTP/2 headers.
*/
- public static Http2Headers toHttp2Headers(FullHttpMessage in) throws Exception {
+ public static Http2Headers toHttp2Headers(HttpMessage in) throws Exception {
final Http2Headers out = new DefaultHttp2Headers();
HttpHeaders inHeaders = in.headers();
if (in instanceof HttpRequest) {
@@ -304,6 +305,16 @@ public static Http2Headers toHttp2Headers(FullHttpMessage in) throws Exception {
}
// Add the HTTP headers which have not been consumed above
+ return out.add(toHttp2Headers(inHeaders));
+ }
+
+ public static Http2Headers toHttp2Headers(HttpHeaders inHeaders) throws Exception {
+ if (inHeaders.isEmpty()) {
+ return EmptyHttp2Headers.INSTANCE;
+ }
+
+ final Http2Headers out = new DefaultHttp2Headers();
+
inHeaders.forEachEntry(new EntryVisitor() {
@Override
public boolean visit(Entry<CharSequence, CharSequence> entry) throws Exception {
| diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/HttpToHttp2ConnectionHandlerTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/HttpToHttp2ConnectionHandlerTest.java
index 04d5f7a7434..9960de21e7d 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/HttpToHttp2ConnectionHandlerTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/HttpToHttp2ConnectionHandlerTest.java
@@ -45,9 +45,14 @@
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;
import io.netty.handler.codec.http.DefaultFullHttpRequest;
+import io.netty.handler.codec.http.DefaultHttpContent;
+import io.netty.handler.codec.http.DefaultHttpRequest;
+import io.netty.handler.codec.http.DefaultLastHttpContent;
import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpHeaders;
+import io.netty.handler.codec.http.HttpRequest;
+import io.netty.handler.codec.http.LastHttpContent;
import io.netty.handler.codec.http2.Http2TestUtil.FrameCountDown;
import io.netty.util.NetUtil;
import io.netty.util.concurrent.Future;
@@ -84,6 +89,7 @@ public class HttpToHttp2ConnectionHandlerTest {
private Channel clientChannel;
private CountDownLatch requestLatch;
private CountDownLatch serverSettingsAckLatch;
+ private CountDownLatch trailersLatch;
private FrameCountDown serverFrameCountDown;
@Before
@@ -104,7 +110,7 @@ public void teardown() throws Exception {
@Test
public void testJustHeadersRequest() throws Exception {
- bootstrapEnv(2, 1);
+ bootstrapEnv(2, 1, 0);
final FullHttpRequest request = new DefaultFullHttpRequest(HTTP_1_1, GET, "/example");
final HttpHeaders httpHeaders = request.headers();
httpHeaders.setInt(HttpUtil.ExtensionHeaderNames.STREAM_ID.text(), 5);
@@ -146,7 +152,7 @@ public Void answer(InvocationOnMock in) throws Throwable {
}
}).when(serverListener).onDataRead(any(ChannelHandlerContext.class), eq(3),
any(ByteBuf.class), eq(0), eq(true));
- bootstrapEnv(3, 1);
+ bootstrapEnv(3, 1, 0);
final FullHttpRequest request = new DefaultFullHttpRequest(HTTP_1_1, POST, "/example",
Unpooled.copiedBuffer(text, UTF_8));
final HttpHeaders httpHeaders = request.headers();
@@ -175,9 +181,127 @@ public Void answer(InvocationOnMock in) throws Throwable {
assertEquals(text, receivedBuffers.get(0));
}
- private void bootstrapEnv(int requestCountDown, int serverSettingsAckCount) throws Exception {
+ @Test
+ public void testRequestWithBodyAndTrailingHeaders() throws Exception {
+ final String text = "foooooogoooo";
+ final List<String> receivedBuffers = Collections.synchronizedList(new ArrayList<String>());
+ doAnswer(new Answer<Void>() {
+ @Override
+ public Void answer(InvocationOnMock in) throws Throwable {
+ receivedBuffers.add(((ByteBuf) in.getArguments()[2]).toString(UTF_8));
+ return null;
+ }
+ }).when(serverListener).onDataRead(any(ChannelHandlerContext.class), eq(3),
+ any(ByteBuf.class), eq(0), eq(false));
+ bootstrapEnv(4, 1, 1);
+ final FullHttpRequest request = new DefaultFullHttpRequest(HTTP_1_1, POST, "/example",
+ Unpooled.copiedBuffer(text, UTF_8));
+ final HttpHeaders httpHeaders = request.headers();
+ httpHeaders.set(HttpHeaderNames.HOST, "http://your_user-name123@www.example.org:5555/example");
+ httpHeaders.add("foo", "goo");
+ httpHeaders.add("foo", "goo2");
+ httpHeaders.add("foo2", "goo2");
+ final Http2Headers http2Headers =
+ new DefaultHttp2Headers().method(as("POST")).path(as("/example"))
+ .authority(as("www.example.org:5555")).scheme(as("http"))
+ .add(as("foo"), as("goo")).add(as("foo"), as("goo2"))
+ .add(as("foo2"), as("goo2"));
+
+ request.trailingHeaders().add("trailing", "bar");
+
+ final Http2Headers http2TrailingHeaders = new DefaultHttp2Headers().add(as("trailing"), as("bar"));
+
+ ChannelPromise writePromise = newPromise();
+ ChannelFuture writeFuture = clientChannel.writeAndFlush(request, writePromise);
+
+ assertTrue(writePromise.awaitUninterruptibly(WAIT_TIME_SECONDS, SECONDS));
+ assertTrue(writePromise.isSuccess());
+ assertTrue(writeFuture.awaitUninterruptibly(WAIT_TIME_SECONDS, SECONDS));
+ assertTrue(writeFuture.isSuccess());
+ awaitRequests();
+ verify(serverListener).onHeadersRead(any(ChannelHandlerContext.class), eq(3), eq(http2Headers), eq(0),
+ anyShort(), anyBoolean(), eq(0), eq(false));
+ verify(serverListener).onDataRead(any(ChannelHandlerContext.class), eq(3), any(ByteBuf.class), eq(0),
+ eq(false));
+ verify(serverListener).onHeadersRead(any(ChannelHandlerContext.class), eq(3), eq(http2TrailingHeaders), eq(0),
+ anyShort(), anyBoolean(), eq(0), eq(true));
+ assertEquals(1, receivedBuffers.size());
+ assertEquals(text, receivedBuffers.get(0));
+ }
+
+ @Test
+ public void testChunkedRequestWithBodyAndTrailingHeaders() throws Exception {
+ final String text = "foooooo";
+ final String text2 = "goooo";
+ final List<String> receivedBuffers = Collections.synchronizedList(new ArrayList<String>());
+ doAnswer(new Answer<Void>() {
+ @Override
+ public Void answer(InvocationOnMock in) throws Throwable {
+ receivedBuffers.add(((ByteBuf) in.getArguments()[2]).toString(UTF_8));
+ return null;
+ }
+ }).when(serverListener).onDataRead(any(ChannelHandlerContext.class), eq(3),
+ any(ByteBuf.class), eq(0), eq(false));
+ bootstrapEnv(4, 1, 1);
+ final HttpRequest request = new DefaultHttpRequest(HTTP_1_1, POST, "/example");
+ final HttpHeaders httpHeaders = request.headers();
+ httpHeaders.set(HttpHeaderNames.HOST, "http://your_user-name123@www.example.org:5555/example");
+ httpHeaders.add(HttpHeaderNames.TRANSFER_ENCODING, "chunked");
+ httpHeaders.add("foo", "goo");
+ httpHeaders.add("foo", "goo2");
+ httpHeaders.add("foo2", "goo2");
+ final Http2Headers http2Headers =
+ new DefaultHttp2Headers().method(as("POST")).path(as("/example"))
+ .authority(as("www.example.org:5555")).scheme(as("http"))
+ .add(as("foo"), as("goo")).add(as("foo"), as("goo2"))
+ .add(as("foo2"), as("goo2"));
+
+ final DefaultHttpContent httpContent = new DefaultHttpContent(Unpooled.copiedBuffer(text, UTF_8));
+ final LastHttpContent lastHttpContent = new DefaultLastHttpContent(Unpooled.copiedBuffer(text2, UTF_8));
+
+ lastHttpContent.trailingHeaders().add("trailing", "bar");
+
+ final Http2Headers http2TrailingHeaders = new DefaultHttp2Headers().add(as("trailing"), as("bar"));
+
+ ChannelPromise writePromise = newPromise();
+ ChannelFuture writeFuture = clientChannel.write(request, writePromise);
+ ChannelPromise contentPromise = newPromise();
+ ChannelFuture contentFuture = clientChannel.write(httpContent, contentPromise);
+ ChannelPromise lastContentPromise = newPromise();
+ ChannelFuture lastContentFuture = clientChannel.write(lastHttpContent, lastContentPromise);
+
+ clientChannel.flush();
+
+ assertTrue(writePromise.awaitUninterruptibly(WAIT_TIME_SECONDS, SECONDS));
+ assertTrue(writePromise.isSuccess());
+ assertTrue(writeFuture.awaitUninterruptibly(WAIT_TIME_SECONDS, SECONDS));
+ assertTrue(writeFuture.isSuccess());
+
+ assertTrue(contentPromise.awaitUninterruptibly(WAIT_TIME_SECONDS, SECONDS));
+ assertTrue(contentPromise.isSuccess());
+ assertTrue(contentFuture.awaitUninterruptibly(WAIT_TIME_SECONDS, SECONDS));
+ assertTrue(contentFuture.isSuccess());
+
+ assertTrue(lastContentPromise.awaitUninterruptibly(WAIT_TIME_SECONDS, SECONDS));
+ assertTrue(lastContentPromise.isSuccess());
+ assertTrue(lastContentFuture.awaitUninterruptibly(WAIT_TIME_SECONDS, SECONDS));
+ assertTrue(lastContentFuture.isSuccess());
+
+ awaitRequests();
+ verify(serverListener).onHeadersRead(any(ChannelHandlerContext.class), eq(3), eq(http2Headers), eq(0),
+ anyShort(), anyBoolean(), eq(0), eq(false));
+ verify(serverListener).onDataRead(any(ChannelHandlerContext.class), eq(3), any(ByteBuf.class), eq(0),
+ eq(false));
+ verify(serverListener).onHeadersRead(any(ChannelHandlerContext.class), eq(3), eq(http2TrailingHeaders), eq(0),
+ anyShort(), anyBoolean(), eq(0), eq(true));
+ assertEquals(1, receivedBuffers.size());
+ assertEquals(text + text2, receivedBuffers.get(0));
+ }
+
+ private void bootstrapEnv(int requestCountDown, int serverSettingsAckCount, int trailersCount) throws Exception {
requestLatch = new CountDownLatch(requestCountDown);
serverSettingsAckLatch = new CountDownLatch(serverSettingsAckCount);
+ trailersLatch = trailersCount == 0 ? null : new CountDownLatch(trailersCount);
sb = new ServerBootstrap();
cb = new Bootstrap();
@@ -188,7 +312,8 @@ private void bootstrapEnv(int requestCountDown, int serverSettingsAckCount) thro
@Override
protected void initChannel(Channel ch) throws Exception {
ChannelPipeline p = ch.pipeline();
- serverFrameCountDown = new FrameCountDown(serverListener, serverSettingsAckLatch, requestLatch);
+ serverFrameCountDown =
+ new FrameCountDown(serverListener, serverSettingsAckLatch, requestLatch, null, trailersLatch);
p.addLast(new HttpToHttp2ConnectionHandler(true, serverFrameCountDown));
}
});
@@ -213,6 +338,10 @@ protected void initChannel(Channel ch) throws Exception {
private void awaitRequests() throws Exception {
assertTrue(requestLatch.await(WAIT_TIME_SECONDS, SECONDS));
+ if (trailersLatch != null) {
+ assertTrue(trailersLatch.await(WAIT_TIME_SECONDS, SECONDS));
+ }
+ assertTrue(serverSettingsAckLatch.await(WAIT_TIME_SECONDS, SECONDS));
}
private ChannelHandlerContext ctx() {
| train | train | 2015-07-20T22:47:00 | 2015-07-07T09:29:41Z | blucas | val |
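The patch in the row above decides where the HTTP/2 `END_STREAM` flag goes: on the HEADERS frame for a bodiless `FullHttpMessage`, on the DATA frame for the last content chunk when there are no trailers, otherwise on the trailing HEADERS frame. A pure-function restatement of those two decisions; the boolean parameters stand in for the message-type and trailer checks and are invented for illustration, not Netty API:

```java
public class EndStreamRules {
    // HEADERS carries END_STREAM only for a full message with no readable body.
    static boolean headersEndStream(boolean isFullMessage, boolean contentReadable) {
        return isFullMessage && !contentReadable;
    }

    // DATA carries END_STREAM only for the last content chunk with empty
    // trailers; otherwise the trailing HEADERS frame ends the stream.
    static boolean dataEndStream(boolean isLastContent, boolean trailersEmpty) {
        return isLastContent && trailersEmpty;
    }
}
```

Keeping these checks separate is what lets the handler accept streamed `HttpContent` chunks instead of requiring a `FullHttpMessage` up front.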
netty/netty/3970_3984 | netty/netty | netty/netty/3970 | netty/netty/3984 | [
"timestamp(timedelta=57.0, similarity=0.8544424543669463)"
] | 3ec02bf7460c3fda4d382bdf4c3b9ef4c67704e1 | 0b938ec911b01027ae1d8b566c933cc343d97d1d | [
"Just do something like this:\n\n``` java\nclass IntObjectHashMap implements Map<Integer, Object> {\n public Object put(Integer key, Object value) {\n put(key.intValue(), value);\n }\n\n public Object put(int key, Object value) {\n // implementation goes here.\n }\n}\n```\n\nThat way one can use this as a... | [
"~~@nmittler those cast's aren't necessary no?~~\n",
"@nmittler I would be in favor of getting rid of `PEntry`. Don't think it's really worth it. WDYT? // @normanmaurer @Scottmitch \n",
"~~@nmittler can we do something like `containsKey(@primitiveType@)` i.e. `containsKey(int key)`?~~\n",
"Yes they are necess... | 2015-07-13T21:57:09Z | [
"improvement"
] | Have (Int,Long,Char,...)ObjectMap implement Map interface. | I just noticed that Netty's primitive hash map implementations don't implement the JDK's `Map` interface. I think they would be much more useful if they did.
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameReader.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameWriter.java",
"codec-http2/src/main/java/io/netty/handler/codec/... | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameReader.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameWriter.java",
"codec-http2/src/main/java/io/netty/handler/codec/... | [
"codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowControllerTest.java",
"codec-http2/src/test/java/io/netty/handler/codec/http2/Http2SettingsTest.java",
"common/src/test/templates/io.netty.util.collection/KObjectHashMapTest.template"
] | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
index 71a574f6cc1..c437282f62e 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
@@ -387,8 +387,7 @@ public final int numChildren() {
@Override
public Http2Stream forEachChild(Http2StreamVisitor visitor) throws Http2Exception {
- for (IntObjectHashMap.Entry<DefaultStream> entry : children.entries()) {
- Http2Stream stream = entry.value();
+ for (DefaultStream stream : children.values()) {
if (!visitor.visit(stream)) {
return stream;
}
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameReader.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameReader.java
index 15568344ac4..035fc949e81 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameReader.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameReader.java
@@ -484,7 +484,7 @@ private void readSettingsFrame(ChannelHandlerContext ctx, ByteBuf payload,
char id = (char) payload.readUnsignedShort();
long value = payload.readUnsignedInt();
try {
- settings.put(id, value);
+ settings.put(id, Long.valueOf(value));
} catch (IllegalArgumentException e) {
switch(id) {
case SETTINGS_MAX_FRAME_SIZE:
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameWriter.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameWriter.java
index 036537aa2d9..20fc26baa29 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameWriter.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameWriter.java
@@ -213,7 +213,7 @@ public ChannelFuture writeSettings(ChannelHandlerContext ctx, Http2Settings sett
int payloadLength = SETTING_ENTRY_LENGTH * settings.size();
ByteBuf buf = ctx.alloc().buffer(FRAME_HEADER_LENGTH + settings.size() * SETTING_ENTRY_LENGTH);
writeFrameHeaderInternal(buf, payloadLength, SETTINGS, new Http2Flags(), 0);
- for (CharObjectMap.Entry<Long> entry : settings.entries()) {
+ for (CharObjectMap.PrimitiveEntry<Long> entry : settings.entries()) {
writeUnsignedShort(entry.key(), buf);
writeUnsignedInt(entry.value(), buf);
}
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ClientUpgradeCodec.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ClientUpgradeCodec.java
index 0cdb65b9416..dc13efb5eca 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ClientUpgradeCodec.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ClientUpgradeCodec.java
@@ -14,28 +14,28 @@
*/
package io.netty.handler.codec.http2;
+import static io.netty.handler.codec.base64.Base64Dialect.URL_SAFE;
+import static io.netty.handler.codec.http2.Http2CodecUtil.HTTP_UPGRADE_PROTOCOL_NAME;
+import static io.netty.handler.codec.http2.Http2CodecUtil.HTTP_UPGRADE_SETTINGS_HEADER;
+import static io.netty.handler.codec.http2.Http2CodecUtil.SETTING_ENTRY_LENGTH;
+import static io.netty.handler.codec.http2.Http2CodecUtil.writeUnsignedInt;
+import static io.netty.handler.codec.http2.Http2CodecUtil.writeUnsignedShort;
+import static io.netty.util.CharsetUtil.UTF_8;
+import static io.netty.util.ReferenceCountUtil.release;
+import static io.netty.util.internal.ObjectUtil.checkNotNull;
+
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.base64.Base64;
import io.netty.handler.codec.http.FullHttpResponse;
import io.netty.handler.codec.http.HttpClientUpgradeHandler;
import io.netty.handler.codec.http.HttpRequest;
-import io.netty.util.collection.CharObjectHashMap;
+import io.netty.util.collection.CharObjectMap;
import java.util.Collection;
import java.util.Collections;
import java.util.List;
-import static io.netty.handler.codec.base64.Base64Dialect.URL_SAFE;
-import static io.netty.handler.codec.http2.Http2CodecUtil.HTTP_UPGRADE_PROTOCOL_NAME;
-import static io.netty.handler.codec.http2.Http2CodecUtil.HTTP_UPGRADE_SETTINGS_HEADER;
-import static io.netty.handler.codec.http2.Http2CodecUtil.SETTING_ENTRY_LENGTH;
-import static io.netty.handler.codec.http2.Http2CodecUtil.writeUnsignedInt;
-import static io.netty.handler.codec.http2.Http2CodecUtil.writeUnsignedShort;
-import static io.netty.util.CharsetUtil.UTF_8;
-import static io.netty.util.ReferenceCountUtil.release;
-import static io.netty.util.internal.ObjectUtil.checkNotNull;
-
/**
* Client-side cleartext upgrade codec from HTTP to HTTP/2.
*/
@@ -105,7 +105,7 @@ private String getSettingsHeaderValue(ChannelHandlerContext ctx) {
// Serialize the payload of the SETTINGS frame.
int payloadLength = SETTING_ENTRY_LENGTH * settings.size();
buf = ctx.alloc().buffer(payloadLength);
- for (CharObjectHashMap.Entry<Long> entry : settings.entries()) {
+ for (CharObjectMap.PrimitiveEntry<Long> entry : settings.entries()) {
writeUnsignedShort(entry.key(), buf);
writeUnsignedInt(entry.value(), buf);
}
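For context on what this codec serializes: per RFC 7540 §3.2.1 the `HTTP2-Settings` upgrade header carries the SETTINGS frame payload — a 16-bit identifier plus a 32-bit value per setting — base64url-encoded without padding. A self-contained sketch using the JDK encoder (this is not the Netty implementation, which writes into a pooled `ByteBuf` and uses Netty's own `Base64` with `URL_SAFE` dialect):

```java
import java.io.ByteArrayOutputStream;
import java.util.Base64;
import java.util.Collections;
import java.util.Map;

public class UpgradeHeaderSketch {
    // Serialize each (id, value) pair as 2 + 4 big-endian bytes, then
    // base64url-encode the payload without padding (RFC 7540 section 3.2.1).
    static String settingsHeaderValue(Map<Character, Long> settings) {
        ByteArrayOutputStream payload = new ByteArrayOutputStream(settings.size() * 6);
        for (Map.Entry<Character, Long> entry : settings.entrySet()) {
            int id = entry.getKey();
            long value = entry.getValue();
            payload.write(id >>> 8);
            payload.write(id);
            payload.write((int) (value >>> 24));
            payload.write((int) (value >>> 16));
            payload.write((int) (value >>> 8));
            payload.write((int) value);
        }
        return Base64.getUrlEncoder().withoutPadding().encodeToString(payload.toByteArray());
    }

    public static void main(String[] args) {
        // SETTINGS_MAX_CONCURRENT_STREAMS (id 0x3) = 100
        System.out.println(settingsHeaderValue(
                Collections.singletonMap((char) 0x3, 100L))); // AAMAAABk
    }
}
```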
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Settings.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Settings.java
index 5aaca0555f6..da719e4bca8 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Settings.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Settings.java
@@ -46,6 +46,8 @@ public final class Http2Settings extends CharObjectHashMap<Long> {
* the standard settings will not cause the map capacity to change.
*/
private static final int DEFAULT_CAPACITY = (int) (NUM_STANDARD_SETTINGS / DEFAULT_LOAD_FACTOR) + 1;
+ private static final Long FALSE = 0L;
+ private static final Long TRUE = 1L;
public Http2Settings() {
this(DEFAULT_CAPACITY);
@@ -84,7 +86,7 @@ public Long headerTableSize() {
* @throws IllegalArgumentException if verification of the setting fails.
*/
public Http2Settings headerTableSize(int value) {
- put(SETTINGS_HEADER_TABLE_SIZE, (long) value);
+ put(SETTINGS_HEADER_TABLE_SIZE, Long.valueOf(value));
return this;
}
@@ -96,14 +98,14 @@ public Boolean pushEnabled() {
if (value == null) {
return null;
}
- return value != 0L;
+ return TRUE.equals(value);
}
/**
* Sets the {@code SETTINGS_ENABLE_PUSH} value.
*/
public Http2Settings pushEnabled(boolean enabled) {
- put(SETTINGS_ENABLE_PUSH, enabled ? 1L : 0L);
+ put(SETTINGS_ENABLE_PUSH, enabled ? TRUE : FALSE);
return this;
}
@@ -120,7 +122,7 @@ public Long maxConcurrentStreams() {
* @throws IllegalArgumentException if verification of the setting fails.
*/
public Http2Settings maxConcurrentStreams(long value) {
- put(SETTINGS_MAX_CONCURRENT_STREAMS, value);
+ put(SETTINGS_MAX_CONCURRENT_STREAMS, Long.valueOf(value));
return this;
}
@@ -137,7 +139,7 @@ public Integer initialWindowSize() {
* @throws IllegalArgumentException if verification of the setting fails.
*/
public Http2Settings initialWindowSize(int value) {
- put(SETTINGS_INITIAL_WINDOW_SIZE, (long) value);
+ put(SETTINGS_INITIAL_WINDOW_SIZE, Long.valueOf(value));
return this;
}
@@ -154,7 +156,7 @@ public Integer maxFrameSize() {
* @throws IllegalArgumentException if verification of the setting fails.
*/
public Http2Settings maxFrameSize(int value) {
- put(SETTINGS_MAX_FRAME_SIZE, (long) value);
+ put(SETTINGS_MAX_FRAME_SIZE, Long.valueOf(value));
return this;
}
@@ -183,7 +185,7 @@ public Http2Settings maxHeaderListSize(int value) {
value = Integer.MAX_VALUE;
}
- put(SETTINGS_MAX_HEADER_LIST_SIZE, (long) value);
+ put(SETTINGS_MAX_HEADER_LIST_SIZE, Long.valueOf(value));
return this;
}
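The cached `TRUE`/`FALSE` constants box `1L`/`0L` once instead of on every call, and `TRUE.equals(value)` reads the flag without unboxing. One hedged observation: under this comparison only an exact value of 1 reads as enabled, whereas the previous `value != 0L` treated any non-zero value as enabled (RFC 7540 permits only 0 and 1 for `SETTINGS_ENABLE_PUSH`, so the difference matters only for invalid input). A minimal sketch of the pattern outside Netty:

```java
public class PushFlagSketch {
    private static final Long FALSE = 0L; // boxed once, reused forever
    private static final Long TRUE = 1L;

    // Mirrors pushEnabled(): null when the setting is absent, otherwise
    // compares against the cached boxed constant without unboxing.
    static Boolean pushEnabled(Long value) {
        if (value == null) {
            return null;
        }
        return TRUE.equals(value);
    }

    public static void main(String[] args) {
        System.out.println(pushEnabled(1L));   // true
        System.out.println(pushEnabled(0L));   // false
        System.out.println(pushEnabled(null)); // null
    }
}
```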
diff --git a/common/src/main/java/io/netty/util/collection/PrimitiveCollections.java b/common/src/main/java/io/netty/util/collection/PrimitiveCollections.java
index 99030708834..4cf1f53623b 100644
--- a/common/src/main/java/io/netty/util/collection/PrimitiveCollections.java
+++ b/common/src/main/java/io/netty/util/collection/PrimitiveCollections.java
@@ -14,12 +14,12 @@
*/
package io.netty.util.collection;
-import io.netty.util.internal.EmptyArrays;
-
import java.util.Collection;
import java.util.Collections;
import java.util.Iterator;
+import java.util.Map;
import java.util.NoSuchElementException;
+import java.util.Set;
/**
* Utility methods for primitive collections.
@@ -61,11 +61,6 @@ public Object put(int key, Object value) {
throw new UnsupportedOperationException("put");
}
- @Override
- public void putAll(IntObjectMap<Object> sourceMap) {
- throw new UnsupportedOperationException("putAll");
- }
-
@Override
public Object remove(int key) {
return null;
@@ -81,11 +76,21 @@ public boolean isEmpty() {
return true;
}
+ @Override
+ public boolean containsKey(Object key) {
+ return false;
+ }
+
@Override
public void clear() {
// Do nothing.
}
+ @Override
+ public Set<Integer> keySet() {
+ return Collections.emptySet();
+ }
+
@Override
public boolean containsKey(int key) {
return false;
@@ -97,24 +102,39 @@ public boolean containsValue(Object value) {
}
@Override
- public Iterable<Entry<Object>> entries() {
+ public Iterable<PrimitiveEntry<Object>> entries() {
return Collections.emptySet();
}
@Override
- public int[] keys() {
- return EmptyArrays.EMPTY_INTS;
+ public Object get(Object key) {
+ return null;
}
@Override
- public Object[] values(Class<Object> clazz) {
- return EmptyArrays.EMPTY_OBJECTS;
+ public Object put(Integer key, Object value) {
+ throw new UnsupportedOperationException();
+ }
+
+ @Override
+ public Object remove(Object key) {
+ return null;
+ }
+
+ @Override
+ public void putAll(Map<? extends Integer, ?> m) {
+ throw new UnsupportedOperationException();
}
@Override
public Collection<Object> values() {
return Collections.emptyList();
}
+
+ @Override
+ public Set<Entry<Integer, Object>> entrySet() {
+ return Collections.emptySet();
+ }
}
/**
@@ -122,9 +142,12 @@ public Collection<Object> values() {
*
* @param <V> the value type stored in the map.
*/
- private static final class UnmodifiableIntObjectMap<V> implements IntObjectMap<V>,
- Iterable<IntObjectMap.Entry<V>> {
- final IntObjectMap<V> map;
+ private static final class UnmodifiableIntObjectMap<V> implements IntObjectMap<V> {
+ private final IntObjectMap<V> map;
+ private Set<Integer> keySet;
+ private Set<Entry<Integer, V>> entrySet;
+ private Collection<V> values;
+ private Iterable<PrimitiveEntry<V>> entries;
UnmodifiableIntObjectMap(IntObjectMap<V> map) {
this.map = map;
@@ -140,11 +163,6 @@ public V put(int key, V value) {
throw new UnsupportedOperationException("put");
}
- @Override
- public void putAll(IntObjectMap<V> sourceMap) {
- throw new UnsupportedOperationException("putAll");
- }
-
@Override
public V remove(int key) {
throw new UnsupportedOperationException("remove");
@@ -171,42 +189,80 @@ public boolean containsKey(int key) {
}
@Override
- public boolean containsValue(V value) {
+ public boolean containsValue(Object value) {
return map.containsValue(value);
}
@Override
- public Iterable<Entry<V>> entries() {
- return this;
+ public boolean containsKey(Object key) {
+ return map.containsKey(key);
+ }
+
+ @Override
+ public V get(Object key) {
+ return map.get(key);
+ }
+
+ @Override
+ public V put(Integer key, V value) {
+ throw new UnsupportedOperationException("put");
+ }
+
+ @Override
+ public V remove(Object key) {
+ throw new UnsupportedOperationException("remove");
}
@Override
- public Iterator<Entry<V>> iterator() {
- return new IteratorImpl(map.entries().iterator());
+ public void putAll(Map<? extends Integer, ? extends V> m) {
+ throw new UnsupportedOperationException("putAll");
}
@Override
- public int[] keys() {
- return map.keys();
+ public Iterable<PrimitiveEntry<V>> entries() {
+ if (entries == null) {
+ entries = new Iterable<PrimitiveEntry<V>>() {
+ @Override
+ public Iterator<PrimitiveEntry<V>> iterator() {
+ return new IteratorImpl(map.entries().iterator());
+ }
+ };
+ }
+
+ return entries;
}
@Override
- public V[] values(Class<V> clazz) {
- return map.values(clazz);
+ public Set<Integer> keySet() {
+ if (keySet == null) {
+ keySet = Collections.unmodifiableSet(map.keySet());
+ }
+ return keySet;
+ }
+
+ @Override
+ public Set<Entry<Integer, V>> entrySet() {
+ if (entrySet == null) {
+ entrySet = Collections.unmodifiableSet(map.entrySet());
+ }
+ return entrySet;
}
@Override
public Collection<V> values() {
- return map.values();
+ if (values == null) {
+ values = Collections.unmodifiableCollection(map.values());
+ }
+ return values;
}
/**
* Unmodifiable wrapper for an iterator.
*/
- private class IteratorImpl implements Iterator<Entry<V>> {
- final Iterator<Entry<V>> iter;
+ private class IteratorImpl implements Iterator<PrimitiveEntry<V>> {
+ final Iterator<PrimitiveEntry<V>> iter;
- IteratorImpl(Iterator<Entry<V>> iter) {
+ IteratorImpl(Iterator<PrimitiveEntry<V>> iter) {
this.iter = iter;
}
@@ -216,7 +272,7 @@ public boolean hasNext() {
}
@Override
- public Entry<V> next() {
+ public PrimitiveEntry<V> next() {
if (!hasNext()) {
throw new NoSuchElementException();
}
@@ -232,10 +288,10 @@ public void remove() {
/**
* Unmodifiable wrapper for an entry.
*/
- private class EntryImpl implements Entry<V> {
- final Entry<V> entry;
+ private class EntryImpl implements PrimitiveEntry<V> {
+ final PrimitiveEntry<V> entry;
- EntryImpl(Entry<V> entry) {
+ EntryImpl(PrimitiveEntry<V> entry) {
this.entry = entry;
}
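The unmodifiable wrapper above creates each view (`keySet`, `entrySet`, `values`, `entries`) lazily on first access and caches it, which is safe because the views are stateless and always reflect the backing map. The same idiom in plain Java (class and method names here are illustrative, not Netty's):

```java
import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class LazyViewsSketch<K, V> {
    private final Map<K, V> map;
    private Set<K> keySet;          // created on first use, then reused
    private Collection<V> values;

    public LazyViewsSketch(Map<K, V> map) {
        this.map = map;
    }

    public Set<K> keySet() {
        if (keySet == null) {
            keySet = Collections.unmodifiableSet(map.keySet());
        }
        return keySet;
    }

    public Collection<V> values() {
        if (values == null) {
            values = Collections.unmodifiableCollection(map.values());
        }
        return values;
    }

    public static void main(String[] args) {
        Map<String, Integer> backing = new HashMap<>();
        backing.put("a", 1);
        LazyViewsSketch<String, Integer> views = new LazyViewsSketch<>(backing);
        System.out.println(views.keySet() == views.keySet()); // true: cached
        backing.put("b", 2);
        System.out.println(views.keySet().size());            // 2: view stays live
    }
}
```

Like `Collections.unmodifiableMap`, the lazy initialization is not synchronized; a race at worst creates two equivalent throwaway wrappers, which appears to be the same trade-off the wrapper above accepts.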
diff --git a/common/src/main/script/codegen.groovy b/common/src/main/script/codegen.groovy
index 0fd512a4f5c..4ae4636e3d9 100644
--- a/common/src/main/script/codegen.groovy
+++ b/common/src/main/script/codegen.groovy
@@ -9,17 +9,19 @@ templateDirs.eachWithIndex { templateDir, i ->
void convertSources(String templateDir, String outputDir) {
String[] keyPrimitives = ["byte", "char", "short", "int", "long"]
- String[] keyObjects = ["Byte", "Character", "Short", "Integer", "Long"];
+ String[] keyObjects = ["Byte", "Character", "Short", "Integer", "Long"]
+ String[] keyNumberMethod = ["byteValue", "charValue", "shortValue", "intValue", "longValue"]
keyPrimitives.eachWithIndex { keyPrimitive, i ->
- convertTemplates templateDir, outputDir, keyPrimitive, keyObjects[i]
+ convertTemplates templateDir, outputDir, keyPrimitive, keyObjects[i], keyNumberMethod[i]
}
}
void convertTemplates(String templateDir,
String outputDir,
String keyPrimitive,
- String keyObject) {
+ String keyObject,
+ String keyNumberMethod) {
def keyName = keyPrimitive.capitalize()
def replaceFrom = "(^.*)K([^.]+)\\.template\$"
def replaceTo = "\\1" + keyName + "\\2.java"
@@ -32,6 +34,7 @@ void convertTemplates(String templateDir,
filter(token: "K", value: keyName)
filter(token: "k", value: keyPrimitive)
filter(token: "O", value: keyObject)
+ filter(token: "KEY_NUMBER_METHOD", value: keyNumberMethod)
filter(token: "HASH_CODE", value: hashCodeFn)
}
regexpmapper(from: replaceFrom, to: replaceTo)
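The new `@KEY_NUMBER_METHOD@` token lets the templates unbox a boxed key with the matching accessor per primitive type (`intValue` for `int`, `charValue` for `char`, and so on). The Ant `filter` calls amount to plain token replacement; a rough Java equivalent of one expansion pass (method names are illustrative, this is not part of the build):

```java
public class TemplateExpandSketch {
    // One expansion pass: the same four substitutions the Ant filters apply.
    static String expand(String template, String keyPrimitive, String keyObject,
                         String keyNumberMethod) {
        String keyName = Character.toUpperCase(keyPrimitive.charAt(0))
                + keyPrimitive.substring(1);
        return template
                .replace("@K@", keyName)              // safe: token needs trailing '@'
                .replace("@k@", keyPrimitive)
                .replace("@O@", keyObject)
                .replace("@KEY_NUMBER_METHOD@", keyNumberMethod);
    }

    public static void main(String[] args) {
        String line = "private @k@ objectToKey(Object key) "
                + "{ return (@k@) ((@O@) key).@KEY_NUMBER_METHOD@(); }";
        System.out.println(expand(line, "int", "Integer", "intValue"));
        // -> private int objectToKey(Object key) { return (int) ((Integer) key).intValue(); }
    }
}
```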
diff --git a/common/src/main/templates/io/netty/util/collection/KObjectHashMap.template b/common/src/main/templates/io/netty/util/collection/KObjectHashMap.template
index 45cb03f8caf..9636e321405 100644
--- a/common/src/main/templates/io/netty/util/collection/KObjectHashMap.template
+++ b/common/src/main/templates/io/netty/util/collection/KObjectHashMap.template
@@ -17,12 +17,14 @@ package io.netty.util.collection;
import static io.netty.util.internal.MathUtil.findNextPositivePowerOfTwo;
-import java.lang.reflect.Array;
import java.util.AbstractCollection;
+import java.util.AbstractSet;
import java.util.Arrays;
import java.util.Collection;
import java.util.Iterator;
+import java.util.Map;
import java.util.NoSuchElementException;
+import java.util.Set;
/**
* A hash map implementation of {@link @K@ObjectMap} that uses open addressing for keys.
@@ -32,7 +34,7 @@ import java.util.NoSuchElementException;
*
* @param <V> The value type stored in the map.
*/
-public class @K@ObjectHashMap<V> implements @K@ObjectMap<V>, Iterable<@K@ObjectMap.Entry<V>> {
+public class @K@ObjectHashMap<V> implements @K@ObjectMap<V> {
/** Default initial capacity. Used if not specified in the constructor */
public static final int DEFAULT_CAPACITY = 8;
@@ -57,6 +59,15 @@ public class @K@ObjectHashMap<V> implements @K@ObjectMap<V>, Iterable<@K@ObjectM
private int size;
private int mask;
+ private final Set<@O@> keySet = new KeySet();
+ private final Set<Entry<@O@, V>> entrySet = new EntrySet();
+ private final Iterable<PrimitiveEntry<V>> entries = new Iterable<PrimitiveEntry<V>>() {
+ @Override
+ public Iterator<PrimitiveEntry<V>> iterator() {
+ return new PrimitiveIterator();
+ }
+ };
+
public @K@ObjectHashMap() {
this(DEFAULT_CAPACITY, DEFAULT_LOAD_FACTOR);
}
@@ -139,9 +150,10 @@ public class @K@ObjectHashMap<V> implements @K@ObjectMap<V>, Iterable<@K@ObjectM
}
@Override
- public void putAll(@K@ObjectMap<V> sourceMap) {
- if (sourceMap instanceof IntObjectHashMap) {
+ public void putAll(Map<? extends @O@, ? extends V> sourceMap) {
+ if (sourceMap instanceof @K@ObjectHashMap) {
// Optimization - iterate through the arrays.
+ @SuppressWarnings("unchecked")
@K@ObjectHashMap<V> source = (@K@ObjectHashMap<V>) sourceMap;
for (int i = 0; i < source.values.length; ++i) {
V sourceValue = source.values[i];
@@ -153,8 +165,8 @@ public class @K@ObjectHashMap<V> implements @K@ObjectMap<V>, Iterable<@K@ObjectM
}
// Otherwise, just add each entry.
- for (Entry<V> entry : sourceMap.entries()) {
- put(entry.key(), entry.value());
+ for (Entry<? extends @O@, ? extends V> entry : sourceMap.entrySet()) {
+ put(entry.getKey(), entry.getValue());
}
}
@@ -193,8 +205,9 @@ public class @K@ObjectHashMap<V> implements @K@ObjectMap<V>, Iterable<@K@ObjectM
}
@Override
- public boolean containsValue(V value) {
- V v1 = toInternal(value);
+ public boolean containsValue(Object value) {
+ @SuppressWarnings("unchecked")
+ V v1 = toInternal((V) value);
for (V v2 : values) {
// The map supports null values; this will be matched as NULL_VALUE.equals(NULL_VALUE).
if (v2 != null && v2.equals(v1)) {
@@ -205,38 +218,8 @@ public class @K@ObjectHashMap<V> implements @K@ObjectMap<V>, Iterable<@K@ObjectM
}
@Override
- public Iterable<Entry<V>> entries() {
- return this;
- }
-
- @Override
- public Iterator<Entry<V>> iterator() {
- return new IteratorImpl();
- }
-
- @Override
- public @k@[] keys() {
- @k@[] outKeys = new @k@[size()];
- int targetIx = 0;
- for (int i = 0; i < values.length; ++i) {
- if (values[i] != null) {
- outKeys[targetIx++] = keys[i];
- }
- }
- return outKeys;
- }
-
- @Override
- public V[] values(Class<V> clazz) {
- @SuppressWarnings("unchecked")
- V[] outValues = (V[]) Array.newInstance(clazz, size());
- int targetIx = 0;
- for (V value : values) {
- if (value != null) {
- outValues[targetIx++] = value;
- }
- }
- return outValues;
+ public Iterable<PrimitiveEntry<V>> entries() {
+ return entries;
}
@Override
@@ -245,7 +228,8 @@ public class @K@ObjectHashMap<V> implements @K@ObjectMap<V>, Iterable<@K@ObjectM
@Override
public Iterator<V> iterator() {
return new Iterator<V>() {
- final Iterator<Entry<V>> iter = @K@ObjectHashMap.this.iterator();
+ final PrimitiveIterator iter = new PrimitiveIterator();
+
@Override
public boolean hasNext() {
return iter.hasNext();
@@ -319,6 +303,40 @@ public class @K@ObjectHashMap<V> implements @K@ObjectMap<V>, Iterable<@K@ObjectM
return true;
}
+ @Override
+ public boolean containsKey(Object key) {
+ return containsKey(objectToKey(key));
+ }
+
+ @Override
+ public V get(Object key) {
+ return get(objectToKey(key));
+ }
+
+ @Override
+ public V put(@O@ key, V value) {
+ return put(objectToKey(key), value);
+ }
+
+ @Override
+ public V remove(Object key) {
+ return remove(objectToKey(key));
+ }
+
+ @Override
+ public Set<@O@> keySet() {
+ return keySet;
+ }
+
+ @Override
+ public Set<Entry<@O@, V>> entrySet() {
+ return entrySet;
+ }
+
+ private @k@ objectToKey(Object key) {
+ return (@k@) ((@O@) key).@KEY_NUMBER_METHOD@();
+ }
+
/**
* Locates the index for the given key. This method probes using double hashing.
*
@@ -447,7 +465,7 @@ public class @K@ObjectHashMap<V> implements @K@ObjectMap<V>, Iterable<@K@ObjectM
for (;;) {
if (values[index] == null) {
keys[index] = oldKey;
- values[index] = toInternal(oldVal);
+ values[index] = oldVal;
break;
}
@@ -458,10 +476,115 @@ public class @K@ObjectHashMap<V> implements @K@ObjectMap<V>, Iterable<@K@ObjectM
}
}
+ @Override
+ public String toString() {
+ if (isEmpty()) {
+ return "{}";
+ }
+ StringBuilder sb = new StringBuilder(4 * size);
+ sb.append('{');
+ boolean first = true;
+ for (int i = 0; i < values.length; ++i) {
+ V value = values[i];
+ if (value != null) {
+ if (!first) {
+ sb.append(", ");
+ }
+ sb.append(keyToString(keys[i])).append('=').append(value == this ? "(this Map)" :
+ toExternal(value));
+ first = false;
+ }
+ }
+ return sb.append('}').toString();
+ }
+
+ /**
+ * Helper method called by {@link #toString()} in order to convert a single map key into a string.
+ * This is protected to allow subclasses to override the appearance of a given key.
+ */
+ protected String keyToString(@k@ key) {
+ return @O@.toString(key);
+ }
+
+ /**
+ * Set implementation for iterating over the entries of the map.
+ */
+ private final class EntrySet extends AbstractSet<Entry<@O@, V>> {
+ @Override
+ public Iterator<Entry<@O@, V>> iterator() {
+ return new MapIterator();
+ }
+
+ @Override
+ public int size() {
+ return @K@ObjectHashMap.this.size();
+ }
+ }
+
/**
- * Iterator for traversing the entries in this map.
+ * Set implementation for iterating over the keys.
*/
- private final class IteratorImpl implements Iterator<Entry<V>>, Entry<V> {
+ private final class KeySet extends AbstractSet<@O@> {
+ @Override
+ public int size() {
+ return @K@ObjectHashMap.this.size();
+ }
+
+ @Override
+ public boolean contains(Object o) {
+ return @K@ObjectHashMap.this.containsKey(o);
+ }
+
+ @Override
+ public boolean remove(Object o) {
+ return @K@ObjectHashMap.this.remove(o) != null;
+ }
+
+ @Override
+ public boolean retainAll(Collection<?> retainedKeys) {
+ boolean changed = false;
+ for(Iterator<PrimitiveEntry<V>> iter = entries().iterator(); iter.hasNext(); ) {
+ PrimitiveEntry<V> entry = iter.next();
+ if (!retainedKeys.contains(entry.key())) {
+ changed = true;
+ iter.remove();
+ }
+ }
+ return changed;
+ }
+
+ @Override
+ public void clear() {
+ @K@ObjectHashMap.this.clear();
+ }
+
+ @Override
+ public Iterator<@O@> iterator() {
+ return new Iterator<@O@>() {
+ private final Iterator<Entry<@O@, V>> iter = entrySet.iterator();
+
+ @Override
+ public boolean hasNext() {
+ return iter.hasNext();
+ }
+
+ @Override
+ public @O@ next() {
+ return iter.next().getKey();
+ }
+
+ @Override
+ public void remove() {
+ iter.remove();
+ }
+ };
+ }
+ }
+
+ /**
+ * Iterator over primitive entries. Entry key/values are overwritten by each call to {@link #next()}.
+ */
+ private final class PrimitiveIterator implements Iterator<PrimitiveEntry<V>>, PrimitiveEntry<V> {
private int prevIndex = -1;
private int nextIndex = -1;
private int entryIndex = -1;
@@ -483,7 +606,7 @@ public class @K@ObjectHashMap<V> implements @K@ObjectMap<V>, Iterable<@K@ObjectM
}
@Override
- public Entry<V> next() {
+ public PrimitiveEntry<V> next() {
if (!hasNext()) {
throw new NoSuchElementException();
}
@@ -524,26 +647,68 @@ public class @K@ObjectHashMap<V> implements @K@ObjectMap<V>, Iterable<@K@ObjectM
}
}
- @Override
- public String toString() {
- if (size == 0) {
- return "{}";
+ /**
+ * Iterator used by the {@link Map} interface.
+ */
+ private final class MapIterator implements Iterator<Entry<@O@, V>> {
+ private final PrimitiveIterator iter = new PrimitiveIterator();
+
+ @Override
+ public boolean hasNext() {
+ return iter.hasNext();
}
- StringBuilder sb = new StringBuilder(4 * size);
- for (int i = 0; i < values.length; ++i) {
- V value = values[i];
- if (value != null) {
- sb.append(sb.length() == 0 ? "{" : ", ");
- sb.append(keyToString(keys[i])).append('=').append(value == this ? "(this Map)" : value);
+
+ @Override
+ public Entry<@O@, V> next() {
+ if (!hasNext()) {
+ throw new NoSuchElementException();
}
+
+ iter.next();
+
+ return new MapEntry(iter.entryIndex);
+ }
+
+ @Override
+ public void remove() {
+ iter.remove();
}
- return sb.append('}').toString();
}
/**
- * Helper method called by {@link #toString()} in order to convert a single map key into a string.
+ * A single entry in the map.
*/
- protected String keyToString(@k@ key) {
- return @O@.toString(key);
+ final class MapEntry implements Entry<@O@, V> {
+ private final int entryIndex;
+
+ MapEntry(int entryIndex) {
+ this.entryIndex = entryIndex;
+ }
+
+ @Override
+ public @O@ getKey() {
+ verifyExists();
+ return keys[entryIndex];
+ }
+
+ @Override
+ public V getValue() {
+ verifyExists();
+ return toExternal(values[entryIndex]);
+ }
+
+ @Override
+ public V setValue(V value) {
+ verifyExists();
+ V prevValue = toExternal(values[entryIndex]);
+ values[entryIndex] = toInternal(value);
+ return prevValue;
+ }
+
+ private void verifyExists() {
+ if (values[entryIndex] == null) {
+ throw new IllegalStateException("The map entry has been removed");
+ }
+ }
}
}
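The generated map now exposes both APIs: primitive overloads for hot paths, plus the `java.util.Map` methods, which funnel through the `objectToKey` cast added above. A stripped-down illustration of that bridge (backed here by a plain `HashMap` rather than open addressing; names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class IntKeyBridgeSketch<V> {
    private final Map<Integer, V> storage = new HashMap<>(); // stand-in storage

    // Primitive fast path — no boxing visible to the caller.
    public V put(int key, V value) {
        return storage.put(key, value);
    }

    public V get(int key) {
        return storage.get(key);
    }

    // Map-style entry points delegate to the primitive overloads.
    public V get(Object key) {
        return get(objectToKey(key));
    }

    public V remove(Object key) {
        return storage.remove(objectToKey(key));
    }

    private int objectToKey(Object key) {
        // Throws ClassCastException for foreign key types, as in the template.
        return ((Integer) key).intValue();
    }

    public static void main(String[] args) {
        IntKeyBridgeSketch<String> map = new IntKeyBridgeSketch<>();
        map.put(7, "seven");
        System.out.println(map.get(7));                            // primitive path
        System.out.println(map.get((Object) Integer.valueOf(7)));  // boxed path
    }
}
```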
diff --git a/common/src/main/templates/io/netty/util/collection/KObjectMap.template b/common/src/main/templates/io/netty/util/collection/KObjectMap.template
index 3d79c81facb..19466daf8e2 100644
--- a/common/src/main/templates/io/netty/util/collection/KObjectMap.template
+++ b/common/src/main/templates/io/netty/util/collection/KObjectMap.template
@@ -14,21 +14,21 @@
*/
package io.netty.util.collection;
-import java.util.Collection;
+import java.util.Map;
/**
* Interface for a primitive map that uses {@code @k@}s as keys.
*
* @param <V> the value type stored in the map.
*/
-public interface @K@ObjectMap<V> {
+public interface @K@ObjectMap<V> extends Map<@O@, V> {
/**
- * An Entry in the map.
+ * A primitive entry in the map, provided by the iterator from {@link #entries()}
*
* @param <V> the value type stored in the map.
*/
- interface Entry<V> {
+ interface PrimitiveEntry<V> {
/**
* Gets the key for this entry.
*/
@@ -62,11 +62,6 @@ public interface @K@ObjectMap<V> {
*/
V put(@k@ key, V value);
- /**
- * Puts all of the entries from the given map into this map.
- */
- void putAll(@K@ObjectMap<V> sourceMap);
-
/**
* Removes the entry with the specified key.
*
@@ -76,48 +71,14 @@ public interface @K@ObjectMap<V> {
V remove(@k@ key);
/**
- * Returns the number of entries contained in this map.
- */
- int size();
-
- /**
- * Indicates whether or not this map is empty (i.e {@link #size()} == {@code 0]).
- */
+ * Gets an iterable to traverse over the primitive entries contained in this map. As an optimization,
+ * the {@link PrimitiveEntry}s returned by the {@link Iterator} may change as the {@link Iterator}
+ * progresses. The caller should not rely on {@link PrimitiveEntry} key/value stability.
*/
- boolean isEmpty();
-
- /**
- * Clears all entries from this map.
- */
- void clear();
+ Iterable<PrimitiveEntry<V>> entries();
/**
* Indicates whether or not this map contains a value for the specified key.
*/
boolean containsKey(@k@ key);
-
- /**
- * Indicates whether or not the map contains the specified value.
- */
- boolean containsValue(V value);
-
- /**
- * Gets an iterable collection of the entries contained in this map.
- */
- Iterable<Entry<V>> entries();
-
- /**
- * Gets the keys contained in this map.
- */
- @k@[] keys();
-
- /**
- * Gets the values contained in this map.
- */
- V[] values(Class<V> clazz);
-
- /**
- * Gets the values contatins in this map as a {@link Collection}.
- */
- Collection<V> values();
}
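The new `entries()` javadoc warns that `PrimitiveEntry` instances may change as the iterator advances — which is exactly what the generated `PrimitiveIterator` does: it hands back one mutable flyweight to avoid a per-element allocation, so callers that keep an entry must copy the key/value out first. A self-contained sketch of that pattern (simplified to `int` keys and `String` values; this is illustrative, not Netty's generated code):

```java
import java.util.Iterator;
import java.util.NoSuchElementException;

public class FlyweightEntriesSketch implements Iterable<FlyweightEntriesSketch.Entry> {
    public interface Entry { int key(); String value(); }

    private final int[] keys;
    private final String[] values;

    public FlyweightEntriesSketch(int[] keys, String[] values) {
        this.keys = keys;
        this.values = values;
    }

    @Override
    public Iterator<Entry> iterator() {
        return new Iterator<Entry>() {
            private int index = -1;
            // One reusable entry instead of an allocation per element.
            private final Entry entry = new Entry() {
                @Override public int key() { return keys[index]; }
                @Override public String value() { return values[index]; }
            };

            @Override public boolean hasNext() { return index + 1 < keys.length; }

            @Override public Entry next() {
                if (!hasNext()) { throw new NoSuchElementException(); }
                index++;
                return entry; // same instance every time — copy values you keep
            }

            @Override public void remove() { throw new UnsupportedOperationException(); }
        };
    }

    public static void main(String[] args) {
        FlyweightEntriesSketch map =
                new FlyweightEntriesSketch(new int[]{1, 2}, new String[]{"a", "b"});
        Entry kept = null;
        for (Entry e : map) {
            kept = e; // every iteration hands back the same instance
        }
        System.out.println(kept.key()); // 2 — the flyweight points at the last element
    }
}
```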
diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollEventLoop.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollEventLoop.java
index e01709e545c..0c0fa0f3e0c 100644
--- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollEventLoop.java
+++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollEventLoop.java
@@ -28,6 +28,7 @@
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collection;
+import java.util.Map;
import java.util.Queue;
import java.util.concurrent.Executor;
import java.util.concurrent.atomic.AtomicIntegerFieldUpdater;
@@ -291,8 +292,8 @@ private void closeAll() {
}
Collection<AbstractEpollChannel> array = new ArrayList<AbstractEpollChannel>(channels.size());
- for (IntObjectMap.Entry<AbstractEpollChannel> entry: channels.entries()) {
- array.add(entry.value());
+ for (AbstractEpollChannel channel: channels.values()) {
+ array.add(channel);
}
for (AbstractEpollChannel ch: array) {
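Even after switching from `entries()` to `values()`, `closeAll()` still snapshots the channels into a list before closing them, because closing a channel removes it from the map, and mutating a map while iterating its live `values()` view throws `ConcurrentModificationException`. A sketch of the idiom with a stand-in interface (hypothetical names; the copy constructor used here is equivalent to the add loop in the hunk above):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SnapshotSketch {
    interface Closeable { void close(); }

    static void closeAll(Map<Integer, Closeable> channels) {
        // Snapshot first, then close: close() may remove entries from the
        // map without invalidating this iteration.
        List<Closeable> snapshot = new ArrayList<>(channels.values());
        for (Closeable c : snapshot) {
            c.close();
        }
    }

    public static void main(String[] args) {
        final Map<Integer, Closeable> channels = new HashMap<>();
        for (int i = 0; i < 3; i++) {
            final int id = i;
            channels.put(id, () -> channels.remove(id)); // closing unregisters itself
        }
        closeAll(channels);
        System.out.println(channels.isEmpty()); // true
    }
}
```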
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowControllerTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowControllerTest.java
index 327c7c820c2..963c43f9e8f 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowControllerTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowControllerTest.java
@@ -46,6 +46,7 @@
import io.netty.util.concurrent.EventExecutor;
import java.util.Arrays;
+import java.util.Collections;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
@@ -896,10 +897,10 @@ public void subTreeBytesShouldBeCorrect() throws Http2Exception {
// Send a bunch of data on each stream.
final IntObjectMap<Integer> streamSizes = new IntObjectHashMap<Integer>(4);
- streamSizes.put(STREAM_A, 400);
- streamSizes.put(STREAM_B, 500);
- streamSizes.put(STREAM_C, 600);
- streamSizes.put(STREAM_D, 700);
+ streamSizes.put(STREAM_A, (Integer) 400);
+ streamSizes.put(STREAM_B, (Integer) 500);
+ streamSizes.put(STREAM_C, (Integer) 600);
+ streamSizes.put(STREAM_D, (Integer) 700);
FakeFlowControlled dataA = new FakeFlowControlled(streamSizes.get(STREAM_A));
FakeFlowControlled dataB = new FakeFlowControlled(streamSizes.get(STREAM_B));
@@ -922,11 +923,11 @@ public void subTreeBytesShouldBeCorrect() throws Http2Exception {
streamableBytesForTree(stream0));
assertEquals(calculateStreamSizeSum(streamSizes, Arrays.asList(STREAM_A, STREAM_C, STREAM_D)),
streamableBytesForTree(streamA));
- assertEquals(calculateStreamSizeSum(streamSizes, Arrays.asList(STREAM_B)),
+ assertEquals(calculateStreamSizeSum(streamSizes, Collections.singletonList(STREAM_B)),
streamableBytesForTree(streamB));
- assertEquals(calculateStreamSizeSum(streamSizes, Arrays.asList(STREAM_C)),
+ assertEquals(calculateStreamSizeSum(streamSizes, Collections.singletonList(STREAM_C)),
streamableBytesForTree(streamC));
- assertEquals(calculateStreamSizeSum(streamSizes, Arrays.asList(STREAM_D)),
+ assertEquals(calculateStreamSizeSum(streamSizes, Collections.singletonList(STREAM_D)),
streamableBytesForTree(streamD));
}
@@ -966,10 +967,10 @@ public void subTreeBytesShouldBeCorrectWithRestructure() throws Http2Exception {
// Send a bunch of data on each stream.
final IntObjectMap<Integer> streamSizes = new IntObjectHashMap<Integer>(4);
- streamSizes.put(STREAM_A, 400);
- streamSizes.put(STREAM_B, 500);
- streamSizes.put(STREAM_C, 600);
- streamSizes.put(STREAM_D, 700);
+ streamSizes.put(STREAM_A, (Integer) 400);
+ streamSizes.put(STREAM_B, (Integer) 500);
+ streamSizes.put(STREAM_C, (Integer) 600);
+ streamSizes.put(STREAM_D, (Integer) 700);
FakeFlowControlled dataA = new FakeFlowControlled(streamSizes.get(STREAM_A));
FakeFlowControlled dataB = new FakeFlowControlled(streamSizes.get(STREAM_B));
@@ -996,9 +997,9 @@ public void subTreeBytesShouldBeCorrectWithRestructure() throws Http2Exception {
streamableBytesForTree(streamA));
assertEquals(calculateStreamSizeSum(streamSizes, Arrays.asList(STREAM_B, STREAM_C, STREAM_D)),
streamableBytesForTree(streamB));
- assertEquals(calculateStreamSizeSum(streamSizes, Arrays.asList(STREAM_C)),
+ assertEquals(calculateStreamSizeSum(streamSizes, Collections.singletonList(STREAM_C)),
streamableBytesForTree(streamC));
- assertEquals(calculateStreamSizeSum(streamSizes, Arrays.asList(STREAM_D)),
+ assertEquals(calculateStreamSizeSum(streamSizes, Collections.singletonList(STREAM_D)),
streamableBytesForTree(streamD));
}
@@ -1041,11 +1042,11 @@ public void subTreeBytesShouldBeCorrectWithAddition() throws Http2Exception {
// Send a bunch of data on each stream.
final IntObjectMap<Integer> streamSizes = new IntObjectHashMap<Integer>(4);
- streamSizes.put(STREAM_A, 400);
- streamSizes.put(STREAM_B, 500);
- streamSizes.put(STREAM_C, 600);
- streamSizes.put(STREAM_D, 700);
- streamSizes.put(STREAM_E, 900);
+ streamSizes.put(STREAM_A, (Integer) 400);
+ streamSizes.put(STREAM_B, (Integer) 500);
+ streamSizes.put(STREAM_C, (Integer) 600);
+ streamSizes.put(STREAM_D, (Integer) 700);
+ streamSizes.put(STREAM_E, (Integer) 900);
FakeFlowControlled dataA = new FakeFlowControlled(streamSizes.get(STREAM_A));
FakeFlowControlled dataB = new FakeFlowControlled(streamSizes.get(STREAM_B));
@@ -1072,11 +1073,11 @@ public void subTreeBytesShouldBeCorrectWithAddition() throws Http2Exception {
assertEquals(calculateStreamSizeSum(streamSizes,
Arrays.asList(STREAM_A, STREAM_E, STREAM_C, STREAM_D)),
streamableBytesForTree(streamA));
- assertEquals(calculateStreamSizeSum(streamSizes, Arrays.asList(STREAM_B)),
+ assertEquals(calculateStreamSizeSum(streamSizes, Collections.singletonList(STREAM_B)),
streamableBytesForTree(streamB));
- assertEquals(calculateStreamSizeSum(streamSizes, Arrays.asList(STREAM_C)),
+ assertEquals(calculateStreamSizeSum(streamSizes, Collections.singletonList(STREAM_C)),
streamableBytesForTree(streamC));
- assertEquals(calculateStreamSizeSum(streamSizes, Arrays.asList(STREAM_D)),
+ assertEquals(calculateStreamSizeSum(streamSizes, Collections.singletonList(STREAM_D)),
streamableBytesForTree(streamD));
assertEquals(calculateStreamSizeSum(streamSizes, Arrays.asList(STREAM_E, STREAM_C, STREAM_D)),
streamableBytesForTree(streamE));
@@ -1106,10 +1107,10 @@ public void subTreeBytesShouldBeCorrectWithInternalStreamClose() throws Http2Exc
// Send a bunch of data on each stream.
final IntObjectMap<Integer> streamSizes = new IntObjectHashMap<Integer>(4);
- streamSizes.put(STREAM_A, 400);
- streamSizes.put(STREAM_B, 500);
- streamSizes.put(STREAM_C, 600);
- streamSizes.put(STREAM_D, 700);
+ streamSizes.put(STREAM_A, (Integer) 400);
+ streamSizes.put(STREAM_B, (Integer) 500);
+ streamSizes.put(STREAM_C, (Integer) 600);
+ streamSizes.put(STREAM_D, (Integer) 700);
FakeFlowControlled dataA = new FakeFlowControlled(streamSizes.get(STREAM_A));
FakeFlowControlled dataB = new FakeFlowControlled(streamSizes.get(STREAM_B));
@@ -1133,11 +1134,11 @@ public void subTreeBytesShouldBeCorrectWithInternalStreamClose() throws Http2Exc
streamableBytesForTree(stream0));
assertEquals(calculateStreamSizeSum(streamSizes, Arrays.asList(STREAM_C, STREAM_D)),
streamableBytesForTree(streamA));
- assertEquals(calculateStreamSizeSum(streamSizes, Arrays.asList(STREAM_B)),
+ assertEquals(calculateStreamSizeSum(streamSizes, Collections.singletonList(STREAM_B)),
streamableBytesForTree(streamB));
- assertEquals(calculateStreamSizeSum(streamSizes, Arrays.asList(STREAM_C)),
+ assertEquals(calculateStreamSizeSum(streamSizes, Collections.singletonList(STREAM_C)),
streamableBytesForTree(streamC));
- assertEquals(calculateStreamSizeSum(streamSizes, Arrays.asList(STREAM_D)),
+ assertEquals(calculateStreamSizeSum(streamSizes, Collections.singletonList(STREAM_D)),
streamableBytesForTree(streamD));
}
@@ -1174,10 +1175,10 @@ public void subTreeBytesShouldBeCorrectWithLeafStreamClose() throws Http2Excepti
// Send a bunch of data on each stream.
final IntObjectMap<Integer> streamSizes = new IntObjectHashMap<Integer>(4);
- streamSizes.put(STREAM_A, 400);
- streamSizes.put(STREAM_B, 500);
- streamSizes.put(STREAM_C, 600);
- streamSizes.put(STREAM_D, 700);
+ streamSizes.put(STREAM_A, (Integer) 400);
+ streamSizes.put(STREAM_B, (Integer) 500);
+ streamSizes.put(STREAM_C, (Integer) 600);
+ streamSizes.put(STREAM_D, (Integer) 700);
FakeFlowControlled dataA = new FakeFlowControlled(streamSizes.get(STREAM_A));
FakeFlowControlled dataB = new FakeFlowControlled(streamSizes.get(STREAM_B));
@@ -1201,10 +1202,10 @@ public void subTreeBytesShouldBeCorrectWithLeafStreamClose() throws Http2Excepti
streamableBytesForTree(stream0));
assertEquals(calculateStreamSizeSum(streamSizes, Arrays.asList(STREAM_A, STREAM_D)),
streamableBytesForTree(streamA));
- assertEquals(calculateStreamSizeSum(streamSizes, Arrays.asList(STREAM_B)),
+ assertEquals(calculateStreamSizeSum(streamSizes, Collections.singletonList(STREAM_B)),
streamableBytesForTree(streamB));
assertEquals(0, streamableBytesForTree(streamC));
- assertEquals(calculateStreamSizeSum(streamSizes, Arrays.asList(STREAM_D)),
+ assertEquals(calculateStreamSizeSum(streamSizes, Collections.singletonList(STREAM_D)),
streamableBytesForTree(streamD));
}
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2SettingsTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2SettingsTest.java
index aa9f1aba9b4..cb33ca136e0 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2SettingsTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2SettingsTest.java
@@ -65,14 +65,14 @@ public void standardSettingsShouldBeSet() {
@Test
public void nonStandardSettingsShouldBeSet() {
char key = 0;
- settings.put(key, 123L);
+ settings.put(key, (Long) 123L);
assertEquals(123L, (long) settings.get(key));
}
@Test
public void settingsShouldSupportUnsignedShort() {
char key = (char) (Short.MAX_VALUE + 1);
- settings.put(key, 123L);
+ settings.put(key, (Long) 123L);
assertEquals(123L, (long) settings.get(key));
}
diff --git a/common/src/test/templates/io.netty.util.collection/KObjectHashMapTest.template b/common/src/test/templates/io.netty.util.collection/KObjectHashMapTest.template
index 5969e0a1b71..ed87c489f50 100644
--- a/common/src/test/templates/io.netty.util.collection/KObjectHashMapTest.template
+++ b/common/src/test/templates/io.netty.util.collection/KObjectHashMapTest.template
@@ -18,9 +18,10 @@ import org.junit.Before;
import org.junit.Test;
import java.util.Arrays;
-import java.util.Collection;
import java.util.HashMap;
import java.util.HashSet;
+import java.util.Iterator;
+import java.util.Map;
import java.util.Random;
import java.util.Set;
@@ -87,6 +88,17 @@ public class @K@ObjectHashMapTest {
assertEquals(v, map.get(key));
}
+ @Test
+ public void putNewMappingShouldSucceed_mapApi() {
+ Value v = new Value("v");
+ @O@ key = (@O@)(@k@) 1;
+ assertNull(map.put(key, v));
+ assertEquals(1, map.size());
+ assertTrue(map.containsKey(key));
+ assertTrue(map.containsValue(v));
+ assertEquals(v, map.get(key));
+ }
+
@Test
public void putShouldReplaceValue() {
Value v1 = new Value("v1");
@@ -103,6 +115,22 @@ public class @K@ObjectHashMapTest {
assertEquals(v2, map.get(key));
}
+ @Test
+ public void putShouldReplaceValue_mapApi() {
+ Value v1 = new Value("v1");
+ @O@ key = (@O@)(@k@) 1;
+ assertNull(map.put(key, v1));
+
+ // Replace the value.
+ Value v2 = new Value("v2");
+ assertSame(v1, map.put(key, v2));
+
+ assertEquals(1, map.size());
+ assertTrue(map.containsKey(key));
+ assertTrue(map.containsValue(v2));
+ assertEquals(v2, map.get(key));
+ }
+
@Test
public void putShouldGrowMap() {
for (@k@ key = 0; key < (@k@) 255; ++key) {
@@ -115,6 +143,19 @@ public class @K@ObjectHashMapTest {
}
}
+ @Test
+ public void putShouldGrowMap_mapApi() {
+ for (@k@ key = 0; key < (@k@) 255; ++key) {
+ @O@ okey = (@O@) key;
+ Value v = new Value(@O@.toString(key));
+ assertNull(map.put(okey, v));
+ assertEquals(key + 1, map.size());
+ assertTrue(map.containsKey(okey));
+ assertTrue(map.containsValue(v));
+ assertEquals(v, map.get(okey));
+ }
+ }
+
@Test
public void negativeKeyShouldSucceed() {
Value v = new Value("v");
@@ -123,12 +164,26 @@ public class @K@ObjectHashMapTest {
assertEquals(v, map.get((@k@) -3));
}
+ @Test
+ public void negativeKeyShouldSucceed_mapApi() {
+ Value v = new Value("v");
+ map.put((@O@)(@k@) -3, v);
+ assertEquals(1, map.size());
+ assertEquals(v, map.get((@O@)(@k@) -3));
+ }
+
@Test
public void removeMissingValueShouldReturnNull() {
assertNull(map.remove((@k@) 1));
assertEquals(0, map.size());
}
+ @Test
+ public void removeMissingValueShouldReturnNull_mapApi() {
+ assertNull(map.remove((@O@)(@k@) 1));
+ assertEquals(0, map.size());
+ }
+
@Test
public void removeShouldReturnPreviousValue() {
Value v = new Value("v");
@@ -137,6 +192,14 @@ public class @K@ObjectHashMapTest {
assertSame(v, map.remove(key));
}
+ @Test
+ public void removeShouldReturnPreviousValue_mapApi() {
+ Value v = new Value("v");
+ @O@ key = (@O@)(@k@) 1;
+ map.put(key, v);
+ assertSame(v, map.remove(key));
+ }
+
/**
* This test is a bit internal-centric. We're just forcing a rehash to occur based on no longer
* having any FREE slots available. We do this by adding and then removing several keys up to
@@ -160,24 +223,84 @@ public class @K@ObjectHashMapTest {
assertSame(v, map.get(key));
}
+ @Test
+ public void noFreeSlotsShouldRehash_mapApi() {
+ for (@k@ i = 0; i < 10; ++i) {
+ map.put(i, new Value(@O@.toString(i)));
+ // Now mark it as REMOVED so that size won't cause the rehash.
+ map.remove((@O@) i);
+ assertEquals(0, map.size());
+ }
+
+ // Now add an entry to force the rehash since no FREE slots are available in the map.
+ Value v = new Value("v");
+ @O@ key = (@O@)(@k@) 1;
+ map.put(key, v);
+ assertEquals(1, map.size());
+ assertSame(v, map.get(key));
+ }
+
@Test
public void putAllShouldSucceed() {
+ @K@ObjectHashMap<Value> other = new @K@ObjectHashMap<Value>();
+
@k@ k1 = 1;
@k@ k2 = 2;
@k@ k3 = 3;
Value v1 = new Value("v1");
Value v2 = new Value("v2");
Value v3 = new Value("v3");
- map.put(k1, v1);
- map.put(k2, v2);
- map.put(k3, v3);
+ other.put(k1, v1);
+ other.put(k2, v2);
+ other.put(k3, v3);
+
+ map.putAll(other);
+ assertEquals(3, map.size());
+ assertSame(v1, map.get(k1));
+ assertSame(v2, map.get(k2));
+ assertSame(v3, map.get(k3));
+ }
- @K@ObjectHashMap<Value> map2 = new @K@ObjectHashMap<Value>();
- map2.putAll(map);
- assertEquals(3, map2.size());
- assertSame(v1, map2.get(k1));
- assertSame(v2, map2.get(k2));
- assertSame(v3, map2.get(k3));
+ @Test
+ public void putAllShouldSucceed_mapApi() {
+ @K@ObjectHashMap<Value> other = new @K@ObjectHashMap<Value>();
+
+ @O@ k1 = (@O@)(@k@) 1;
+ @O@ k2 = (@O@)(@k@) 2;
+ @O@ k3 = (@O@)(@k@) 3;
+ Value v1 = new Value("v1");
+ Value v2 = new Value("v2");
+ Value v3 = new Value("v3");
+ other.put(k1, v1);
+ other.put(k2, v2);
+ other.put(k3, v3);
+
+ map.putAll(other);
+ assertEquals(3, map.size());
+ assertSame(v1, map.get(k1));
+ assertSame(v2, map.get(k2));
+ assertSame(v3, map.get(k3));
+ }
+
+ @Test
+ public void putAllWithJavaMapShouldSucceed_mapApi() {
+ Map<@O@, Value> other = new HashMap<@O@, Value>();
+
+ @O@ k1 = (@O@)(@k@) 1;
+ @O@ k2 = (@O@)(@k@) 2;
+ @O@ k3 = (@O@)(@k@) 3;
+ Value v1 = new Value("v1");
+ Value v2 = new Value("v2");
+ Value v3 = new Value("v3");
+ other.put(k1, v1);
+ other.put(k2, v2);
+ other.put(k3, v3);
+
+ map.putAll(other);
+ assertEquals(3, map.size());
+ assertSame(v1, map.get(k1));
+ assertSame(v2, map.get(k2));
+ assertSame(v3, map.get(k3));
}
@Test
@@ -201,6 +324,14 @@ public class @K@ObjectHashMapTest {
assertTrue(map.containsValue(null));
}
+ @Test
+ public void containsValueShouldFindNull_mapApi() {
+ map.put((@O@)(@k@) 1, new Value("v1"));
+ map.put((@O@)(@k@) 2, null);
+ map.put((@O@)(@k@) 3, new Value("v2"));
+ assertTrue(map.containsValue(null));
+ }
+
@Test
public void containsValueShouldFindInstance() {
Value v = new Value("v1");
@@ -210,6 +341,15 @@ public class @K@ObjectHashMapTest {
assertTrue(map.containsValue(v));
}
+ @Test
+ public void containsValueShouldFindInstance_mapApi() {
+ Value v = new Value("v1");
+ map.put((@O@)(@k@) 1, new Value("v2"));
+ map.put((@O@)(@k@) 2, new Value("v3"));
+ map.put((@O@)(@k@) 3, v);
+ assertTrue(map.containsValue(v));
+ }
+
@Test
public void containsValueShouldFindEquivalentValue() {
map.put((@k@) 1, new Value("v1"));
@@ -218,6 +358,14 @@ public class @K@ObjectHashMapTest {
assertTrue(map.containsValue(new Value("v2")));
}
+ @Test
+ public void containsValueShouldFindEquivalentValue_mapApi() {
+ map.put((@O@)(@k@) 1, new Value("v1"));
+ map.put((@O@)(@k@) 2, new Value("v2"));
+ map.put((@O@)(@k@) 3, new Value("v3"));
+ assertTrue(map.containsValue(new Value("v2")));
+ }
+
@Test
public void containsValueNotFindMissingValue() {
map.put((@k@) 1, new Value("v1"));
@@ -226,6 +374,14 @@ public class @K@ObjectHashMapTest {
assertFalse(map.containsValue(new Value("v4")));
}
+ @Test
+ public void containsValueNotFindMissingValue_mapApi() {
+ map.put((@O@)(@k@) 1, new Value("v1"));
+ map.put((@O@)(@k@) 2, new Value("v2"));
+ map.put((@O@)(@k@) 3, new Value("v3"));
+ assertFalse(map.containsValue(new Value("v4")));
+ }
+
@Test
public void iteratorShouldTraverseEntries() {
@k@ k1 = 1;
@@ -241,8 +397,8 @@ public class @K@ObjectHashMapTest {
map.remove(k4);
Set<@O@> found = new HashSet<@O@>();
- for (@K@ObjectMap.Entry<Value> entry : map.entries()) {
- assertTrue(found.add(entry.key()));
+ for (@K@ObjectMap.Entry<@O@, Value> entry : map.entrySet()) {
+ assertTrue(found.add(entry.getKey()));
}
assertEquals(3, found.size());
assertTrue(found.contains(k1));
@@ -264,8 +420,8 @@ public class @K@ObjectHashMapTest {
map.put(k4, new Value("v4"));
map.remove(k4);
- @k@[] keys = map.keys();
- assertEquals(3, keys.length);
+ Set<@O@> keys = map.keySet();
+ assertEquals(3, keys.size());
Set<@O@> expected = new HashSet<@O@>();
expected.add(k1);
@@ -298,25 +454,11 @@ public class @K@ObjectHashMapTest {
// Ensure values() return all values.
Set<Value> expected = new HashSet<Value>();
- Set<Value> actual = new HashSet<Value>();
-
expected.add(v1);
expected.add(v2);
expected.add(v3);
- Value[] valueArray = map.values(Value.class);
- assertEquals(3, valueArray.length);
- for (Value value : valueArray) {
- assertTrue(actual.add(value));
- }
- assertEquals(expected, actual);
- actual.clear();
-
- Collection<Value> valueCollection = map.values();
- assertEquals(3, valueCollection.size());
- for (Value value : valueCollection) {
- assertTrue(actual.add(value));
- }
+ Set<Value> actual = new HashSet<Value>(map.values());
assertEquals(expected, actual);
}
@@ -332,6 +474,18 @@ public class @K@ObjectHashMapTest {
}
}
+ @Test
+ public void mapShouldSupportHashingConflicts_mapApi() {
+ for (int mod = 0; mod < 10; ++mod) {
+ for (int sz = 1; sz <= 101; sz += 2) {
+ @K@ObjectHashMap<String> map = new @K@ObjectHashMap<String>(sz);
+ for (int i = 0; i < 100; ++i) {
+ map.put((@O@)(@k@)(i * mod), "");
+ }
+ }
+ }
+ }
+
@Test
public void hashcodeEqualsTest() {
@K@ObjectHashMap<@O@> map1 = new @K@ObjectHashMap<@O@>();
@@ -345,23 +499,27 @@ public class @K@ObjectHashMapTest {
assertEquals(map1.hashCode(), map2.hashCode());
assertEquals(map1, map2);
// Remove one "middle" element, maps should now be non-equals.
- @k@[] keys = map1.keys();
- map2.remove(keys[50]);
+ Set<@O@> keys = map1.keySet();
+ @O@ removed = null;
+ Iterator<@O@> iter = keys.iterator();
+ for (int ix = 0; iter.hasNext() && ix < 50; ++ix) {
+ removed = iter.next();
+ }
+ map2.remove(removed);
assertFalse(map1.equals(map2));
// Put it back; will likely be in a different position, but maps will be equal again.
- map2.put(keys[50], @O@.valueOf(map1.keys()[50]));
+ map2.put(removed, removed);
assertEquals(map1, map2);
assertEquals(map1.hashCode(), map2.hashCode());
// Make map2 have one extra element, will be non-equal.
- map2.put((@k@) 100, (@k@) 100);
+ map2.put((@k@) 100, (@O@)(@k@) 100);
assertFalse(map1.equals(map2));
// Rebuild map2 with elements in a different order, again the maps should be equal.
// (These tests with same elements in different order also show that the hashCode
// function does not depend on the internal ordering of entries.)
map2.clear();
- Arrays.sort(keys);
- for (@k@ key : keys) {
- map2.put(key, @O@.valueOf(key));
+ for (@O@ key : map1.keySet()) {
+ map2.put(key, key);
}
assertEquals(map1.hashCode(), map2.hashCode());
assertEquals(map1, map2);
@@ -424,14 +582,14 @@ public class @K@ObjectHashMapTest {
assertEquals(goodMap.size(), map.size());
@O@[] goodKeys = goodMap.keySet().toArray(new @O@[goodMap.size()]);
Arrays.sort(goodKeys);
- @k@ [] keys = map.keys();
+ @O@[] keys = map.keySet().toArray(new @O@[map.size()]);
Arrays.sort(keys);
for (int i = 0; i < goodKeys.length; ++i) {
- assertEquals((@k@) goodKeys[i], keys[i]);
+ assertEquals(goodKeys[i], keys[i]);
}
// Finally drain the map.
- for (@k@ key : map.keys()) {
+ for (@k@ key : keys) {
assertEquals(goodMap.remove(key), map.remove(key));
}
assertTrue(map.isEmpty());
| train | train | 2015-07-21T20:17:04 | 2015-07-10T18:20:05Z | buchgr | val |
netty/netty/3967_3991 | netty/netty | netty/netty/3967 | netty/netty/3991 | [
"timestamp(timedelta=17.0, similarity=0.9038943285140661)"
] | 88a2c6ef49b98e11ea8e3afafc3a448b99e3b845 | ddfb91a870c22fa724e1ef8d075ea443335c2af3 | [
"That's strange... let me check\n",
"@mingyu89 thanks for reporting.. I think I know what the problem is and I'm working on a fix.\n",
"@normanmaurer Thank you!\n"
] | [] | 2015-07-15T22:54:08Z | [
"defect"
] | NullPointerException in PendingWriteQueue | Netty Version: 4.0.27.final
Hi. When there are network outages, we sometimes see a handler's `exceptionCaught` called on the server side with a NullPointerException from `PendingWriteQueue`.
@trustin Is it a race condition between adding SslHandler and closing channel?
Thanks.
```
java.lang.NullPointerException
at io.netty.channel.PendingWriteQueue.recycle(PendingWriteQueue.java:248)
at io.netty.channel.PendingWriteQueue.removeAndWrite(PendingWriteQueue.java:196)
at io.netty.channel.PendingWriteQueue.removeAndWriteAll(PendingWriteQueue.java:150)
at io.netty.handler.ssl.SslHandler.flush(SslHandler.java:455)
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:735)
at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:716)
at io.netty.channel.ChannelOutboundHandlerAdapter.flush(ChannelOutboundHandlerAdapter.java:115)
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:735)
at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:716)
at io.netty.channel.ChannelDuplexHandler.flush(ChannelDuplexHandler.java:117)
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:735)
at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:716)
at io.netty.channel.ChannelOutboundHandlerAdapter.flush(ChannelOutboundHandlerAdapter.java:115)
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:735)
at io.netty.channel.AbstractChannelHandlerContext.access$1500(AbstractChannelHandlerContext.java:32)
at io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:1033)
at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:965)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)
```
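The failure mode in the stack trace can be modeled outside Netty with a minimal, hypothetical stand-in (the class and method names below are illustrative, not Netty's actual API): when the channel is already closed, the outbound buffer reference is `null`, so the queue must guard it before adjusting pending byte counts — the same null checks the merged patch adds to `add(...)` and `recycle(...)`.

```java
// Hypothetical stand-in for the race in this issue (not Netty's real classes):
// once the channel is closed, its outbound buffer is null, and every
// increment/decrement in the queue must tolerate that.
class GuardedPendingWriteQueue {
    static final class OutboundBuffer {
        long pendingBytes;
        void increment(long n) { pendingBytes += n; }
        void decrement(long n) { pendingBytes -= n; }
    }

    private final OutboundBuffer buffer; // null if the channel closed before construction

    GuardedPendingWriteQueue(OutboundBuffer buffer) {
        this.buffer = buffer;
    }

    void add(long size) {
        // guard mirrors the fix: skip accounting when the buffer is gone
        if (buffer != null) {
            buffer.increment(size);
        }
    }

    void recycle(long size) {
        if (buffer != null) {
            buffer.decrement(size);
        }
    }
}
```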
| [
"transport/src/main/java/io/netty/channel/PendingWriteQueue.java"
] | [
"transport/src/main/java/io/netty/channel/PendingWriteQueue.java"
] | [
"transport/src/test/java/io/netty/channel/PendingWriteQueueTest.java"
] | diff --git a/transport/src/main/java/io/netty/channel/PendingWriteQueue.java b/transport/src/main/java/io/netty/channel/PendingWriteQueue.java
index b99bef796ed..b0bb0b0f202 100644
--- a/transport/src/main/java/io/netty/channel/PendingWriteQueue.java
+++ b/transport/src/main/java/io/netty/channel/PendingWriteQueue.java
@@ -87,7 +87,12 @@ public void add(Object msg, ChannelPromise promise) {
tail = write;
}
size ++;
- buffer.incrementPendingOutboundBytes(write.size);
+ // We need to guard against null as channel.unsafe().outboundBuffer() may returned null
+ // if the channel was already closed when constructing the PendingWriteQueue.
+ // See https://github.com/netty/netty/issues/3967
+ if (buffer != null) {
+ buffer.incrementPendingOutboundBytes(write.size);
+ }
}
/**
@@ -245,7 +250,12 @@ private void recycle(PendingWrite write, boolean update) {
}
write.recycle();
- buffer.decrementPendingOutboundBytes(writeSize);
+ // We need to guard against null as channel.unsafe().outboundBuffer() may returned null
+ // if the channel was already closed when constructing the PendingWriteQueue.
+ // See https://github.com/netty/netty/issues/3967
+ if (buffer != null) {
+ buffer.decrementPendingOutboundBytes(writeSize);
+ }
}
private static void safeFail(ChannelPromise promise, Throwable cause) {
| diff --git a/transport/src/test/java/io/netty/channel/PendingWriteQueueTest.java b/transport/src/test/java/io/netty/channel/PendingWriteQueueTest.java
index db2d1e8d05f..a28fbba4d62 100644
--- a/transport/src/test/java/io/netty/channel/PendingWriteQueueTest.java
+++ b/transport/src/test/java/io/netty/channel/PendingWriteQueueTest.java
@@ -244,6 +244,21 @@ public void operationComplete(ChannelFuture future) throws Exception {
assertNull(channel.readInbound());
}
+ // See https://github.com/netty/netty/issues/3967
+ @Test
+ public void testCloseChannelOnCreation() {
+ EmbeddedChannel channel = new EmbeddedChannel(new ChannelInboundHandlerAdapter());
+ channel.close().syncUninterruptibly();
+
+ final PendingWriteQueue queue = new PendingWriteQueue(channel.pipeline().firstContext());
+
+ IllegalStateException ex = new IllegalStateException();
+ ChannelPromise promise = channel.newPromise();
+ queue.add(1L, promise);
+ queue.removeAndFailAll(ex);
+ assertSame(ex, promise.cause());
+ }
+
private static class TestHandler extends ChannelDuplexHandler {
protected PendingWriteQueue queue;
private int expectedSize;
| train | train | 2015-07-12T20:20:28 | 2015-07-09T20:10:41Z | mingyu89 | val |
netty/netty/3997_4003 | netty/netty | netty/netty/3997 | netty/netty/4003 | [
"timestamp(timedelta=32.0, similarity=0.8461993405517902)"
] | 36c80cd81819436cf7b40ae305f7db837aca30d7 | 955fb5ab3f44abe48889b37a196cf7b45b2d4af8 | [
"@justinjhendrick - Thanks for pointing this out. Also feel free to submit a PR and get credit for fixing :)\n",
"@Scottmitch, I would but my company wants to review any open source before we contribute. So, something small like this wouldn't be worth the time necessary for a review.\n",
"No worries I will fix... | [] | 2015-07-19T07:26:38Z | [
"documentation"
] | Documentation: wrong class mentioned in setter | {@link ByteBufAllocator} -> {@link MessageSizeEstimator} on
https://github.com/netty/netty/blob/4.0/transport/src/main/java/io/netty/channel/ChannelConfig.java#L248
this affects 4.0, 4.1, and master branches
| [
"transport/src/main/java/io/netty/channel/ChannelConfig.java"
] | [
"transport/src/main/java/io/netty/channel/ChannelConfig.java"
] | [] | diff --git a/transport/src/main/java/io/netty/channel/ChannelConfig.java b/transport/src/main/java/io/netty/channel/ChannelConfig.java
index b7ec03c45e1..7871285cd0c 100644
--- a/transport/src/main/java/io/netty/channel/ChannelConfig.java
+++ b/transport/src/main/java/io/netty/channel/ChannelConfig.java
@@ -245,7 +245,7 @@ public interface ChannelConfig {
MessageSizeEstimator getMessageSizeEstimator();
/**
- * Set the {@link ByteBufAllocator} which is used for the channel
+ * Set the {@link MessageSizeEstimator} which is used for the channel
* to detect the size of a message.
*/
ChannelConfig setMessageSizeEstimator(MessageSizeEstimator estimator);
| null | val | train | 2015-07-19T16:42:37 | 2015-07-16T23:37:01Z | justinjhendrick | val |
netty/netty/4001_4011 | netty/netty | netty/netty/4001 | netty/netty/4011 | [
"timestamp(timedelta=64.0, similarity=0.8452350038604813)"
] | 9e8b1ea5879c867e0203522cb72bd2bd6c9bfc8f | 04f16eb9e896bd684ad197560c117afa8e7f71cd | [
"@nmittler - FYI :)\n\nhttps://github.com/netty/netty/pull/3984#discussion_r34911766\n"
] | [] | 2015-07-21T15:02:35Z | [] | PrimitiveCollections should apply for all "scripted" types | Currently the PrimitiveCollections class has an implementation of `EmptyMap` and `UnmodifiableMap` only for the `int` variant of the primitive map. These objects should be templatized like the other primitive map types.
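For context, the `@K@`/`@k@`/`@O@` placeholders used throughout the `.template` sources are expanded once per primitive type at build time; judging from the names generated in this PR (e.g. `KCollections.template` producing `IntCollections`), the `int` expansion presumably maps as follows — this mapping is inferred from the diff, not quoted from the build script:

```text
@K@ -> Int      e.g. @K@Collections        -> IntCollections
@k@ -> int      e.g. Object get(@k@ key)   -> Object get(int key)
@O@ -> Integer  e.g. Set<@O@> keySet()     -> Set<Integer> keySet()
```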
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameWriter.java",
"common/src/main/java/io/netty/util/collection/package-info.java",
"common/src/main/java/io/netty/util/collection/PrimitiveCollections.ja... | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameWriter.java",
"common/src/main/templates/io/netty/util/collection/KCollections.template"
] | [] | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
index c437282f62e..51b38c8b7eb 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
@@ -34,9 +34,9 @@
import static io.netty.util.internal.ObjectUtil.checkNotNull;
import io.netty.buffer.ByteBuf;
import io.netty.handler.codec.http2.Http2Stream.State;
+import io.netty.util.collection.IntCollections;
import io.netty.util.collection.IntObjectHashMap;
import io.netty.util.collection.IntObjectMap;
-import io.netty.util.collection.PrimitiveCollections;
import io.netty.util.internal.EmptyArrays;
import io.netty.util.internal.PlatformDependent;
import io.netty.util.internal.SystemPropertyUtil;
@@ -292,7 +292,7 @@ private class DefaultStream implements Http2Stream {
private State state;
private short weight = DEFAULT_PRIORITY_WEIGHT;
private DefaultStream parent;
- private IntObjectMap<DefaultStream> children = PrimitiveCollections.emptyIntObjectMap();
+ private IntObjectMap<DefaultStream> children = IntCollections.emptyMap();
private int totalChildWeights;
private int prioritizableForTree = 1;
private boolean resetSent;
@@ -539,7 +539,7 @@ private boolean isPrioritizable() {
}
private void initChildrenIfEmpty() {
- if (children == PrimitiveCollections.<DefaultStream>emptyIntObjectMap()) {
+ if (children == IntCollections.<DefaultStream>emptyMap()) {
initChildren();
}
}
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameWriter.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameWriter.java
index 20fc26baa29..628703b571f 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameWriter.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameWriter.java
@@ -59,7 +59,6 @@
import io.netty.channel.ChannelPromise;
import io.netty.handler.codec.http2.Http2CodecUtil.SimpleChannelPromiseAggregator;
import io.netty.handler.codec.http2.Http2FrameWriter.Configuration;
-import io.netty.util.collection.CharObjectMap;
/**
* A {@link Http2FrameWriter} that supports all frame types defined by the HTTP/2 specification.
@@ -213,7 +212,7 @@ public ChannelFuture writeSettings(ChannelHandlerContext ctx, Http2Settings sett
int payloadLength = SETTING_ENTRY_LENGTH * settings.size();
ByteBuf buf = ctx.alloc().buffer(FRAME_HEADER_LENGTH + settings.size() * SETTING_ENTRY_LENGTH);
writeFrameHeaderInternal(buf, payloadLength, SETTINGS, new Http2Flags(), 0);
- for (CharObjectMap.PrimitiveEntry<Long> entry : settings.entries()) {
+ for (Http2Settings.PrimitiveEntry<Long> entry : settings.entries()) {
writeUnsignedShort(entry.key(), buf);
writeUnsignedInt(entry.value(), buf);
}
diff --git a/common/src/main/java/io/netty/util/collection/package-info.java b/common/src/main/java/io/netty/util/collection/package-info.java
deleted file mode 100644
index b5f6029edb4..00000000000
--- a/common/src/main/java/io/netty/util/collection/package-info.java
+++ /dev/null
@@ -1,20 +0,0 @@
-/*
- * Copyright 2014 The Netty Project
- *
- * The Netty Project licenses this file to you under the Apache License,
- * version 2.0 (the "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at:
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
- * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
- * License for the specific language governing permissions and limitations
- * under the License.
- */
-
-/**
- * Utility classes for commonly used collections.
- */
-package io.netty.util.collection;
diff --git a/common/src/main/java/io/netty/util/collection/PrimitiveCollections.java b/common/src/main/templates/io/netty/util/collection/KCollections.template
similarity index 78%
rename from common/src/main/java/io/netty/util/collection/PrimitiveCollections.java
rename to common/src/main/templates/io/netty/util/collection/KCollections.template
index 4cf1f53623b..a5a5534e412 100644
--- a/common/src/main/java/io/netty/util/collection/PrimitiveCollections.java
+++ b/common/src/main/templates/io/netty/util/collection/KCollections.template
@@ -22,47 +22,46 @@
import java.util.Set;
/**
- * Utility methods for primitive collections.
+ * Utilities for @k@-based primitive collections.
*/
-public final class PrimitiveCollections {
+public final class @K@Collections {
- private static final IntObjectMap<Object> EMPTY_INT_OBJECT_MAP = new EmptyIntObjectMap();
+ private static final @K@ObjectMap<Object> EMPTY_MAP = new EmptyMap();
- private PrimitiveCollections() {
+ private @K@Collections() {
}
/**
- * Returns an unmodifiable empty {@link IntObjectMap}.
+ * Returns an unmodifiable empty {@link @K@ObjectMap}.
*/
@SuppressWarnings("unchecked")
- public static <V> IntObjectMap<V> emptyIntObjectMap() {
- return (IntObjectMap<V>) EMPTY_INT_OBJECT_MAP;
+ public static <V> @K@ObjectMap<V> emptyMap() {
+ return (@K@ObjectMap<V>) EMPTY_MAP;
}
/**
* Creates an unmodifiable wrapper around the given map.
*/
- public static <V> IntObjectMap<V> unmodifiableIntObjectMap(final IntObjectMap<V> map) {
- return new UnmodifiableIntObjectMap<V>(map);
+ public static <V> @K@ObjectMap<V> unmodifiableMap(final @K@ObjectMap<V> map) {
+ return new UnmodifiableMap<V>(map);
}
/**
* An empty map. All operations that attempt to modify the map are unsupported.
*/
- private static final class EmptyIntObjectMap implements IntObjectMap<Object> {
-
+ private static final class EmptyMap implements @K@ObjectMap<Object> {
@Override
- public Object get(int key) {
+ public Object get(@k@ key) {
return null;
}
@Override
- public Object put(int key, Object value) {
+ public Object put(@k@ key, Object value) {
throw new UnsupportedOperationException("put");
}
@Override
- public Object remove(int key) {
+ public Object remove(@k@ key) {
return null;
}
@@ -87,12 +86,12 @@ public void clear() {
}
@Override
- public Set<Integer> keySet() {
+ public Set<@O@> keySet() {
return Collections.emptySet();
}
@Override
- public boolean containsKey(int key) {
+ public boolean containsKey(@k@ key) {
return false;
}
@@ -112,7 +111,7 @@ public Object get(Object key) {
}
@Override
- public Object put(Integer key, Object value) {
+ public Object put(@O@ key, Object value) {
throw new UnsupportedOperationException();
}
@@ -122,7 +121,7 @@ public Object remove(Object key) {
}
@Override
- public void putAll(Map<? extends Integer, ?> m) {
+ public void putAll(Map<? extends @O@, ?> m) {
throw new UnsupportedOperationException();
}
@@ -132,39 +131,39 @@ public Collection<Object> values() {
}
@Override
- public Set<Entry<Integer, Object>> entrySet() {
+ public Set<Entry<@O@, Object>> entrySet() {
return Collections.emptySet();
}
}
/**
- * An unmodifiable wrapper around a {@link IntObjectMap}.
+ * An unmodifiable wrapper around a {@link @K@ObjectMap}.
*
* @param <V> the value type stored in the map.
*/
- private static final class UnmodifiableIntObjectMap<V> implements IntObjectMap<V> {
- private final IntObjectMap<V> map;
- private Set<Integer> keySet;
- private Set<Entry<Integer, V>> entrySet;
+ private static final class UnmodifiableMap<V> implements @K@ObjectMap<V> {
+ private final @K@ObjectMap<V> map;
+ private Set<@O@> keySet;
+ private Set<Entry<@O@, V>> entrySet;
private Collection<V> values;
private Iterable<PrimitiveEntry<V>> entries;
- UnmodifiableIntObjectMap(IntObjectMap<V> map) {
+ UnmodifiableMap(@K@ObjectMap<V> map) {
this.map = map;
}
@Override
- public V get(int key) {
+ public V get(@k@ key) {
return map.get(key);
}
@Override
- public V put(int key, V value) {
+ public V put(@k@ key, V value) {
throw new UnsupportedOperationException("put");
}
@Override
- public V remove(int key) {
+ public V remove(@k@ key) {
throw new UnsupportedOperationException("remove");
}
@@ -184,7 +183,7 @@ public void clear() {
}
@Override
- public boolean containsKey(int key) {
+ public boolean containsKey(@k@ key) {
return map.containsKey(key);
}
@@ -204,7 +203,7 @@ public V get(Object key) {
}
@Override
- public V put(Integer key, V value) {
+ public V put(@O@ key, V value) {
throw new UnsupportedOperationException("put");
}
@@ -214,7 +213,7 @@ public V remove(Object key) {
}
@Override
- public void putAll(Map<? extends Integer, ? extends V> m) {
+ public void putAll(Map<? extends @O@, ? extends V> m) {
throw new UnsupportedOperationException("putAll");
}
@@ -233,7 +232,7 @@ public Iterator<PrimitiveEntry<V>> iterator() {
}
@Override
- public Set<Integer> keySet() {
+ public Set<@O@> keySet() {
if (keySet == null) {
keySet = Collections.unmodifiableSet(map.keySet());
}
@@ -241,7 +240,7 @@ public Set<Integer> keySet() {
}
@Override
- public Set<Entry<Integer, V>> entrySet() {
+ public Set<Entry<@O@, V>> entrySet() {
if (entrySet == null) {
entrySet = Collections.unmodifiableSet(map.entrySet());
}
@@ -289,14 +288,14 @@ public void remove() {
* Unmodifiable wrapper for an entry.
*/
private class EntryImpl implements PrimitiveEntry<V> {
- final PrimitiveEntry<V> entry;
+ private final PrimitiveEntry<V> entry;
EntryImpl(PrimitiveEntry<V> entry) {
this.entry = entry;
}
@Override
- public int key() {
+ public @k@ key() {
return entry.key();
}
| null | val | train | 2015-07-23T00:52:01 | 2015-07-17T19:59:26Z | Scottmitch | val |
netty/netty/3988_4016 | netty/netty | netty/netty/3988 | netty/netty/4016 | [
"timestamp(timedelta=4790.0, similarity=0.8761306193745646)"
] | 5c7022d49449389f22d3387d6323b55870e28512 | 4a7c46488365c45066d5085e2b1c5ca14d535b7f | [
"@fratboy - Thanks for reporting!\n\n@normanmaurer - Assigned to you for now.\n",
"@fratboy thanks... fix in the works.\n",
"@normanmaurer My pleasure.\n",
"@fratboy could you verify https://github.com/netty/netty/pull/4016 ?\n",
"@normanmaurer It's exactly same with my patch in my project. It works perfect... | [] | 2015-07-21T16:48:02Z | [
"defect"
] | FixedChannelPool does not count acquired channels precisely | Netty version: 4.0.10.Beta5
Context:
When a queued `AcquireTask` fails, the `decrementAndRunTaskQueue()` method can throw an assertion error.
Steps to reproduce:
1. Call `FixedChannelPool.acquire` enough times to make some `AcquireTask`s be queued.
2. Close the server channel abruptly to make `AcquireTask` fail.
3. AssertionError can be produced.
``` java
assert acquiredChannelCount >= 0;
```
Opinion:
- Set a flag `acquired` in `AcquireListener` and set it true when it is dequed from `pendingAcquireQueue`.
- When acquire future is failed, if `acquired` flag is true call `decrementAndRunTaskQueue()` else call `runTaskQueue()`
``` java
private class AcquireListener implements FutureListener<Channel> {
private final Promise<Channel> originalPromise;
protected boolean acquired;
AcquireListener(Promise<Channel> originalPromise) {
this.originalPromise = originalPromise;
acquired = true;
}
@Override
public void operationComplete(Future<Channel> future) throws Exception {
assert executor.inEventLoop();
if (future.isSuccess()) {
originalPromise.setSuccess(future.getNow());
} else {
// Something went wrong try to run pending acquire tasks.
if(acquired) {
decrementAndRunTaskQueue();
} else {
runTaskQueue();
}
originalPromise.setFailure(future.cause());
}
}
}
private final class AcquireTask extends AcquireListener {
final Promise<Channel> promise;
final long expireNanoTime = System.nanoTime() + acquireTimeoutNanos;
ScheduledFuture<?> timeoutFuture;
public AcquireTask(Promise<Channel> promise) {
super(promise);
// We need to create a new promise as we need to ensure the AcquireListener runs in the correct
// EventLoop.
this.promise = executor.<Channel>newPromise().addListener(this);
acquired = false;
}
}
private void runTaskQueue() {
while (acquiredChannelCount < maxConnections) {
AcquireTask task = pendingAcquireQueue.poll();
if (task == null) {
break;
}
// Cancel the timeout if one was scheduled
ScheduledFuture<?> timeoutFuture = task.timeoutFuture;
if (timeoutFuture != null) {
timeoutFuture.cancel(false);
}
task.acquired = true;
--pendingAcquireCount;
++acquiredChannelCount;
super.acquire(task.promise);
}
// We should never have a negative value.
assert pendingAcquireCount >= 0;
assert acquiredChannelCount >= 0;
}
```
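The flag-based bookkeeping proposed above can be modeled in isolation. The following is a minimal standalone sketch (hypothetical class and method names, not Netty code) showing why the failure path must only decrement the counter when the task was actually counted:

```java
// Hypothetical stand-alone model of the per-task "acquired" flag;
// not Netty code, just the counter bookkeeping in isolation.
public class PoolCounterModel {
    int acquiredChannelCount;

    // Called when a task is dequeued from pendingAcquireQueue and counted.
    int onDequeued() {
        return ++acquiredChannelCount;
    }

    // Called when an acquire future fails. Only tasks that were counted
    // may decrement; otherwise the count can go negative and trip the assert.
    int onFailure(boolean acquired) {
        if (acquired) {
            --acquiredChannelCount;
        }
        return acquiredChannelCount;
    }

    public static void main(String[] args) {
        PoolCounterModel m = new PoolCounterModel();
        System.out.println(m.onFailure(false)); // 0: queued task failed, no decrement
        m.onDequeued();
        System.out.println(m.onFailure(true));  // 0: counted task failed, decremented
    }
}
```

Without the flag, the first failure above would drive the counter to -1, which is exactly the `assert acquiredChannelCount >= 0` failure described.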
| [
"transport/src/main/java/io/netty/channel/pool/FixedChannelPool.java"
] | [
"transport/src/main/java/io/netty/channel/pool/FixedChannelPool.java"
] | [] | diff --git a/transport/src/main/java/io/netty/channel/pool/FixedChannelPool.java b/transport/src/main/java/io/netty/channel/pool/FixedChannelPool.java
index 5c08ac6260a..f23c5404eb0 100644
--- a/transport/src/main/java/io/netty/channel/pool/FixedChannelPool.java
+++ b/transport/src/main/java/io/netty/channel/pool/FixedChannelPool.java
@@ -271,6 +271,8 @@ private void runTaskQueue() {
timeoutFuture.cancel(false);
}
+ task.acquired();
+
--pendingAcquireCount;
++acquiredChannelCount;
@@ -289,7 +291,7 @@ private final class AcquireTask extends AcquireListener {
ScheduledFuture<?> timeoutFuture;
public AcquireTask(Promise<Channel> promise) {
- super(promise);
+ super(promise, false);
// We need to create a new promise as we need to ensure the AcquireListener runs in the correct
// EventLoop.
this.promise = executor.<Channel>newPromise().addListener(this);
@@ -322,9 +324,15 @@ public final void run() {
private class AcquireListener implements FutureListener<Channel> {
private final Promise<Channel> originalPromise;
+ protected boolean acquired;
AcquireListener(Promise<Channel> originalPromise) {
+ this(originalPromise, true);
+ }
+
+ protected AcquireListener(Promise<Channel> originalPromise, boolean acquired) {
this.originalPromise = originalPromise;
+ this.acquired = acquired;
}
@Override
@@ -334,11 +342,19 @@ public void operationComplete(Future<Channel> future) throws Exception {
if (future.isSuccess()) {
originalPromise.setSuccess(future.getNow());
} else {
- // Something went wrong try to run pending acquire tasks.
- decrementAndRunTaskQueue();
+ if (acquired) {
+ decrementAndRunTaskQueue();
+ } else {
+ runTaskQueue();
+ }
+
originalPromise.setFailure(future.cause());
}
}
+
+ public void acquired() {
+ acquired = true;
+ }
}
@Override
| null | train | train | 2015-07-21T18:37:55 | 2015-07-15T18:34:54Z | alexpark7712 | val |
netty/netty/4023_4029 | netty/netty | netty/netty/4023 | netty/netty/4029 | [
"timestamp(timedelta=1035.0, similarity=0.889983997942325)"
] | 348082c433e57d8faf53c8320e2c5fb01c53d69a | bcc6a40414b35703ba9da2a9b2226afee0bfb8ee | [
"Please review pull request: https://github.com/netty/netty/pull/4027\n",
"Will do tomorrow!\n\n> Am 26.07.2015 um 20:30 schrieb ioanbsu notifications@github.com:\n> \n> Please review pull request: #4027\n> \n> —\n> Reply to this email directly or view it on GitHub.\n",
"Applied all the changes to satisfy #4027... | [
"channelWasNotAcuired -> channelWasNotAcquired (typo)\n",
"final\n",
"Shouldn't we call also \"channelClosePromise.setFailure(...)\" if not success full ?\n",
"You can use FutureListener\n",
"You can use FutureListener\n",
"Don't we have to execute this with `FixedChannelPool.executor` ?\n",
"Hm. should... | 2015-07-27T18:26:30Z | [
"defect"
] | FixedChannelPool does not properly count the acquiredChannelCount | Steps to reproduce:
1) create FixedChannelPool with maxConnections=1
``` java
FixedChannelPool(bootstrap.remoteAddress(key.host(), key.port()),
new AbstractChannelPoolHandler() {... },1);
```
2) Request channel from FixedChannelPool
3) Make request through channel and once done release it
4) Kill the connection while channel is in pool.
5) Request channel from the pool. Since connection is not healthy anymore it will close the channel and right after that will try to acquire new channel. But acquiredChannelCount is never decremented.
So at the point when FixedChannelPool will try to acquire connection it will think that acquiredChannelCount==1 and it won't force new connection creation, instead it will put task to the pendingAcquireQueue, see lines 195 to 226 in FixedChannelPool.
Expected result:
When connection is died(becomes unhealthy) while being in pool it should recognize that in FixedChannlePool and decrement acquiredChannelCount before asking for new channel creation.
Actual result:
acquiredChannelCount stays unchanged so new channel never gets created/acquired.
Suggested solution:
in SimpleChannelPool add code:
``` java
private void onChannelUnhealthy(Promise<Void> channelClosePromise) {
channelClosePromise.addListener(new GenericFutureListener<Future<? super Void>>() {
@Override
public void operationComplete(Future<? super Void> future) throws Exception {
channelClosedCauseUnhealthy();
}
});
}
protected void channelClosedCauseUnhealthy() {
}
```
and modify notifyHealthCheck(...) to be as following:
``` java
private void notifyHealthCheck(Future<Boolean> future, Channel ch, Promise<Channel> promise) {
assert ch.eventLoop().inEventLoop();
if (future.isSuccess()) {
if (future.getNow() == Boolean.TRUE) {
try {
ch.attr(POOL_KEY).set(this);
handler.channelAcquired(ch);
promise.setSuccess(ch);
} catch (Throwable cause) {
Promise<Void> channelClosePromise = ch.eventLoop().<Void>newPromise();
onUnhealthyChannelClosed(channelClosePromise);
closeAndFail(ch, cause, promise,channelClosePromise);
}
} else {
Promise<Void> channelClosePromise = ch.eventLoop().<Void>newPromise();
onUnhealthyChannelClosed(channelClosePromise);
closeChannel(ch, channelClosePromise);
acquire(promise);
}
} else {
Promise<Void> channelClosePromise = ch.eventLoop().<Void>newPromise();
onUnhealthyChannelClosed(channelClosePromise);
closeChannel(ch, channelClosePromise);
acquire(promise);
}
}
```
closeChannel function needs to be modified as well:
``` java
private static void closeChannel(final Channel channel, final Promise channelClosePromise) {
channel.attr(POOL_KEY).getAndSet(null);
ChannelFuture future = channel.close();
if (channelClosePromise != null) {
future.addListener(new GenericFutureListener<Future<? super Void>>() {
@Override
public void operationComplete(Future<? super Void> future) throws Exception {
channelClosePromise.setSuccess(future.isSuccess());
}
});
}
}
private static void closeAndFail(Channel channel, Throwable cause, Promise<?> promise,
Promise<Void> channelClosePromise) {
closeChannel(channel, channelClosePromise);
promise.setFailure(cause);
}
```
in FixedChannelPool add method:
``` java
protected void channelClosedCauseUnhealthy() {
decrementAndRunTaskQueue();
}
```
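The close-promise plumbing suggested above can be sketched stand-alone with JDK futures (hypothetical names, using `CompletableFuture` in place of Netty's `Promise`): the acquired count is released only once the close of the unhealthy channel actually completes:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical model of "decrement only after the unhealthy channel closed";
// not Netty code, just the listener wiring in isolation.
public class UnhealthyCloseModel {
    final AtomicInteger acquiredChannelCount = new AtomicInteger(1);

    CompletableFuture<Void> closeUnhealthyChannel() {
        CompletableFuture<Void> closePromise = new CompletableFuture<>();
        // Equivalent of onChannelUnhealthy(...): react once the close finishes.
        closePromise.whenComplete((v, err) -> acquiredChannelCount.decrementAndGet());
        return closePromise;
    }

    public static void main(String[] args) {
        UnhealthyCloseModel m = new UnhealthyCloseModel();
        CompletableFuture<Void> p = m.closeUnhealthyChannel();
        System.out.println(m.acquiredChannelCount.get()); // 1: close not finished yet
        p.complete(null);                                 // channel.close() completed
        System.out.println(m.acquiredChannelCount.get()); // 0: counter released
    }
}
```

Attaching the listener to the close future, rather than decrementing eagerly, keeps the counter consistent even when the close is asynchronous.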
| [
"transport/src/main/java/io/netty/channel/pool/FixedChannelPool.java",
"transport/src/main/java/io/netty/channel/pool/SimpleChannelPool.java"
] | [
"transport/src/main/java/io/netty/channel/pool/FixedChannelPool.java",
"transport/src/main/java/io/netty/channel/pool/SimpleChannelPool.java"
] | [] | diff --git a/transport/src/main/java/io/netty/channel/pool/FixedChannelPool.java b/transport/src/main/java/io/netty/channel/pool/FixedChannelPool.java
index f23c5404eb0..b4f7f17037b 100644
--- a/transport/src/main/java/io/netty/channel/pool/FixedChannelPool.java
+++ b/transport/src/main/java/io/netty/channel/pool/FixedChannelPool.java
@@ -245,6 +245,24 @@ public void operationComplete(Future<Void> future) throws Exception {
return p;
}
+ /**
+ * Called after channel failed to be acquired so we know to decrease the {@link #acquiredChannelCount} and pull the
+ * next pending task from {@link #pendingAcquireQueue}.
+ */
+ @Override
+ protected void channelClosedCauseUnhealthy() {
+ if (executor.inEventLoop()) {
+ decrementAndRunTaskQueue();
+ } else {
+ executor.execute(new Runnable() {
+ @Override
+ public void run() {
+ decrementAndRunTaskQueue();
+ }
+ });
+ }
+ }
+
private void decrementAndRunTaskQueue() {
--acquiredChannelCount;
@@ -342,9 +360,7 @@ public void operationComplete(Future<Channel> future) throws Exception {
if (future.isSuccess()) {
originalPromise.setSuccess(future.getNow());
} else {
- if (acquired) {
- decrementAndRunTaskQueue();
- } else {
+ if (!acquired) {
runTaskQueue();
}
diff --git a/transport/src/main/java/io/netty/channel/pool/SimpleChannelPool.java b/transport/src/main/java/io/netty/channel/pool/SimpleChannelPool.java
index 8526bbc5ac5..402e6335441 100644
--- a/transport/src/main/java/io/netty/channel/pool/SimpleChannelPool.java
+++ b/transport/src/main/java/io/netty/channel/pool/SimpleChannelPool.java
@@ -24,6 +24,7 @@
import io.netty.util.AttributeKey;
import io.netty.util.concurrent.Future;
import io.netty.util.concurrent.FutureListener;
+import io.netty.util.concurrent.GenericFutureListener;
import io.netty.util.concurrent.Promise;
import io.netty.util.internal.EmptyArrays;
import io.netty.util.internal.OneTimeTask;
@@ -50,6 +51,13 @@ public class SimpleChannelPool implements ChannelPool {
private final ChannelPoolHandler handler;
private final ChannelHealthChecker healthCheck;
private final Bootstrap bootstrap;
+ private final FutureListener<Void> channelWasNotAcquired = new FutureListener<Void>() {
+ @Override
+ public void operationComplete(Future<Void> future) throws Exception {
+ channelClosedCauseUnhealthy();
+ }
+ };
+
/**
* Creates a new instance using the {@link ChannelHealthChecker#ACTIVE}.
@@ -161,18 +169,31 @@ private void notifyHealthCheck(Future<Boolean> future, Channel ch, Promise<Chann
handler.channelAcquired(ch);
promise.setSuccess(ch);
} catch (Throwable cause) {
- closeAndFail(ch, cause, promise);
+ Promise<Void> channelClosePromise = ch.eventLoop().newPromise();
+ channelClosePromise.addListener(channelWasNotAcquired);
+ closeAndFail(ch, cause, promise, channelClosePromise);
}
} else {
- closeChannel(ch);
+ Promise<Void> channelClosePromise = ch.eventLoop().newPromise();
+ channelClosePromise.addListener(channelWasNotAcquired);
+ closeChannel(ch, channelClosePromise);
acquire(promise);
}
} else {
- closeChannel(ch);
+ Promise<Void> channelClosePromise = ch.eventLoop().newPromise();
+ channelClosePromise.addListener(channelWasNotAcquired);
+ closeChannel(ch, channelClosePromise);
acquire(promise);
}
}
+ /**
+ * Called once channel failed to be acquired.
+ * This is useful for the cases when we need to perform cleanup operations after channel failed to be acquired.
+ */
+ protected void channelClosedCauseUnhealthy() {
+ }
+
/**
* Bootstrap a new {@link Channel}. The default implementation uses {@link Bootstrap#connect()},
* sub-classes may override this.
@@ -205,7 +226,7 @@ public void run() {
});
}
} catch (Throwable cause) {
- closeAndFail(channel, cause, promise);
+ closeAndFail(channel, cause, promise, null);
}
return promise;
}
@@ -218,28 +239,42 @@ private void doReleaseChannel(Channel channel, Promise<Void> promise) {
// Better include a stracktrace here as this is an user error.
new IllegalArgumentException(
"Channel " + channel + " was not acquired from this ChannelPool"),
- promise);
+ promise, null);
} else {
try {
if (offerChannel(channel)) {
handler.channelReleased(channel);
promise.setSuccess(null);
} else {
- closeAndFail(channel, FULL_EXCEPTION, promise);
+ closeAndFail(channel, FULL_EXCEPTION, promise, null);
}
} catch (Throwable cause) {
- closeAndFail(channel, cause, promise);
+ closeAndFail(channel, cause, promise, null);
}
}
}
- private static void closeChannel(Channel channel) {
+ private static void closeChannel(final Channel channel, final Promise<Void> channelClosePromise) {
channel.attr(POOL_KEY).getAndSet(null);
- channel.close();
+ ChannelFuture future = channel.close();
+ if (channelClosePromise != null) {
+ future.addListener(new FutureListener<Void>(){
+
+ @Override
+ public void operationComplete(Future<Void> future) throws Exception {
+ if (future.isSuccess()) {
+ channelClosePromise.setSuccess(null);
+ } else{
+ channelClosePromise.setFailure(new IllegalStateException("Failed to close unhealthy channel."));
+ }
+ }
+ });
+ }
}
- private static void closeAndFail(Channel channel, Throwable cause, Promise<?> promise) {
- closeChannel(channel);
+ private static void closeAndFail(Channel channel, Throwable cause, Promise<?> promise,
+ Promise<Void> channelClosePromise) {
+ closeChannel(channel, channelClosePromise);
promise.setFailure(cause);
}
| null | train | train | 2015-07-27T15:58:50 | 2015-07-25T19:56:13Z | ioanbsu | val |
netty/netty/4031_4033 | netty/netty | netty/netty/4031 | netty/netty/4033 | [
"timestamp(timedelta=14.0, similarity=0.8902631432145431)"
] | 0f4d6c386efd4dfcae1082d6c095fe46bb9855fe | 5ae3392546b9820ddf5d10408f0004c0dd80bef0 | [] | [
"I think the old name was a better fit.\n",
"I got it, thanks.\n",
"change this to `assert acquiredChannelCount >= 0`\n"
] | 2015-07-28T16:45:06Z | [] | FixedChannelPool doesn't decrease acquiredChannelCount when timeout occurs | Netty version: 4.0.10.Beta5
Context:
When we use `FixedChannelPool` with `AcquireTimeoutAction.NEW`, if timeout occurs, `acquiredChannelCount` increases forever.
Steps to reproduce:
1. Create `FixedChannelPool` with `AcquireTimeoutAction.NEW`
2. Shutdown client network to simulate timeout
3. `acquiredChannelCount` increases forever
Opinion:
- When timeout occurs call `task.acquired()` before delegating to super
``` java
case NEW:
timeoutTask = new TimeoutTask() {
@Override
public void onTimeout(AcquireTask task) {
// Increment the acquire count and delegate to super to actually acquire a Channel which will
// create a new connection.
++acquiredChannelCount;
task.acquired();
FixedChannelPool.super.acquire(task.promise);
}
};
break;
```
- If possible, encapsulate `++acquiredChannelCount` in `AcquireListener.acquired` (I'll send a PR for this, #4033)
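The encapsulation this suggests can be sketched as an idempotent guard (hypothetical class, not Netty code): calling `acquired()` twice — e.g. once on dequeue and again when the timeout fires — must count the channel only once:

```java
// Hypothetical model of an idempotent acquired() guard; not Netty code.
public class IdempotentAcquire {
    int acquiredChannelCount;
    boolean acquired;

    void acquired() {
        if (acquired) {
            return; // already counted (e.g. timeout fired after dequeue)
        }
        acquired = true;
        ++acquiredChannelCount;
    }

    public static void main(String[] args) {
        IdempotentAcquire task = new IdempotentAcquire();
        task.acquired();
        task.acquired(); // second call is a no-op
        System.out.println(task.acquiredChannelCount); // 1
    }
}
```

Centralizing the increment behind the guard removes the scattered `++acquiredChannelCount` call sites that caused the runaway counter.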
| [
"transport/src/main/java/io/netty/channel/pool/FixedChannelPool.java"
] | [
"transport/src/main/java/io/netty/channel/pool/FixedChannelPool.java"
] | [] | diff --git a/transport/src/main/java/io/netty/channel/pool/FixedChannelPool.java b/transport/src/main/java/io/netty/channel/pool/FixedChannelPool.java
index f23c5404eb0..bd89bd64612 100644
--- a/transport/src/main/java/io/netty/channel/pool/FixedChannelPool.java
+++ b/transport/src/main/java/io/netty/channel/pool/FixedChannelPool.java
@@ -117,7 +117,7 @@ public FixedChannelPool(Bootstrap bootstrap,
* be failed.
*/
public FixedChannelPool(Bootstrap bootstrap,
- ChannelPoolHandler handler,
+ ChannelPoolHandler handler,
ChannelHealthChecker healthCheck, AcquireTimeoutAction action,
final long acquireTimeoutMillis,
int maxConnections, int maxPendingAcquires) {
@@ -153,7 +153,7 @@ public void onTimeout(AcquireTask task) {
public void onTimeout(AcquireTask task) {
// Increment the acquire count and delegate to super to actually acquire a Channel which will
// create a new connetion.
- ++acquiredChannelCount;
+ task.acquired();
FixedChannelPool.super.acquire(task.promise);
}
@@ -191,14 +191,14 @@ private void acquire0(final Promise<Channel> promise) {
assert executor.inEventLoop();
if (acquiredChannelCount < maxConnections) {
- ++acquiredChannelCount;
-
- assert acquiredChannelCount > 0;
+ assert acquiredChannelCount >= 0;
// We need to create a new promise as we need to ensure the AcquireListener runs in the correct
// EventLoop
Promise<Channel> p = executor.newPromise();
- p.addListener(new AcquireListener(promise));
+ AcquireListener l = new AcquireListener(promise);
+ l.acquired();
+ p.addListener(l);
super.acquire(p);
} else {
if (pendingAcquireCount >= maxPendingAcquires) {
@@ -271,10 +271,8 @@ private void runTaskQueue() {
timeoutFuture.cancel(false);
}
- task.acquired();
-
--pendingAcquireCount;
- ++acquiredChannelCount;
+ task.acquired();
super.acquire(task.promise);
}
@@ -291,7 +289,7 @@ private final class AcquireTask extends AcquireListener {
ScheduledFuture<?> timeoutFuture;
public AcquireTask(Promise<Channel> promise) {
- super(promise, false);
+ super(promise);
// We need to create a new promise as we need to ensure the AcquireListener runs in the correct
// EventLoop.
this.promise = executor.<Channel>newPromise().addListener(this);
@@ -327,12 +325,7 @@ private class AcquireListener implements FutureListener<Channel> {
protected boolean acquired;
AcquireListener(Promise<Channel> originalPromise) {
- this(originalPromise, true);
- }
-
- protected AcquireListener(Promise<Channel> originalPromise, boolean acquired) {
this.originalPromise = originalPromise;
- this.acquired = acquired;
}
@Override
@@ -353,6 +346,10 @@ public void operationComplete(Future<Channel> future) throws Exception {
}
public void acquired() {
+ if (acquired) {
+ return;
+ }
+ acquiredChannelCount++;
acquired = true;
}
}
| null | test | train | 2015-07-29T18:37:42 | 2015-07-28T16:13:20Z | alexpark7712 | val |
netty/netty/4022_4034 | netty/netty | netty/netty/4022 | netty/netty/4034 | [
"timestamp(timedelta=14.0, similarity=0.8524517148954559)"
] | 148692705cf26a446263569e517566f635d6132b | b7b63391d669a6d41d2e80b8bc3741c5776c6688 | [
"@burtonator - Thanks for reaching out! Sounds like a useful feature. I'll put this on the todo list but if you get around to it before we do please feel free to submit a PR.\n",
"Thanks will check once i have free cycles\n\n> Am 25.07.2015 um 00:20 schrieb Kevin Burton notifications@github.com:\n> \n> It would b... | [] | 2015-07-28T19:39:39Z | [
"feature"
] | Implement IP_FREEBIND and Any-IP in native JNI base epoll | It would be really nice to have netty support IP_FREEBIND and Any-IP.
Essentially it allows you to bind() to any IP address, even if it's not associated with an Interface.
I was thinking this could be really helpful in ipv6 setups whereby you just setup your webserver on a subnet with 16k IPs and then when you want to deploy a new webapp, you just give it one of those IPs.
The problem is that your box wouldn't want to manually add every IP as your ifconfig would then list 16k IPs.
With Any-IP you can add a whole subnet in one command.
Then you can bind sockets to it.
The PROBLEM is I can't find out how to do this with Java. Using netty would allow you to build really awesome apps this way but I don't think you can bind sockets this way in Java.
The C programmers always seem to get the cool toys before we do!
http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=ab79ad14a2d51e95f0ac3cef7cd116a57089ba82
IP_FREEBIND (since Linux 2.4)
If enabled, this boolean option allows binding to an IP
address that is nonlocal or does not (yet) exist. This
permits listening on a socket, without requiring the
underlying network interface or the specified dynamic IP
address to be up at the time that the application is trying to
bind to it. This option is the per-socket equivalent of the
ip_nonlocal_bind /proc interface described below.
... I may be looking at this in the next few weeks/months and if so will just implement it myself.
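The gap can be demonstrated with plain JDK sockets: without IP_FREEBIND the kernel rejects a bind to an address the host does not own, which is exactly what the option would lift. In this sketch, 203.0.113.1 is a TEST-NET address assumed not to be configured locally:

```java
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

// Demonstrates the limitation: plain Java can bind the loopback address,
// but binding a non-local address fails unless the kernel-level
// IP_FREEBIND option is set (which the JDK does not expose).
public class NonLocalBind {
    static boolean canBind(String addr) {
        try (ServerSocket s = new ServerSocket()) {
            s.bind(new InetSocketAddress(InetAddress.getByName(addr), 0));
            return true;
        } catch (Exception e) {
            return false; // typically BindException: Cannot assign requested address
        }
    }

    public static void main(String[] args) {
        System.out.println(canBind("127.0.0.1"));   // true on any normal host
        System.out.println(canBind("203.0.113.1")); // expected false: address not local
    }
}
```

A native transport can set the option directly via setsockopt before bind, which is what the epoll patch below wires through to Java.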
| [
"transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c",
"transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.h",
"transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollChannelOption.java",
"transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollServerSocketChannel... | [
"transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c",
"transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.h",
"transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollChannelOption.java",
"transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollServerSocketChannel... | [] | diff --git a/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c b/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c
index a277d1695a5..480bf50a1c7 100644
--- a/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c
+++ b/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c
@@ -1248,6 +1248,10 @@ JNIEXPORT void Java_io_netty_channel_epoll_Native_setTcpKeepCnt(JNIEnv* env, jcl
setOption(env, fd, IPPROTO_TCP, TCP_KEEPCNT, &optval, sizeof(optval));
}
+JNIEXPORT void JNICALL Java_io_netty_channel_epoll_Native_setIpFreeBind(JNIEnv* env, jclass clazz, jint fd, jint optval) {
+ setOption(env, fd, IPPROTO_IP, IP_FREEBIND, &optval, sizeof(optval));
+}
+
JNIEXPORT jint JNICALL Java_io_netty_channel_epoll_Native_isReuseAddresss(JNIEnv* env, jclass clazz, jint fd) {
int optval;
if (getOption(env, fd, SOL_SOCKET, SO_REUSEADDR, &optval, sizeof(optval)) == -1) {
@@ -1364,6 +1368,14 @@ JNIEXPORT jint JNICALL Java_io_netty_channel_epoll_Native_getTcpKeepCnt(JNIEnv*
return optval;
}
+JNIEXPORT jint JNICALL Java_io_netty_channel_epoll_Native_isIpFreeBind(JNIEnv* env, jclass clazz, jint fd) {
+ int optval;
+ if (getOption(env, fd, IPPROTO_TCP, IP_FREEBIND, &optval, sizeof(optval)) == -1) {
+ return -1;
+ }
+ return optval;
+}
+
JNIEXPORT void JNICALL Java_io_netty_channel_epoll_Native_tcpInfo0(JNIEnv* env, jclass clazz, jint fd, jintArray array) {
struct tcp_info tcp_info;
if (getOption(env, fd, IPPROTO_TCP, TCP_INFO, &tcp_info, sizeof(tcp_info)) == -1) {
diff --git a/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.h b/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.h
index 4708557ab11..380bd3b20f6 100644
--- a/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.h
+++ b/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.h
@@ -87,6 +87,7 @@ void Java_io_netty_channel_epoll_Native_setBroadcast(JNIEnv* env, jclass clazz,
void Java_io_netty_channel_epoll_Native_setTcpKeepIdle(JNIEnv* env, jclass clazz, jint fd, jint optval);
void Java_io_netty_channel_epoll_Native_setTcpKeepIntvl(JNIEnv* env, jclass clazz, jint fd, jint optval);
void Java_io_netty_channel_epoll_Native_setTcpKeepCnt(JNIEnv* env, jclass clazz, jint fd, jint optval);
+void Java_io_netty_channel_epoll_Native_setIpFreeBind(JNIEnv* env, jclass clazz, jint fd, jint optval);
jint Java_io_netty_channel_epoll_Native_isReuseAddresss(JNIEnv* env, jclass clazz, jint fd);
jint Java_io_netty_channel_epoll_Native_isReusePort(JNIEnv* env, jclass clazz, jint fd);
@@ -102,6 +103,7 @@ jint Java_io_netty_channel_epoll_Native_getTcpKeepIdle(JNIEnv* env, jclass clazz
jint Java_io_netty_channel_epoll_Native_getTcpKeepIntvl(JNIEnv* env, jclass clazz, jint fd);
jint Java_io_netty_channel_epoll_Native_getTcpKeepCnt(JNIEnv* env, jclass clazz, jint fd);
jint Java_io_netty_channel_epoll_Native_getSoError(JNIEnv* env, jclass clazz, jint fd);
+jint Java_io_netty_channel_epoll_Native_isIpFreeBind(JNIEnv* env, jclass clazz, jint fd);
jstring Java_io_netty_channel_epoll_Native_kernelVersion(JNIEnv* env, jclass clazz);
jint Java_io_netty_channel_epoll_Native_iovMax(JNIEnv* env, jclass clazz);
diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollChannelOption.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollChannelOption.java
index 268366f4012..4a3c5d19b55 100644
--- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollChannelOption.java
+++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollChannelOption.java
@@ -26,6 +26,7 @@ public final class EpollChannelOption<T> extends ChannelOption<T> {
public static final ChannelOption<Integer> TCP_KEEPIDLE = valueOf("TCP_KEEPIDLE");
public static final ChannelOption<Integer> TCP_KEEPINTVL = valueOf("TCP_KEEPINTVL");
public static final ChannelOption<Integer> TCP_KEEPCNT = valueOf("TCP_KEEPCNT");
+ public static final ChannelOption<Boolean> IP_FREEBIND = valueOf("IP_FREEBIND");
public static final ChannelOption<DomainSocketReadMode> DOMAIN_SOCKET_READ_MODE =
valueOf("DOMAIN_SOCKET_READ_MODE");
diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollServerSocketChannelConfig.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollServerSocketChannelConfig.java
index c1ee1c0d521..e6888eb55e1 100644
--- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollServerSocketChannelConfig.java
+++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollServerSocketChannelConfig.java
@@ -37,7 +37,7 @@ public final class EpollServerSocketChannelConfig extends EpollServerChannelConf
@Override
public Map<ChannelOption<?>, Object> getOptions() {
- return getOptions(super.getOptions(), EpollChannelOption.SO_REUSEPORT);
+ return getOptions(super.getOptions(), EpollChannelOption.SO_REUSEPORT, EpollChannelOption.IP_FREEBIND);
}
@SuppressWarnings("unchecked")
@@ -46,6 +46,9 @@ public <T> T getOption(ChannelOption<T> option) {
if (option == EpollChannelOption.SO_REUSEPORT) {
return (T) Boolean.valueOf(isReusePort());
}
+ if (option == EpollChannelOption.IP_FREEBIND) {
+ return (T) Boolean.valueOf(isFreeBind());
+ }
return super.getOption(option);
}
@@ -55,6 +58,8 @@ public <T> boolean setOption(ChannelOption<T> option, T value) {
if (option == EpollChannelOption.SO_REUSEPORT) {
setReusePort((Boolean) value);
+ } else if (option == EpollChannelOption.IP_FREEBIND) {
+ setFreeBind((Boolean) value);
} else {
return super.setOption(option, value);
}
@@ -157,4 +162,21 @@ public EpollServerSocketChannelConfig setReusePort(boolean reusePort) {
Native.setReusePort(channel.fd().intValue(), reusePort ? 1 : 0);
return this;
}
+
+ /**
+ * Returns {@code true} if <a href="http://man7.org/linux/man-pages/man7/ip.7.html">IP_FREEBIND</a> is enabled,
+ * {@code false} otherwise.
+ */
+ public boolean isFreeBind() {
+ return Native.isIpFreeBind(channel.fd().intValue()) != 0;
+ }
+
+ /**
+ * If {@code true} is used <a href="http://man7.org/linux/man-pages/man7/ip.7.html">IP_FREEBIND</a> is enabled,
+ * {@code false} for disable it. Default is disabled.
+ */
+ public EpollServerSocketChannelConfig setFreeBind(boolean freeBind) {
+ Native.setIpFreeBind(channel.fd().intValue(), freeBind ? 1: 0);
+ return this;
+ }
}
diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/Native.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/Native.java
index ede80dcef6c..492d3f86804 100644
--- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/Native.java
+++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/Native.java
@@ -635,6 +635,7 @@ public static void shutdown(int fd, boolean read, boolean write) throws IOExcept
public static native int getTcpKeepIntvl(int fd);
public static native int getTcpKeepCnt(int fd);
public static native int getSoError(int fd);
+ public static native int isIpFreeBind(int fd);
public static native void setKeepAlive(int fd, int keepAlive);
public static native void setReceiveBufferSize(int fd, int receiveBufferSize);
@@ -650,7 +651,7 @@ public static void shutdown(int fd, boolean read, boolean write) throws IOExcept
public static native void setTcpKeepIdle(int fd, int seconds);
public static native void setTcpKeepIntvl(int fd, int seconds);
public static native void setTcpKeepCnt(int fd, int probes);
-
+ public static native void setIpFreeBind(int fd, int freeBind);
public static void tcpInfo(int fd, EpollTcpInfo info) {
tcpInfo0(fd, info.info);
}
| null | train | train | 2015-07-24T10:11:44 | 2015-07-24T22:20:29Z | burtonator | val |
netty/netty/4041_4042 | netty/netty | netty/netty/4041 | netty/netty/4042 | [
"timestamp(timedelta=78.0, similarity=0.9239012526455588)"
] | 94f65ed7ff214de3fc9680d80a5ec2d2fe2aaf48 | fd6091eda5783f9685bd5a850b349a86beb90576 | [
"We love contributions. Maybe care to submit a PR?\n\n> Am 29.07.2015 um 11:23 schrieb Jorge Castellanos Solaz notifications@github.com:\n> \n> Netty version: 4.1.0.Beta5\n> \n> Context:\n> I'm implementing an MQTT broker based on netty using the classes from the package io.netty.handler.codec.mqtt, in the version ... | [] | 2015-07-29T12:02:29Z | [
"improvement"
] | Add support for "session present" flag in io.netty.handler.codec.mqtt.MqttConnAckVariableHeader | Netty version: 4.1.0.Beta5
Context:
I'm implementing an MQTT broker based on netty using the classes from the package io.netty.handler.codec.mqtt. Version 3.1.1 of the MQTT protocol adds a new flag, called "session present", to the variable header of the CONNACK message: http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.html#_Toc398718035. It would be great if that flag were supported in io.netty.handler.codec.mqtt.MqttConnAckVariableHeader.
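Per the spec section linked above, Session Present is bit 0 of the CONNACK variable header's first byte (the connect-acknowledge-flags byte, whose remaining bits are reserved as 0). A minimal stand-alone sketch of the encode/decode (hypothetical helper class, not the proposed Netty API):

```java
// Minimal model of the MQTT 3.1.1 CONNACK "Session Present" flag:
// bit 0 of the connect-acknowledge-flags byte, bits 1-7 reserved as 0.
public class ConnAckFlags {
    static boolean sessionPresent(int flagsByte) {
        return (flagsByte & 0x01) == 0x01;
    }

    static int encode(boolean sessionPresent) {
        return sessionPresent ? 0x01 : 0x00;
    }

    public static void main(String[] args) {
        System.out.println(sessionPresent(encode(true)));  // true
        System.out.println(sessionPresent(encode(false))); // false
    }
}
```

Masking with 0x01 on decode keeps the reader tolerant of the reserved bits, matching how the decoder change below reads the byte.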
| [
"codec-mqtt/src/main/java/io/netty/handler/codec/mqtt/MqttConnAckVariableHeader.java",
"codec-mqtt/src/main/java/io/netty/handler/codec/mqtt/MqttDecoder.java",
"codec-mqtt/src/main/java/io/netty/handler/codec/mqtt/MqttEncoder.java"
] | [
"codec-mqtt/src/main/java/io/netty/handler/codec/mqtt/MqttConnAckVariableHeader.java",
"codec-mqtt/src/main/java/io/netty/handler/codec/mqtt/MqttDecoder.java",
"codec-mqtt/src/main/java/io/netty/handler/codec/mqtt/MqttEncoder.java"
] | [
"codec-mqtt/src/test/java/io/netty/handler/codec/mqtt/MqttCodecTest.java"
] | diff --git a/codec-mqtt/src/main/java/io/netty/handler/codec/mqtt/MqttConnAckVariableHeader.java b/codec-mqtt/src/main/java/io/netty/handler/codec/mqtt/MqttConnAckVariableHeader.java
index 7dfa3094837..a1b73303d1d 100644
--- a/codec-mqtt/src/main/java/io/netty/handler/codec/mqtt/MqttConnAckVariableHeader.java
+++ b/codec-mqtt/src/main/java/io/netty/handler/codec/mqtt/MqttConnAckVariableHeader.java
@@ -25,19 +25,25 @@ public class MqttConnAckVariableHeader {
private final MqttConnectReturnCode connectReturnCode;
- public MqttConnAckVariableHeader(MqttConnectReturnCode connectReturnCode) {
+ private final boolean sessionPresent;
+
+ public MqttConnAckVariableHeader(MqttConnectReturnCode connectReturnCode, boolean sessionPresent) {
this.connectReturnCode = connectReturnCode;
+ this.sessionPresent = sessionPresent;
}
public MqttConnectReturnCode connectReturnCode() {
return connectReturnCode;
}
+ public boolean sessionPresent() { return sessionPresent; }
+
@Override
public String toString() {
return new StringBuilder(StringUtil.simpleClassName(this))
.append('[')
.append("connectReturnCode=").append(connectReturnCode)
+ .append(", sessionPresent=").append(sessionPresent)
.append(']')
.toString();
}
diff --git a/codec-mqtt/src/main/java/io/netty/handler/codec/mqtt/MqttDecoder.java b/codec-mqtt/src/main/java/io/netty/handler/codec/mqtt/MqttDecoder.java
index bcb770a354a..70f0a41e7ba 100644
--- a/codec-mqtt/src/main/java/io/netty/handler/codec/mqtt/MqttDecoder.java
+++ b/codec-mqtt/src/main/java/io/netty/handler/codec/mqtt/MqttDecoder.java
@@ -236,11 +236,11 @@ private static Result<MqttConnectVariableHeader> decodeConnectionVariableHeader(
}
private static Result<MqttConnAckVariableHeader> decodeConnAckVariableHeader(ByteBuf buffer) {
- buffer.readUnsignedByte(); // reserved byte
+ final boolean sessionPresent = (buffer.readUnsignedByte() & 0x01) == 0x01;
byte returnCode = buffer.readByte();
final int numberOfBytesConsumed = 2;
final MqttConnAckVariableHeader mqttConnAckVariableHeader =
- new MqttConnAckVariableHeader(MqttConnectReturnCode.valueOf(returnCode));
+ new MqttConnAckVariableHeader(MqttConnectReturnCode.valueOf(returnCode), sessionPresent);
return new Result<MqttConnAckVariableHeader>(mqttConnAckVariableHeader, numberOfBytesConsumed);
}
diff --git a/codec-mqtt/src/main/java/io/netty/handler/codec/mqtt/MqttEncoder.java b/codec-mqtt/src/main/java/io/netty/handler/codec/mqtt/MqttEncoder.java
index db2d871f9d9..eda2a8dde2c 100644
--- a/codec-mqtt/src/main/java/io/netty/handler/codec/mqtt/MqttEncoder.java
+++ b/codec-mqtt/src/main/java/io/netty/handler/codec/mqtt/MqttEncoder.java
@@ -192,7 +192,7 @@ private static ByteBuf encodeConnAckMessage(
ByteBuf buf = byteBufAllocator.buffer(4);
buf.writeByte(getFixedHeaderByte1(message.fixedHeader()));
buf.writeByte(2);
- buf.writeByte(0);
+ buf.writeByte(message.variableHeader().sessionPresent() ? 0x01 : 0x00);
buf.writeByte(message.variableHeader().connectReturnCode().byteValue());
return buf;
| diff --git a/codec-mqtt/src/test/java/io/netty/handler/codec/mqtt/MqttCodecTest.java b/codec-mqtt/src/test/java/io/netty/handler/codec/mqtt/MqttCodecTest.java
index f16602a64e6..705d071bfd8 100644
--- a/codec-mqtt/src/test/java/io/netty/handler/codec/mqtt/MqttCodecTest.java
+++ b/codec-mqtt/src/test/java/io/netty/handler/codec/mqtt/MqttCodecTest.java
@@ -289,7 +289,7 @@ private static MqttConnAckMessage createConnAckMessage() {
MqttFixedHeader mqttFixedHeader =
new MqttFixedHeader(MqttMessageType.CONNACK, false, MqttQoS.AT_MOST_ONCE, false, 0);
MqttConnAckVariableHeader mqttConnAckVariableHeader =
- new MqttConnAckVariableHeader(MqttConnectReturnCode.CONNECTION_ACCEPTED);
+ new MqttConnAckVariableHeader(MqttConnectReturnCode.CONNECTION_ACCEPTED, true);
return new MqttConnAckMessage(mqttFixedHeader, mqttConnAckVariableHeader);
}
| train | train | 2015-07-28T18:54:14 | 2015-07-29T09:23:08Z | jorcasso | val |
netty/netty/4030_4043 | netty/netty | netty/netty/4030 | netty/netty/4043 | [
"timestamp(timedelta=1386.0, similarity=0.9335526844307486)"
] | 94f65ed7ff214de3fc9680d80a5ec2d2fe2aaf48 | c9eff2faa93f72ccf4199b05ddee674753ba2c64 | [
"Can you also test with 4.1.0.Beta6-SNAPSHOT just to be sure it is not fixed already ?\n",
"I've just checked newest commit on branch 4.1 (94f65ed7ff214de3fc9680d80a5ec2d2fe2aaf48) and the problem still exists. \n\nThe problem is in line 115, below comment _'found </, decrementing openBracketsCount'_. IMHO additi... | [] | 2015-07-29T12:24:23Z | [
"defect"
] | XmlFrameDecoder is corrupt | Netty Version: 4.1.0.Beta4
Context:
When writing a TCP client that exchanges XML messages with a server, I encountered an error in XmlFrameDecoder. Given an incomplete XML element, the decoder will output invalid XML.
Steps to reproduce:
1. BUG - Decode invalid XML: `'<a><b/></'` - the decoder will put it in the output list and update the readerIndex of the input ByteBuf
2. OK - Decode invalid XML: `'<a><b/>'` - the decoder will neither put it in the output list nor update the readerIndex of the input ByteBuf
3. OK - Decode valid XML: `'<a><b/></a>'` - the decoder will put it in the output list and update the readerIndex of the input ByteBuf
Long story short:
The XML decoder assumes that as soon as it finds `</` it can decrement the open-brackets count. If the closing bracket isn't in the ByteBuf yet, this assumption creates the bug.
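The corrected counting can be sketched over a plain String (a standalone illustration only — the real XmlFrameDecoder works on a ByteBuf and handles CDATA, comments, etc.): `</` should decrement the open-brackets count only once its enclosing `>` is also buffered:

```java
public class XmlBracketCount {
    /**
     * Sketch: returns true only when the buffered input holds a balanced
     * top-level element. "</" decrements the open count only if its
     * enclosing '>' is already present in the buffer.
     */
    public static boolean isComplete(String in) {
        int open = 0;
        boolean sawElement = false;
        for (int i = 0; i < in.length(); i++) {
            char c = in.charAt(i);
            if (c == '<') {
                if (i + 1 < in.length() && in.charAt(i + 1) == '/') {
                    if (in.indexOf('>', i + 2) != -1) {
                        open--; // close tag is fully buffered
                    }
                } else {
                    open++;
                    sawElement = true;
                }
            } else if (c == '/' && i + 1 < in.length() && in.charAt(i + 1) == '>') {
                open--; // self-closing element like <b/>
            }
        }
        return sawElement && open == 0;
    }
}
```

With this rule, `<a><b/></` and `<a></a` stay incomplete instead of being emitted as corrupt frames.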
| [
"codec/src/main/java/io/netty/handler/codec/xml/XmlFrameDecoder.java"
] | [
"codec/src/main/java/io/netty/handler/codec/xml/XmlFrameDecoder.java"
] | [
"codec/src/test/java/io/netty/handler/codec/xml/XmlFrameDecoderTest.java"
] | diff --git a/codec/src/main/java/io/netty/handler/codec/xml/XmlFrameDecoder.java b/codec/src/main/java/io/netty/handler/codec/xml/XmlFrameDecoder.java
index c4f70974350..4a8b262bf8a 100644
--- a/codec/src/main/java/io/netty/handler/codec/xml/XmlFrameDecoder.java
+++ b/codec/src/main/java/io/netty/handler/codec/xml/XmlFrameDecoder.java
@@ -111,8 +111,16 @@ protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) t
if (i < bufferLength - 1) {
final byte peekAheadByte = in.getByte(i + 1);
if (peekAheadByte == '/') {
- // found </, decrementing openBracketsCount
- openBracketsCount--;
+ // found </, we must check if it is enclosed
+ int peekFurtherAheadIndex = i + 2;
+ while (peekFurtherAheadIndex <= bufferLength - 1) {
+ //if we have </ and enclosing > we can decrement openBracketsCount
+ if (in.getByte(peekFurtherAheadIndex) == '>') {
+ openBracketsCount--;
+ break;
+ }
+ peekFurtherAheadIndex++;
+ }
} else if (isValidStartCharForXmlElement(peekAheadByte)) {
atLeastOneXmlElementFound = true;
// char after < is a valid xml element start char,
@@ -166,14 +174,15 @@ protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) t
}
final int readerIndex = in.readerIndex();
+ int xmlElementLength = length - readerIndex;
- if (openBracketsCount == 0 && length > 0) {
- if (length >= bufferLength) {
- length = in.readableBytes();
+ if (openBracketsCount == 0 && xmlElementLength > 0) {
+ if (readerIndex + xmlElementLength >= bufferLength) {
+ xmlElementLength = in.readableBytes();
}
final ByteBuf frame =
- extractFrame(in, readerIndex + leadingWhiteSpaceCount, length - leadingWhiteSpaceCount);
- in.skipBytes(length);
+ extractFrame(in, readerIndex + leadingWhiteSpaceCount, xmlElementLength - leadingWhiteSpaceCount);
+ in.skipBytes(xmlElementLength);
out.add(frame);
}
}
| diff --git a/codec/src/test/java/io/netty/handler/codec/xml/XmlFrameDecoderTest.java b/codec/src/test/java/io/netty/handler/codec/xml/XmlFrameDecoderTest.java
index 62f6719fd7b..db3660accf8 100644
--- a/codec/src/test/java/io/netty/handler/codec/xml/XmlFrameDecoderTest.java
+++ b/codec/src/test/java/io/netty/handler/codec/xml/XmlFrameDecoderTest.java
@@ -101,6 +101,12 @@ public void testDecodeShortValidXmlWithLeadingWhitespace02AndTrailingGarbage() {
testDecodeWithXml(" \n\r \t<xxx/>\ttrash", "<xxx/>", CorruptedFrameException.class);
}
+ @Test
+ public void testDecodeInvalidXml() {
+ testDecodeWithXml("<a></", new Object[0]);
+ testDecodeWithXml("<a></a", new Object[0]);
+ }
+
@Test
public void testDecodeWithCDATABlock() {
final String xml = "<book>" +
@@ -119,18 +125,21 @@ public void testDecodeWithCDATABlockContainingNestedUnbalancedXml() {
}
@Test
- public void testDecodeWithTwoMessages() {
+ public void testDecodeWithMultipleMessages() {
final String input = "<root xmlns=\"http://www.acme.com/acme\" status=\"loginok\" " +
"timestamp=\"1362410583776\"/>\n\n" +
"<root xmlns=\"http://www.acme.com/acme\" status=\"start\" time=\"0\" " +
"timestamp=\"1362410584794\">\n<child active=\"1\" status=\"started\" id=\"935449\" " +
- "msgnr=\"2\"/>\n</root>";
+ "msgnr=\"2\"/>\n</root>" +
+ "<root xmlns=\"http://www.acme.com/acme\" status=\"logout\" timestamp=\"1362410584795\"/>";
final String frame1 = "<root xmlns=\"http://www.acme.com/acme\" status=\"loginok\" " +
"timestamp=\"1362410583776\"/>";
final String frame2 = "<root xmlns=\"http://www.acme.com/acme\" status=\"start\" time=\"0\" " +
"timestamp=\"1362410584794\">\n<child active=\"1\" status=\"started\" id=\"935449\" " +
"msgnr=\"2\"/>\n</root>";
- testDecodeWithXml(input, frame1, frame2);
+ final String frame3 = "<root xmlns=\"http://www.acme.com/acme\" status=\"logout\" " +
+ "timestamp=\"1362410584795\"/>";
+ testDecodeWithXml(input, frame1, frame2, frame3);
}
@Test
| train | train | 2015-07-28T18:54:14 | 2015-07-28T13:03:48Z | tomaszc | val |
netty/netty/3886_4045 | netty/netty | netty/netty/3886 | netty/netty/4045 | [
"timestamp(timedelta=32.0, similarity=0.9286218771515881)"
] | 595fb888398b3565c84c675e21f2aec8b8f0d0f9 | a91266c674029b5b0328dd53d40cead35f966ad6 | [
"Problematic jar example: https://search.maven.org/remotecontent?filepath=io/netty/netty-transport/5.0.0.Alpha2/netty-transport-5.0.0.Alpha2-sources.jar\n",
"@FeiWongReed We love contributions. Maybe submit a PR ?\n",
"Ok, will try.\n",
"It's easy to fix this, but it was not accidentally mistake. Look at #205... | [] | 2015-07-29T14:56:51Z | [
"defect"
] | OSGi manifests in javadocs/sources jars | This problem affected many people more than year ago, but it's still here: https://stackoverflow.com/questions/23149966/classnotfoundexception-for-a-type-that-is-available-to-the-osgi-runtime-io-net
OSGi manifests are being included in the sources/javadoc jars, so an OSGi container treats them as valid dependencies when resolving from an OBR repository, and the runtime then fails with a non-descriptive ClassNotFoundException.
Fix it ASAP please.
| [
"pom.xml"
] | [
"pom.xml"
] | [] | diff --git a/pom.xml b/pom.xml
index f7ed52d1e1e..fd0e0e48ae8 100644
--- a/pom.xml
+++ b/pom.xml
@@ -1148,16 +1148,41 @@
</execution>
</executions>
</plugin>
+
+ <plugin>
+ <groupId>org.codehaus.mojo</groupId>
+ <artifactId>build-helper-maven-plugin</artifactId>
+ <version>1.8</version>
+ <executions>
+ <execution>
+ <id>parse-version</id>
+ <goals>
+ <goal>parse-version</goal>
+ </goals>
+ </execution>
+ </executions>
+ </plugin>
+
<plugin>
<artifactId>maven-source-plugin</artifactId>
<version>2.2.1</version>
+ <!-- Eclipse-related OSGi manifests
+ See https://github.com/netty/netty/issues/3886
+ More information: http://rajakannappan.blogspot.ie/2010/03/automating-eclipse-source-bundle.html
+ -->
<configuration>
- <!--
- ~ Add generated MANIFEST.MF.
- ~ See https://github.com/netty/netty/issues/2058
- -->
- <useDefaultManifestFile>true</useDefaultManifestFile>
+ <archive>
+ <manifestEntries>
+ <Bundle-ManifestVersion>2</Bundle-ManifestVersion>
+ <Bundle-Name>${name}</Bundle-Name>
+ <Bundle-SymbolicName>${groupId}.${artifactId}.source</Bundle-SymbolicName>
+ <Bundle-Vendor>${organization.name}</Bundle-Vendor>
+ <Bundle-Version>${parsedVersion.osgiVersion}</Bundle-Version>
+ <Eclipse-SourceBundle>${groupId}.${artifactId};version="${parsedVersion.osgiVersion}";roots:="."</Eclipse-SourceBundle>
+ </manifestEntries>
+ </archive>
</configuration>
+
<executions>
<!--
~ This workaround prevents Maven from executing the 'generate-sources' phase twice.
| null | test | train | 2015-07-28T18:53:36 | 2015-06-14T11:58:15Z | pshirshov | val |
netty/netty/4071_4072 | netty/netty | netty/netty/4071 | netty/netty/4072 | [
"timestamp(timedelta=54.0, similarity=1.0000000000000002)"
] | fd5db7fa08d6392fdae23056481da39705da30aa | d9c455e1ca557fc41c96331cbd684df8adfe8633 | [] | [] | 2015-08-08T05:57:20Z | [] | MemoryRegionCache$Entry objects are not recycled | Netty Version: 4.0.30.Final
MemoryRegionCache$Entry instances are allocated from a Recycler but not recycled when done using them, leading to a lot of GCed objects.
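The recycle-on-release pattern the fix needs can be sketched with a hypothetical stand-in for the Recycler-backed entry (not Netty's internal types): whenever an entry leaves the queue's ownership — including a failed offer — it should be recycled rather than left for the garbage collector.

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class RecycleOnFailedOffer {
    /** Hypothetical stand-in for the Recycler-backed MemoryRegionCache$Entry. */
    static final class Entry {
        boolean recycled;
        void recycle() { recycled = true; }
    }

    /** If the bounded cache queue rejects the entry, recycle it immediately
     *  instead of letting it become a short-lived garbage object. */
    static boolean add(Queue<Entry> queue, int capacity, Entry entry) {
        boolean queued = queue.size() < capacity && queue.offer(entry);
        if (!queued) {
            entry.recycle();
        }
        return queued;
    }

    public static void main(String[] args) {
        Queue<Entry> queue = new ArrayDeque<Entry>();
        Entry first = new Entry();
        Entry second = new Entry();
        add(queue, 1, first);   // queued, not recycled
        add(queue, 1, second);  // queue full -> recycled right away
        System.out.println(first.recycled + " " + second.recycled); // prints "false true"
    }
}
```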
| [
"buffer/src/main/java/io/netty/buffer/PoolThreadCache.java"
] | [
"buffer/src/main/java/io/netty/buffer/PoolThreadCache.java"
] | [] | diff --git a/buffer/src/main/java/io/netty/buffer/PoolThreadCache.java b/buffer/src/main/java/io/netty/buffer/PoolThreadCache.java
index 3792c3e65ae..425f657ca96 100644
--- a/buffer/src/main/java/io/netty/buffer/PoolThreadCache.java
+++ b/buffer/src/main/java/io/netty/buffer/PoolThreadCache.java
@@ -386,7 +386,14 @@ protected abstract void initBuf(PoolChunk<T> chunk, long handle,
*/
@SuppressWarnings("unchecked")
public final boolean add(PoolChunk<T> chunk, long handle) {
- return queue.offer(newEntry(chunk, handle));
+ Entry<T> entry = newEntry(chunk, handle);
+ boolean queued = queue.offer(entry);
+ if (!queued) {
+ // If it was not possible to cache the chunk, immediately recycle the entry
+ entry.recycle();
+ }
+
+ return queued;
}
/**
@@ -398,6 +405,7 @@ public final boolean allocate(PooledByteBuf<T> buf, int reqCapacity) {
return false;
}
initBuf(entry.chunk, entry.handle, buf, reqCapacity);
+ entry.recycle();
// allocations is not thread-safe which is fine as this is only called from the same thread all time.
++ allocations;
| null | train | train | 2015-08-05T18:14:38 | 2015-08-08T05:54:58Z | merlimat | val |
netty/netty/4077_4078 | netty/netty | netty/netty/4077 | netty/netty/4078 | [
"timestamp(timedelta=37.0, similarity=0.8739797762840745)"
] | 2a7318ae0301d2541fd4da9f107731f15440e445 | 3151670b54cf0b98dbe3a953dab478b2ad2b550f | [
"@normanmaurer - I recall discussing this when we originally reviewed the pool. I think your rational was since we have to check the health anyways when we pull out of the pool, and the health check may be expensive (may include RTT), we wanted to minimize the number of health checks done. WDYT?\n",
"@Scottmitch... | [
"As discussed in https://github.com/netty/netty/issues/4077#issuecomment-130461684. Perhaps we should make this check configurable (some boolean flag to indicate we should check health when putting back into pool).\n",
"@Scottmitch sure, will do. What do you think the default behavior should be? Check health on r... | 2015-08-12T17:23:13Z | [
"feature"
] | SimpleChannelPool/FixedChannelPool: Don't put unhealthy channel back to the pool on release | Currently it is _required_ to release channel to the channel pool for `FixedChannelPool`.
Failure to release a channel may cause `FixedChannelPool.acquiredChannelCount` to become equal to or greater than `FixedChannelPool.maxConnections`. So basically it means we have to release all channels back, even the ones that are not _healthy_, and when a channel is released it is simply put back into `SimpleChannelPool.deque` regardless of whether it is healthy or not.
Now, on acquire, SimpleChannelPool checks whether a channel is healthy before returning it. If it turns out to be unhealthy, it discards it and polls the next one from the pool (if any are left), and keeps doing so until it gets a healthy channel or the `deque` is empty (in which case it just creates a new channel). Though this sort of lazy-discard approach works, I think there is room for improvement: we can filter out unhealthy channels at the time we release them.
Here is example where such improvement may help:
1) The client application acquires a bunch of channels
2) Before they are released back to the pool, all the channels are closed because the remote tier closed them
3) We end up with a channel pool full of unhealthy channels.
4) The next time we try to acquire a new channel, we first have to clean up/discard all the unhealthy channels, which may take some time depending on how many there are.
It also makes sense to filter out unhealthy channels on release because, in most cases, whoever releases a channel doesn't care much about release performance; acquire performance is the more critical of the two, in my opinion.
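As a rough sketch of the proposed eager filtering (hypothetical names; Netty's pool performs the health check asynchronously via ChannelHealthChecker futures and also closes the discarded channel):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class EagerDiscardPool {
    interface HealthCheck {
        boolean isHealthy(Object channel);
    }

    /** Only healthy channels are offered back; unhealthy ones are dropped on
     *  release so acquire() never has to walk over a pool of dead channels. */
    static boolean release(Deque<Object> pool, Object channel, HealthCheck check) {
        if (!check.isHealthy(channel)) {
            return false; // discard instead of polluting the pool
        }
        return pool.offer(channel);
    }
}
```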
| [
"transport/src/main/java/io/netty/channel/pool/FixedChannelPool.java",
"transport/src/main/java/io/netty/channel/pool/SimpleChannelPool.java"
] | [
"transport/src/main/java/io/netty/channel/pool/FixedChannelPool.java",
"transport/src/main/java/io/netty/channel/pool/SimpleChannelPool.java"
] | [
"transport/src/test/java/io/netty/channel/pool/SimpleChannelPoolTest.java"
] | diff --git a/transport/src/main/java/io/netty/channel/pool/FixedChannelPool.java b/transport/src/main/java/io/netty/channel/pool/FixedChannelPool.java
index bd89bd64612..1f57b7d640d 100644
--- a/transport/src/main/java/io/netty/channel/pool/FixedChannelPool.java
+++ b/transport/src/main/java/io/netty/channel/pool/FixedChannelPool.java
@@ -121,7 +121,34 @@ public FixedChannelPool(Bootstrap bootstrap,
ChannelHealthChecker healthCheck, AcquireTimeoutAction action,
final long acquireTimeoutMillis,
int maxConnections, int maxPendingAcquires) {
- super(bootstrap, handler, healthCheck);
+ this(bootstrap, handler, healthCheck, action, acquireTimeoutMillis, maxConnections, maxPendingAcquires, true);
+ }
+
+ /**
+ * Creates a new instance.
+ *
+ * @param bootstrap the {@link Bootstrap} that is used for connections
+ * @param handler the {@link ChannelPoolHandler} that will be notified for the different pool actions
+ * @param healthCheck the {@link ChannelHealthChecker} that will be used to check if a {@link Channel} is
+ * still healty when obtain from the {@link ChannelPool}
+ * @param action the {@link AcquireTimeoutAction} to use or {@code null} if non should be used.
+ * In this case {@param acquireTimeoutMillis} must be {@code -1}.
+ * @param acquireTimeoutMillis the time (in milliseconds) after which an pending acquire must complete or
+ * the {@link AcquireTimeoutAction} takes place.
+ * @param maxConnections the numnber of maximal active connections, once this is reached new tries to
+ * acquire a {@link Channel} will be delayed until a connection is returned to the
+ * pool again.
+ * @param maxPendingAcquires the maximum number of pending acquires. Once this is exceed acquire tries will
+ * be failed.
+ * @param releaseHealthCheck will check channel health before offering back if this parameter set to
+ * {@code true}.
+ */
+ public FixedChannelPool(Bootstrap bootstrap,
+ ChannelPoolHandler handler,
+ ChannelHealthChecker healthCheck, AcquireTimeoutAction action,
+ final long acquireTimeoutMillis,
+ int maxConnections, int maxPendingAcquires, final boolean releaseHealthCheck) {
+ super(bootstrap, handler, healthCheck, releaseHealthCheck);
if (maxConnections < 1) {
throw new IllegalArgumentException("maxConnections: " + maxConnections + " (expected: >= 1)");
}
diff --git a/transport/src/main/java/io/netty/channel/pool/SimpleChannelPool.java b/transport/src/main/java/io/netty/channel/pool/SimpleChannelPool.java
index 311279a2292..72cd7cf9b89 100644
--- a/transport/src/main/java/io/netty/channel/pool/SimpleChannelPool.java
+++ b/transport/src/main/java/io/netty/channel/pool/SimpleChannelPool.java
@@ -43,13 +43,17 @@
public class SimpleChannelPool implements ChannelPool {
private static final AttributeKey<SimpleChannelPool> POOL_KEY = AttributeKey.newInstance("channelPool");
private static final IllegalStateException FULL_EXCEPTION = new IllegalStateException("ChannelPool full");
+ private static final IllegalStateException UNHEALTHY_NON_OFFERED_TO_POOL =
+ new IllegalStateException("Channel is unhealthy not offering it back to pool");
static {
FULL_EXCEPTION.setStackTrace(EmptyArrays.EMPTY_STACK_TRACE);
+ UNHEALTHY_NON_OFFERED_TO_POOL.setStackTrace(EmptyArrays.EMPTY_STACK_TRACE);
}
private final Deque<Channel> deque = PlatformDependent.newConcurrentDeque();
private final ChannelPoolHandler handler;
private final ChannelHealthChecker healthCheck;
private final Bootstrap bootstrap;
+ private final boolean releaseHealthCheck;
/**
* Creates a new instance using the {@link ChannelHealthChecker#ACTIVE}.
@@ -67,11 +71,27 @@ public SimpleChannelPool(Bootstrap bootstrap, final ChannelPoolHandler handler)
* @param bootstrap the {@link Bootstrap} that is used for connections
* @param handler the {@link ChannelPoolHandler} that will be notified for the different pool actions
* @param healthCheck the {@link ChannelHealthChecker} that will be used to check if a {@link Channel} is
- * still healty when obtain from the {@link ChannelPool}
+ * still healthy when obtain from the {@link ChannelPool}
*/
public SimpleChannelPool(Bootstrap bootstrap, final ChannelPoolHandler handler, ChannelHealthChecker healthCheck) {
+ this(bootstrap, handler, healthCheck, true);
+ }
+
+ /**
+ * Creates a new instance.
+ *
+ * @param bootstrap the {@link Bootstrap} that is used for connections
+ * @param handler the {@link ChannelPoolHandler} that will be notified for the different pool actions
+ * @param healthCheck the {@link ChannelHealthChecker} that will be used to check if a {@link Channel} is
+ * still healthy when obtain from the {@link ChannelPool}
+ * @param releaseHealthCheck will offercheck channel health before offering back if this parameter set to
+ * {@code true}.
+ */
+ public SimpleChannelPool(Bootstrap bootstrap, final ChannelPoolHandler handler, ChannelHealthChecker healthCheck,
+ boolean releaseHealthCheck) {
this.handler = checkNotNull(handler, "handler");
this.healthCheck = checkNotNull(healthCheck, "healthCheck");
+ this.releaseHealthCheck = releaseHealthCheck;
// Clone the original Bootstrap as we want to set our own handler
this.bootstrap = checkNotNull(bootstrap, "bootstrap").clone();
this.bootstrap.handler(new ChannelInitializer<Channel>() {
@@ -183,9 +203,9 @@ private void notifyHealthCheck(Future<Boolean> future, Channel ch, Promise<Chann
}
/**
- * Bootstrap a new {@link Channel}. The default implementation uses {@link Bootstrap#connect()},
- * sub-classes may override this.
- *
+ * Bootstrap a new {@link Channel}. The default implementation uses {@link Bootstrap#connect()}, sub-classes may
+ * override this.
+ * <p>
* The {@link Bootstrap} that is passed in here is cloned via {@link Bootstrap#clone()}, so it is safe to modify.
*/
protected ChannelFuture connectChannel(Bootstrap bs) {
@@ -230,11 +250,10 @@ private void doReleaseChannel(Channel channel, Promise<Void> promise) {
promise);
} else {
try {
- if (offerChannel(channel)) {
- handler.channelReleased(channel);
- promise.setSuccess(null);
+ if (releaseHealthCheck) {
+ doHealthCheckOnRelease(channel, promise);
} else {
- closeAndFail(channel, FULL_EXCEPTION, promise);
+ releaseAndOffer(channel, promise);
}
} catch (Throwable cause) {
closeAndFail(channel, cause, promise);
@@ -242,6 +261,46 @@ private void doReleaseChannel(Channel channel, Promise<Void> promise) {
}
}
+ private void doHealthCheckOnRelease(final Channel channel, final Promise<Void> promise) throws Exception {
+ final Future<Boolean> f = healthCheck.isHealthy(channel);
+ if (f.isDone()) {
+ releaseAndOfferIfHealthy(channel, promise, f);
+ } else {
+ f.addListener(new FutureListener<Boolean>() {
+ @Override
+ public void operationComplete(Future<Boolean> future) throws Exception {
+ releaseAndOfferIfHealthy(channel, promise, f);
+ }
+ });
+ }
+ }
+
+ /**
+ * Adds the channel back to the pool only if the channel is healty.
+ * @param channel the channel to put back to the pool
+ * @param promise offer operation promise.
+ * @param future the future that contains information fif channel is healthy or not.
+ * @throws Exception in case when failed to notify handler about release operation.
+ */
+ private void releaseAndOfferIfHealthy(Channel channel, Promise<Void> promise, Future<Boolean> future)
+ throws Exception {
+ if (future.getNow()) { //channel turns out to be healthy, offering and releasing it.
+ releaseAndOffer(channel, promise);
+ } else { //channel ont healthy, just releasing it.
+ handler.channelReleased(channel);
+ closeAndFail(channel, UNHEALTHY_NON_OFFERED_TO_POOL, promise);
+ }
+ }
+
+ private void releaseAndOffer(Channel channel, Promise<Void> promise) throws Exception {
+ if (offerChannel(channel)) {
+ handler.channelReleased(channel);
+ promise.setSuccess(null);
+ } else {
+ closeAndFail(channel, FULL_EXCEPTION, promise);
+ }
+ }
+
private static void closeChannel(Channel channel) {
channel.attr(POOL_KEY).getAndSet(null);
channel.close();
| diff --git a/transport/src/test/java/io/netty/channel/pool/SimpleChannelPoolTest.java b/transport/src/test/java/io/netty/channel/pool/SimpleChannelPoolTest.java
index 20d024ddd88..20ca46b7837 100644
--- a/transport/src/test/java/io/netty/channel/pool/SimpleChannelPoolTest.java
+++ b/transport/src/test/java/io/netty/channel/pool/SimpleChannelPoolTest.java
@@ -25,7 +25,11 @@
import io.netty.channel.local.LocalAddress;
import io.netty.channel.local.LocalChannel;
import io.netty.channel.local.LocalServerChannel;
+import io.netty.util.concurrent.Future;
+import org.hamcrest.CoreMatchers;
+import org.junit.Rule;
import org.junit.Test;
+import org.junit.rules.ExpectedException;
import java.util.Queue;
import java.util.concurrent.LinkedBlockingQueue;
@@ -35,6 +39,9 @@
public class SimpleChannelPoolTest {
private static final String LOCAL_ADDR_ID = "test.id";
+ @Rule
+ public ExpectedException expectedException = ExpectedException.none();
+
@Test
public void testAcquire() throws Exception {
EventLoopGroup group = new DefaultEventLoopGroup();
@@ -142,4 +149,94 @@ protected boolean offerChannel(Channel ch) {
channel2.close().sync();
group.shutdownGracefully();
}
+
+ /**
+ * Tests that if channel was unhealthy it is not offered back to the pool.
+ *
+ * @throws Exception
+ */
+ @Test
+ public void testUnhealthyChannelIsNotOffered() throws Exception {
+ EventLoopGroup group = new DefaultEventLoopGroup();
+ LocalAddress addr = new LocalAddress(LOCAL_ADDR_ID);
+ Bootstrap cb = new Bootstrap();
+ cb.remoteAddress(addr);
+ cb.group(group)
+ .channel(LocalChannel.class);
+
+ ServerBootstrap sb = new ServerBootstrap();
+ sb.group(group)
+ .channel(LocalServerChannel.class)
+ .childHandler(new ChannelInitializer<LocalChannel>() {
+ @Override
+ public void initChannel(LocalChannel ch) throws Exception {
+ ch.pipeline().addLast(new ChannelHandlerAdapter());
+ }
+ });
+
+ // Start server
+ Channel sc = sb.bind(addr).syncUninterruptibly().channel();
+ ChannelPoolHandler handler = new CountingChannelPoolHandler();
+ ChannelPool pool = new SimpleChannelPool(cb, handler);
+ Channel channel1 = pool.acquire().syncUninterruptibly().getNow();
+ pool.release(channel1).syncUninterruptibly();
+ Channel channel2 = pool.acquire().syncUninterruptibly().getNow();
+ //first check that when returned healthy then it actually offered back to the pool.
+ assertSame(channel1, channel2);
+
+ expectedException.expect(IllegalStateException.class);
+ channel1.close().syncUninterruptibly();
+ try {
+ pool.release(channel1).syncUninterruptibly();
+ } catch (Exception e) {
+ throw e;
+ } finally {
+ sc.close().syncUninterruptibly();
+ channel2.close().syncUninterruptibly();
+ group.shutdownGracefully();
+ }
+ }
+
+ /**
+ * Tests that if channel was unhealthy it is was offered back to the pool because
+ * it was requested not to validate channel health on release.
+ *
+ * @throws Exception
+ */
+ @Test
+ public void testUnhealthyChannelIsOfferedWhenNoHealthCheckRequested() throws Exception {
+ EventLoopGroup group = new DefaultEventLoopGroup();
+ LocalAddress addr = new LocalAddress(LOCAL_ADDR_ID);
+ Bootstrap cb = new Bootstrap();
+ cb.remoteAddress(addr);
+ cb.group(group)
+ .channel(LocalChannel.class);
+
+ ServerBootstrap sb = new ServerBootstrap();
+ sb.group(group)
+ .channel(LocalServerChannel.class)
+ .childHandler(new ChannelInitializer<LocalChannel>() {
+ @Override
+ public void initChannel(LocalChannel ch) throws Exception {
+ ch.pipeline().addLast(new ChannelHandlerAdapter());
+ }
+ });
+
+ // Start server
+ Channel sc = sb.bind(addr).syncUninterruptibly().channel();
+ ChannelPoolHandler handler = new CountingChannelPoolHandler();
+ ChannelPool pool = new SimpleChannelPool(cb, handler, ChannelHealthChecker.ACTIVE, false);
+ Channel channel1 = pool.acquire().syncUninterruptibly().getNow();
+ channel1.close().syncUninterruptibly();
+ Future<Void> releaseFuture =
+ pool.release(channel1, channel1.eventLoop().<Void>newPromise()).syncUninterruptibly();
+ assertThat(releaseFuture.isSuccess(), CoreMatchers.is(true));
+
+ Channel channel2 = pool.acquire().syncUninterruptibly().getNow();
+ //verifying that in fact the channel2 is different that means is not pulled from the pool
+ assertNotSame(channel1, channel2);
+ sc.close().syncUninterruptibly();
+ channel2.close().syncUninterruptibly();
+ group.shutdownGracefully();
+ }
}
| train | train | 2015-08-17T19:12:10 | 2015-08-12T05:47:57Z | ioanbsu | val |
netty/netty/4085_4092 | netty/netty | netty/netty/4085 | netty/netty/4092 | [
"timestamp(timedelta=29.0, similarity=0.8904482250405097)"
] | f1a4454bf13487755395f30b3dcab16ab47b0bc6 | c0dd184ecdeac3689b4edbb78cfe9d92a7698fcb | [
"@yawkat can you tell me what you mean with \"abstract domain socket\" ?\n",
"See [this man page](http://man7.org/linux/man-pages/man7/unix.7.html). Basically, domain socket names allow `\\0` bytes in them, and on linux, a domain socket starting with `\\0` does not appear in the file system and is called abstract... | [
"Use : getBytes(CharsetUtil.UTF8)\n",
"Use : getBytes(CharsetUtil.UTF8)\n"
] | 2015-08-15T21:45:09Z | [
"feature"
] | Support abstract domain sockets using epoll on linux | Version: 5.0.0.Alpha2
Context:
I am trying to write a client for dbus on linux and need to support abstract domain sockets to properly connect to the native dbus servers.
It is currently not possible to create an abstract domain socket using the epoll selector. While creating normal domain sockets works fine, an attempt to prepend a `\0` to open an abstract socket leads to an IOException when binding or connecting since the `GetStringUTFChars` used in the [source](https://github.com/netty/netty/blob/master/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c#L1491) will encode the `\0` character. There could also be issues with the `strlen` and `strncpy` operations if this was moved to a different encode operation.
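For illustration, here is a hedged sketch (hypothetical helper, not Netty API) of how an abstract-namespace name is laid out as raw `sun_path` bytes — which is why the address must cross JNI as a byte array rather than a NUL-terminated string:

```java
import java.nio.charset.StandardCharsets;

public class AbstractDomainSocketName {
    /** Encodes a Linux abstract-namespace socket name as sun_path bytes:
     *  a leading NUL byte, then the name, with no trailing terminator.
     *  C string functions (strlen/strncpy) would truncate this at byte 0. */
    public static byte[] encode(String name) {
        byte[] nameBytes = name.getBytes(StandardCharsets.UTF_8);
        byte[] sunPath = new byte[nameBytes.length + 1];
        sunPath[0] = 0; // '\0' prefix selects the abstract namespace
        System.arraycopy(nameBytes, 0, sunPath, 1, nameBytes.length);
        return sunPath;
    }
}
```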
```
$ java -version
openjdk version "1.8.0_51"
OpenJDK Runtime Environment (build 1.8.0_51-b16)
OpenJDK 64-Bit Server VM (build 25.51-b03, mixed mode)
```
Operating System: Arch Linux 64-bit
```
$ uname -a
Linux ylt 4.1.4-1-ARCH #1 SMP PREEMPT Mon Aug 3 21:30:37 UTC 2015 x86_64 GNU/Linux
```
| [
"transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c",
"transport-native-epoll/src/main/java/io/netty/channel/epoll/Native.java"
] | [
"transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c",
"transport-native-epoll/src/main/java/io/netty/channel/epoll/Native.java"
] | [
"transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollAbstractDomainSocketEchoTest.java"
] | diff --git a/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c b/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c
index 480bf50a1c7..4dc579ee9ef 100644
--- a/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c
+++ b/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c
@@ -1482,21 +1482,29 @@ JNIEXPORT jint JNICALL Java_io_netty_channel_epoll_Native_socketDomain(JNIEnv* e
return fd;
}
-JNIEXPORT jint JNICALL Java_io_netty_channel_epoll_Native_bindDomainSocket(JNIEnv* env, jclass clazz, jint fd, jstring socketPath) {
+// macro to calculate the length of a sockaddr_un struct for a given path length.
+// see sys/un.h#SUN_LEN, this is modified to allow nul bytes
+#define _UNIX_ADDR_LENGTH(path_len) (((struct sockaddr_un *) 0)->sun_path) + path_len
+
+JNIEXPORT jint JNICALL Java_io_netty_channel_epoll_Native_bindDomainSocket(JNIEnv* env, jclass clazz, jint fd, jbyteArray socketPath) {
struct sockaddr_un addr;
memset(&addr, 0, sizeof(addr));
addr.sun_family = AF_UNIX;
- const char* socket_path = (*env)->GetStringUTFChars(env, socketPath, 0);
- memcpy(addr.sun_path, socket_path, strlen(socket_path));
+ const jbyte* socket_path = (*env)->GetByteArrayElements(env, socketPath, 0);
+ jint socket_path_len = (*env)->GetArrayLength(env, socketPath);
+ if (socket_path_len > sizeof(addr.sun_path)) {
+ socket_path_len = sizeof(addr.sun_path);
+ }
+ memcpy(addr.sun_path, socket_path, socket_path_len);
if (unlink(socket_path) == -1 && errno != ENOENT) {
return -errno;
}
- int res = bind(fd, (struct sockaddr*) &addr, sizeof(addr));
- (*env)->ReleaseStringUTFChars(env, socketPath, socket_path);
+ int res = bind(fd, (struct sockaddr*) &addr, _UNIX_ADDR_LENGTH(socket_path_len));
+ (*env)->ReleaseByteArrayElements(env, socketPath, socket_path, 0);
if (res == -1) {
return -errno;
@@ -1504,22 +1512,27 @@ JNIEXPORT jint JNICALL Java_io_netty_channel_epoll_Native_bindDomainSocket(JNIEn
return res;
}
-JNIEXPORT jint JNICALL Java_io_netty_channel_epoll_Native_connectDomainSocket(JNIEnv* env, jclass clazz, jint fd, jstring socketPath) {
+JNIEXPORT jint JNICALL Java_io_netty_channel_epoll_Native_connectDomainSocket(JNIEnv* env, jclass clazz, jint fd, jbyteArray socketPath) {
struct sockaddr_un addr;
+ jint socket_path_len;
memset(&addr, 0, sizeof(addr));
addr.sun_family = AF_UNIX;
- const char* socket_path = (*env)->GetStringUTFChars(env, socketPath, 0);
- strncpy(addr.sun_path, socket_path, sizeof(addr.sun_path) - 1);
+ const jbyte* socket_path = (*env)->GetByteArrayElements(env, socketPath, 0);
+ socket_path_len = (*env)->GetArrayLength(env, socketPath);
+ if (socket_path_len > sizeof(addr.sun_path)) {
+ socket_path_len = sizeof(addr.sun_path);
+ }
+ memcpy(addr.sun_path, socket_path, socket_path_len);
int res;
int err;
do {
- res = connect(fd, (struct sockaddr*) &addr, sizeof(addr));
+ res = connect(fd, (struct sockaddr*) &addr, _UNIX_ADDR_LENGTH(socket_path_len));
} while (res == -1 && ((err = errno) == EINTR));
- (*env)->ReleaseStringUTFChars(env, socketPath, socket_path);
+ (*env)->ReleaseByteArrayElements(env, socketPath, socket_path, 0);
if (res < 0) {
return -err;
@@ -1688,4 +1701,4 @@ JNIEXPORT jint JNICALL Java_io_netty_channel_epoll_Native_splice0(JNIEnv* env, j
JNIEXPORT jlong JNICALL Java_io_netty_channel_epoll_Native_ssizeMax(JNIEnv* env, jclass clazz) {
return SSIZE_MAX;
-}
\ No newline at end of file
+}
diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/Native.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/Native.java
index 492d3f86804..4def6e42b0e 100644
--- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/Native.java
+++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/Native.java
@@ -19,6 +19,7 @@
import io.netty.channel.ChannelException;
import io.netty.channel.DefaultFileRegion;
import io.netty.channel.unix.DomainSocketAddress;
+import io.netty.util.CharsetUtil;
import io.netty.util.internal.EmptyArrays;
import io.netty.util.internal.NativeLibraryLoader;
import io.netty.util.internal.PlatformDependent;
@@ -435,7 +436,7 @@ public static void bind(int fd, SocketAddress socketAddress) throws IOException
}
} else if (socketAddress instanceof DomainSocketAddress) {
DomainSocketAddress addr = (DomainSocketAddress) socketAddress;
- int res = bindDomainSocket(fd, addr.path());
+ int res = bindDomainSocket(fd, addr.path().getBytes(CharsetUtil.UTF_8));
if (res < 0) {
throw newIOException("bind", res);
}
@@ -445,7 +446,7 @@ public static void bind(int fd, SocketAddress socketAddress) throws IOException
}
private static native int bind(int fd, byte[] address, int scopeId, int port);
- private static native int bindDomainSocket(int fd, String path);
+ private static native int bindDomainSocket(int fd, byte[] path);
public static void listen(int fd, int backlog) throws IOException {
int res = listen0(fd, backlog);
@@ -464,7 +465,7 @@ public static boolean connect(int fd, SocketAddress socketAddress) throws IOExce
res = connect(fd, address.address, address.scopeId, inetSocketAddress.getPort());
} else if (socketAddress instanceof DomainSocketAddress) {
DomainSocketAddress unixDomainSocketAddress = (DomainSocketAddress) socketAddress;
- res = connectDomainSocket(fd, unixDomainSocketAddress.path());
+ res = connectDomainSocket(fd, unixDomainSocketAddress.path().getBytes(CharsetUtil.UTF_8));
} else {
throw new Error("Unexpected SocketAddress implementation " + socketAddress);
}
@@ -479,7 +480,7 @@ public static boolean connect(int fd, SocketAddress socketAddress) throws IOExce
}
private static native int connect(int fd, byte[] address, int scopeId, int port);
- private static native int connectDomainSocket(int fd, String path);
+ private static native int connectDomainSocket(int fd, byte[] path);
public static boolean finishConnect(int fd) throws IOException {
int res = finishConnect0(fd);
| diff --git a/transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollAbstractDomainSocketEchoTest.java b/transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollAbstractDomainSocketEchoTest.java
new file mode 100644
index 00000000000..ea0deb94a65
--- /dev/null
+++ b/transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollAbstractDomainSocketEchoTest.java
@@ -0,0 +1,28 @@
+/*
+ * Copyright 2015 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License,
+ * version 2.0 (the "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+package io.netty.channel.epoll;
+
+import io.netty.channel.unix.DomainSocketAddress;
+import java.net.SocketAddress;
+import java.util.UUID;
+
+public class EpollAbstractDomainSocketEchoTest extends EpollDomainSocketEchoTest {
+ @Override
+ protected SocketAddress newSocketAddress() {
+ // these don't actually show up in the file system so creating a temp file isn't reliable
+ return new DomainSocketAddress("\0/tmp/" + UUID.randomUUID());
+ }
+}
| val | train | 2015-08-15T02:07:47 | 2015-08-12T22:09:14Z | yawkat | val |
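The patch above switches the JNI path argument from `String` to `byte[]` with an explicit length, precisely so that abstract-namespace socket paths (a leading NUL byte, as in the new test's `"\0/tmp/" + UUID` address) survive the trip into `sockaddr_un.sun_path` instead of being truncated by `strlen`. A minimal standalone sketch of why the NUL matters — class name and demo path are illustrative only, not part of the patch:

```java
import java.nio.charset.StandardCharsets;

public class AbstractPathDemo {
    public static void main(String[] args) {
        // Abstract-namespace unix socket paths begin with a NUL byte,
        // mirroring the test's "\0/tmp/<uuid>" address (path is illustrative).
        String abstractPath = "\0/tmp/demo";
        byte[] raw = abstractPath.getBytes(StandardCharsets.UTF_8);

        // Passed to JNI as byte[] plus GetArrayLength, the NUL survives.
        // The old String-based code used strlen on the C side, which would
        // have seen a zero-length string and copied nothing.
        System.out.println(raw.length); // 10: 1 NUL byte + 9 path bytes
        System.out.println(raw[0]);     // 0
    }
}
```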
netty/netty/4091_4102 | netty/netty | netty/netty/4091 | netty/netty/4102 | [
"timestamp(timedelta=9.0, similarity=0.8442024319863696)"
] | ad0b7ca56db391248c95ff2c0e24793cc3a4d9d5 | 380eee5e769abf607c40caaf6d6ba20f6247d6c7 | [
"@rkapsi thx will check\n",
"@normanmaurer - I got this one.\n",
"Interesting, if I run that test from the CLI it's eventually starting to spew these ones...\n\n```\njava.nio.channels.ClosedChannelException\n java.nio.channels.ClosedChannelException\n java.nio.channels.ClosedChannelException\n java.io.... | [] | 2015-08-19T20:03:22Z | [
"defect"
] | Epoll HttpServer ClosedChannelException death spiral | There appears to be something like a ClosedChannelException death spiral in conjunction with chunked HTTP POSTs and Epoll.
It starts if there is a ChannelHandler that throws an Exception and another ChannelHandler that catches it and attempts to close the Channel. The server will then spin for a bit throwing ClosedChannelExceptions.
The same test using NIO doesn't exhibit this behavior.
```
Netty: 4.1.0.Beta6-SNAPSHOT
$ java -version
java version "1.8.0_40"
Java(TM) SE Runtime Environment (build 1.8.0_40-b25)
Java HotSpot(TM) 64-Bit Server VM (build 25.40-b25, mixed mode)
$ uname -a
Linux ardverk 4.0.7-2-ARCH #1 SMP PREEMPT Tue Jun 30 07:50:21 UTC 2015 x86_64 GNU/Linux
```
```
public class ChunkedPostWithEpollTest {
private static final boolean USE_EPOLL = true;
@Test
public void httpChunkedPostWithEpollServer() throws Exception {
Channel server = newServer(8080);
try (CloseableHttpClient client = HttpClientBuilder.create().build()) {
HttpPost post = new HttpPost("http://localhost:8080");
post.setEntity(createChunkedOfSize(1));
client.execute(post);
} finally {
server.close();
}
}
private static Channel newServer(int port) throws Exception {
ServerBootstrap serverBootstrap = new ServerBootstrap();
EventLoopGroup acceptorGroup = USE_EPOLL
? new EpollEventLoopGroup()
: new NioEventLoopGroup();
EventLoopGroup workerGroup = USE_EPOLL
? new EpollEventLoopGroup()
: new NioEventLoopGroup();
return serverBootstrap.group(acceptorGroup, workerGroup)
.channel(USE_EPOLL
? EpollServerSocketChannel.class
: NioServerSocketChannel.class)
.childHandler(new MyInitializer())
                .bind(port)
.sync()
.channel();
}
private static HttpEntity createChunkedOfSize(int mb) {
int bytes = mb * 1024 * 1024;
byte[] buffer = new byte[bytes];
ByteArrayEntity entity = new ByteArrayEntity(buffer, ContentType.APPLICATION_OCTET_STREAM);
entity.setChunked(true);
return entity;
}
@Sharable
private static class MyInitializer extends ChannelInitializer<Channel> {
@Override
protected void initChannel(Channel ch) throws Exception {
ChannelPipeline pipeline = ch.pipeline();
pipeline.addLast(new HttpServerCodec());
pipeline.addLast(new BuggyChannelHandler());
pipeline.addLast(new ExceptionHandler());
}
}
@Sharable
private static class BuggyChannelHandler extends ChannelInboundHandlerAdapter {
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
throw new NullPointerException("I am a bug!");
}
}
@Sharable
private static class ExceptionHandler extends ChannelInboundHandlerAdapter {
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
cause.printStackTrace();
ctx.close();
}
}
}
```
```
[main] DEBUG io.netty.channel.MultithreadEventLoopGroup - -Dio.netty.eventLoopThreads: 24
[main] DEBUG io.netty.util.NetUtil - Loopback interface: lo (lo, 0:0:0:0:0:0:0:1%lo)
[main] DEBUG io.netty.util.NetUtil - /proc/sys/net/core/somaxconn: 128
[main] DEBUG io.netty.channel.DefaultChannelId - -Dio.netty.processId: 9147 (auto-detected)
[main] DEBUG io.netty.channel.DefaultChannelId - -Dio.netty.machineId: 40:6c:8f:ff:fe:b9:25:55 (auto-detected)
[main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numHeapArenas: 24
[main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numDirectArenas: 24
[main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.pageSize: 8192
[main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxOrder: 11
[main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.chunkSize: 16777216
[main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.tinyCacheSize: 512
[main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.smallCacheSize: 256
[main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.normalCacheSize: 64
[main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxCachedBufferCapacity: 32768
[main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.cacheTrimInterval: 8192
[main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.allocator.type: pooled
[main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.threadLocalDirectBufferSize: 65536
[epollEventLoopGroup-3-1] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetectionLevel: simple
[epollEventLoopGroup-3-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.maxCapacity: 262144
java.lang.NullPointerException: I am a bug!
at ChunkedPostWithEpollTest$BuggyChannelHandler.channelRead(ChunkedPostWithEpollTest.java:80)
at io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
at io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:153)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:157)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:244)
at io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
at io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:153)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:157)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:946)
at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:800)
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:345)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:253)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:699)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)
java.lang.NullPointerException: I am a bug!
at ChunkedPostWithEpollTest$BuggyChannelHandler.channelRead(ChunkedPostWithEpollTest.java:80)
at io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
at io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:153)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:157)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:244)
at io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
at io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:153)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:157)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:946)
at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:800)
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:345)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:253)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:699)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)
java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
...
```
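The exception flood above stems from edge-triggered epoll: readiness is signalled only once, so the old code caught exceptions inside the read loop and kept reading — and kept firing `exceptionCaught` — even after the channel had been closed. The fix in the patch below moves the catch outside the loop and schedules at most one follow-up `epollInReady()` pass via `checkResetEpollIn(...)`, skipping it once input is shut down. A toy, Netty-free model of that re-scheduling pattern (all names here are illustrative, not Netty API):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Queue;

public class EtReadModel {
    // Pending chunks the kernel would hand out until drained (ET semantics:
    // no further readiness notification while data remains).
    private final Queue<String> kernelBuffer = new ArrayDeque<>();
    // Tasks queued on the (single-threaded) event loop.
    private final Queue<Runnable> eventLoopTasks = new ArrayDeque<>();
    final List<String> delivered = new ArrayList<>();

    EtReadModel(String... chunks) {
        kernelBuffer.addAll(Arrays.asList(chunks));
    }

    // One read pass; a chunk starting with "bad" simulates a throwing handler.
    void epollInReady() {
        try {
            String chunk;
            while ((chunk = kernelBuffer.poll()) != null) {
                if (chunk.startsWith("bad")) {
                    throw new IllegalStateException("handler failed on " + chunk);
                }
                delivered.add(chunk);
            }
        } catch (RuntimeException e) {
            // With ET there will be no further readiness event, so the channel
            // itself must schedule another pass (cf. checkResetEpollIn).
            eventLoopTasks.add(this::epollInReady);
        }
    }

    void runPendingTasks() {
        Runnable task;
        while ((task = eventLoopTasks.poll()) != null) {
            task.run();
        }
    }

    public static void main(String[] args) {
        EtReadModel model = new EtReadModel("a", "bad-chunk", "b", "c");
        model.epollInReady();    // delivers "a", handler throws on "bad-chunk"
        model.runPendingTasks(); // rescheduled pass drains "b" and "c"
        System.out.println(model.delivered); // [a, b, c]
    }
}
```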
| [
"transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollChannel.java",
"transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollServerChannel.java",
"transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollStreamChannel.java",
"transport-native-epoll/src/mai... | [
"transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollChannel.java",
"transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollServerChannel.java",
"transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollStreamChannel.java",
"transport-native-epoll/src/mai... | [
"transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollSocketChannelTest.java"
] | diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollChannel.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollChannel.java
index d3faad8d2a1..491e79dcaf8 100644
--- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollChannel.java
+++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollChannel.java
@@ -311,6 +311,23 @@ protected abstract class AbstractEpollUnsafe extends AbstractUnsafe {
*/
abstract void epollInReady();
+ /**
+ * Will schedule a {@link #epollInReady()} call on the event loop if necessary.
+ * @param edgeTriggered {@code true} if the channel is using ET mode. {@code false} otherwise.
+ */
+ final void checkResetEpollIn(boolean edgeTriggered) {
+ if (edgeTriggered && !isInputShutdown0()) {
+ // trigger a read again as there may be something left to read and because of epoll ET we
+ // will not get notified again until we read everything from the socket
+ eventLoop().execute(new OneTimeTask() {
+ @Override
+ public void run() {
+ epollInReady();
+ }
+ });
+ }
+ }
+
/**
* Called once EPOLLRDHUP event is ready to be processed
*/
diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollServerChannel.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollServerChannel.java
index eaeded0d16a..d277a441581 100644
--- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollServerChannel.java
+++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollServerChannel.java
@@ -73,7 +73,7 @@ protected Object filterOutboundMessage(Object msg) throws Exception {
abstract Channel newChildChannel(int fd, byte[] remote, int offset, int len) throws Exception;
final class EpollServerSocketUnsafe extends AbstractEpollUnsafe {
- // Will hold the remote address after accept(...) was sucesssful.
+ // Will hold the remote address after accept(...) was successful.
// We need 24 bytes for the address as maximum + 1 byte for storing the length.
// So use 26 bytes as it's a power of two.
private final byte[] acceptedAddress = new byte[26];
@@ -117,16 +117,8 @@ void epollInReady() {
readPending = false;
allocHandle.incMessagesRead(1);
- try {
- int len = acceptedAddress[0];
- pipeline.fireChannelRead(newChildChannel(socketFd, acceptedAddress, 1, len));
- } catch (Throwable t) {
- if (edgeTriggered) { // We must keep reading if ET is enabled
- pipeline.fireExceptionCaught(t);
- } else {
- throw t;
- }
- }
+ int len = acceptedAddress[0];
+ pipeline.fireChannelRead(newChildChannel(socketFd, acceptedAddress, 1, len));
} while (allocHandle.continueReading());
} catch (Throwable t) {
exception = t;
@@ -136,6 +128,7 @@ void epollInReady() {
if (exception != null) {
pipeline.fireExceptionCaught(exception);
+ checkResetEpollIn(edgeTriggered);
}
} finally {
// Check if there is a readPending which was not processed yet.
diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollStreamChannel.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollStreamChannel.java
index 1747fb2305c..50fb2a5df4f 100644
--- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollStreamChannel.java
+++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollStreamChannel.java
@@ -587,7 +587,7 @@ private void safeClosePipe(int pipe) {
}
class EpollStreamUnsafe extends AbstractEpollUnsafe {
- private boolean handleReadException(ChannelPipeline pipeline, ByteBuf byteBuf, Throwable cause, boolean close) {
+ private void handleReadException(ChannelPipeline pipeline, ByteBuf byteBuf, Throwable cause, boolean close) {
if (byteBuf != null) {
if (byteBuf.isReadable()) {
readPending = false;
@@ -601,9 +601,7 @@ private boolean handleReadException(ChannelPipeline pipeline, ByteBuf byteBuf, T
pipeline.fireExceptionCaught(cause);
if (close || cause instanceof IOException) {
shutdownInput();
- return true;
}
- return false;
}
@Override
@@ -769,48 +767,35 @@ void epollInReady() {
boolean close = false;
try {
do {
- try {
- SpliceInTask spliceTask = spliceQueue.peek();
- if (spliceTask != null) {
- if (spliceTask.spliceIn(allocHandle)) {
- // We need to check if it is still active as if not we removed all SpliceTasks in
- // doClose(...)
- if (isActive()) {
- spliceQueue.remove();
- }
- continue;
- } else {
- break;
+ SpliceInTask spliceTask = spliceQueue.peek();
+ if (spliceTask != null) {
+ if (spliceTask.spliceIn(allocHandle)) {
+ // We need to check if it is still active as if not we removed all SpliceTasks in
+ // doClose(...)
+ if (isActive()) {
+ spliceQueue.remove();
}
- }
-
- // we use a direct buffer here as the native implementations only be able
- // to handle direct buffers.
- byteBuf = allocHandle.allocate(allocator);
- allocHandle.lastBytesRead(doReadBytes(byteBuf));
- if (allocHandle.lastBytesRead() <= 0) {
- // nothing was read, release the buffer.
- byteBuf.release();
- byteBuf = null;
- close = allocHandle.lastBytesRead() < 0;
+ continue;
+ } else {
break;
}
- readPending = false;
- allocHandle.incMessagesRead(1);
- pipeline.fireChannelRead(byteBuf);
+ }
+
+ // we use a direct buffer here as the native implementations only be able
+ // to handle direct buffers.
+ byteBuf = allocHandle.allocate(allocator);
+ allocHandle.lastBytesRead(doReadBytes(byteBuf));
+ if (allocHandle.lastBytesRead() <= 0) {
+ // nothing was read, release the buffer.
+ byteBuf.release();
byteBuf = null;
- } catch (Throwable t) {
- if (edgeTriggered) { // We must keep reading if ET is enabled
- if (byteBuf != null) {
- byteBuf.release();
- byteBuf = null;
- }
- pipeline.fireExceptionCaught(t);
- } else {
- // byteBuf is release in outer exception handling if necessary.
- throw t;
- }
+ close = allocHandle.lastBytesRead() < 0;
+ break;
}
+ readPending = false;
+ allocHandle.incMessagesRead(1);
+ pipeline.fireChannelRead(byteBuf);
+ byteBuf = null;
} while (allocHandle.continueReading());
allocHandle.readComplete();
@@ -821,17 +806,8 @@ void epollInReady() {
close = false;
}
} catch (Throwable t) {
- boolean closed = handleReadException(pipeline, byteBuf, t, close);
- if (!closed) {
- // trigger a read again as there may be something left to read and because of epoll ET we
- // will not get notified again until we read everything from the socket
- eventLoop().execute(new OneTimeTask() {
- @Override
- public void run() {
- epollInReady();
- }
- });
- }
+ handleReadException(pipeline, byteBuf, t, close);
+ checkResetEpollIn(edgeTriggered);
} finally {
// Check if there is a readPending which was not processed yet.
// This could be for two reasons:
diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollDatagramChannel.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollDatagramChannel.java
index 3681024e3a7..a9c5e976cf4 100644
--- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollDatagramChannel.java
+++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollDatagramChannel.java
@@ -535,9 +535,9 @@ void epollInReady() {
Throwable exception = null;
try {
- do {
- ByteBuf data = null;
- try {
+ ByteBuf data = null;
+ try {
+ do {
data = allocHandle.allocate(allocator);
allocHandle.attemptedBytesRead(data.writableBytes());
final DatagramSocketAddress remoteAddress;
@@ -564,21 +564,14 @@ void epollInReady() {
readBuf.add(new DatagramPacket(data, (InetSocketAddress) localAddress(), remoteAddress));
data = null;
- } catch (Throwable t) {
- if (data != null) {
- data.release();
- data = null;
- }
- if (edgeTriggered) {
- // We do not break from the loop here and remember the last exception,
- // because we need to consume everything from the socket used with epoll ET.
- pipeline.fireExceptionCaught(t);
- } else {
- exception = t;
- break;
- }
+ } while (allocHandle.continueReading());
+ } catch (Throwable t) {
+ if (data != null) {
+ data.release();
+ data = null;
}
- } while (allocHandle.continueReading());
+ exception = t;
+ }
int size = readBuf.size();
for (int i = 0; i < size; i ++) {
@@ -590,6 +583,7 @@ void epollInReady() {
if (exception != null) {
pipeline.fireExceptionCaught(exception);
+ checkResetEpollIn(edgeTriggered);
}
} finally {
// Check if there is a readPending which was not processed yet.
diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollDomainSocketChannel.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollDomainSocketChannel.java
index fb94e2a85bb..63c130dc03e 100644
--- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollDomainSocketChannel.java
+++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollDomainSocketChannel.java
@@ -157,16 +157,7 @@ private void epollInReadFd() {
readPending = false;
allocHandle.incMessagesRead(1);
- try {
- pipeline.fireChannelRead(new FileDescriptor(socketFd));
- } catch (Throwable t) {
- // If ET is enabled we need to consume everything from the socket
- if (edgeTriggered) {
- pipeline.fireExceptionCaught(t);
- } else {
- throw t;
- }
- }
+ pipeline.fireChannelRead(new FileDescriptor(socketFd));
} while (allocHandle.continueReading());
allocHandle.readComplete();
@@ -175,14 +166,7 @@ private void epollInReadFd() {
allocHandle.readComplete();
pipeline.fireChannelReadComplete();
pipeline.fireExceptionCaught(t);
- // trigger a read again as there may be something left to read and because of epoll ET we
- // will not get notified again until we read everything from the socket
- eventLoop().execute(new OneTimeTask() {
- @Override
- public void run() {
- epollInReady();
- }
- });
+ checkResetEpollIn(edgeTriggered);
} finally {
// Check if there is a readPending which was not processed yet.
// This could be for two reasons:
| diff --git a/transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollSocketChannelTest.java b/transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollSocketChannelTest.java
index 9637006217e..c13ac8f7ee6 100644
--- a/transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollSocketChannelTest.java
+++ b/transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollSocketChannelTest.java
@@ -16,12 +16,27 @@
package io.netty.channel.epoll;
import io.netty.bootstrap.Bootstrap;
+import io.netty.bootstrap.ServerBootstrap;
+import io.netty.buffer.Unpooled;
+import io.netty.channel.Channel;
+import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
+import io.netty.channel.ChannelInitializer;
+import io.netty.channel.ChannelOption;
+import io.netty.channel.ChannelPipeline;
import io.netty.channel.EventLoopGroup;
+import io.netty.channel.ServerChannel;
import org.junit.Assert;
import org.junit.Test;
import java.net.InetSocketAddress;
+import java.net.SocketAddress;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicLong;
+
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
public class EpollSocketChannelTest {
@@ -98,4 +113,97 @@ private static void assertTcpInfo0(EpollTcpInfo info) throws Exception {
Assert.assertTrue(info.rcvSpace() >= 0);
Assert.assertTrue(info.totalRetrans() >= 0);
}
+
+ @Test
+ public void testExceptionHandlingDoesNotInfiniteLoop() throws InterruptedException {
+ EventLoopGroup group = new EpollEventLoopGroup();
+ try {
+ runExceptionHandleFeedbackLoop(group, EpollServerSocketChannel.class, EpollSocketChannel.class,
+ new InetSocketAddress(0));
+ runExceptionHandleFeedbackLoop(group, EpollServerDomainSocketChannel.class, EpollDomainSocketChannel.class,
+ EpollSocketTestPermutation.newSocketAddress());
+ } finally {
+ group.shutdownGracefully();
+ }
+ }
+
+ private void runExceptionHandleFeedbackLoop(EventLoopGroup group, Class<? extends ServerChannel> serverChannelClass,
+ Class<? extends Channel> channelClass, SocketAddress bindAddr) throws InterruptedException {
+ Channel serverChannel = null;
+ Channel clientChannel = null;
+ try {
+ MyInitializer serverInitializer = new MyInitializer();
+ ServerBootstrap sb = new ServerBootstrap();
+ sb.option(ChannelOption.SO_BACKLOG, 1024);
+ sb.group(group)
+ .channel(serverChannelClass)
+ .childHandler(serverInitializer);
+
+ serverChannel = sb.bind(bindAddr).syncUninterruptibly().channel();
+
+ Bootstrap b = new Bootstrap();
+ b.group(group);
+ b.channel(channelClass);
+ b.option(ChannelOption.SO_KEEPALIVE, true);
+ b.remoteAddress(serverChannel.localAddress());
+ b.handler(new MyInitializer());
+ clientChannel = b.connect().syncUninterruptibly().channel();
+
+ clientChannel.writeAndFlush(Unpooled.wrappedBuffer(new byte[1024]));
+
+ // We expect to get 2 exceptions (1 from BuggyChannelHandler and 1 from ExceptionHandler).
+ assertTrue(serverInitializer.exceptionHandler.latch1.await(2, TimeUnit.SECONDS));
+
+ // After we get the first exception, we should get no more, this is expected to timeout.
+ assertFalse("Encountered " + serverInitializer.exceptionHandler.count.get() +
+ " exceptions when 1 was expected",
+ serverInitializer.exceptionHandler.latch2.await(2, TimeUnit.SECONDS));
+ } finally {
+ if (serverChannel != null) {
+ serverChannel.close().syncUninterruptibly();
+ }
+ if (clientChannel != null) {
+ clientChannel.close().syncUninterruptibly();
+ }
+ }
+ }
+
+ private static class MyInitializer extends ChannelInitializer<Channel> {
+ final ExceptionHandler exceptionHandler = new ExceptionHandler();
+ @Override
+ protected void initChannel(Channel ch) throws Exception {
+ ChannelPipeline pipeline = ch.pipeline();
+
+ pipeline.addLast(new BuggyChannelHandler());
+ pipeline.addLast(exceptionHandler);
+ }
+ }
+
+ private static class BuggyChannelHandler extends ChannelInboundHandlerAdapter {
+ @Override
+ public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
+ throw new NullPointerException("I am a bug!");
+ }
+ }
+
+ private static class ExceptionHandler extends ChannelInboundHandlerAdapter {
+ final AtomicLong count = new AtomicLong();
+ /**
+ * We expect to get 2 calls to {@link #exceptionCaught(ChannelHandlerContext, Throwable)}.
+ * 1 call from BuggyChannelHandler and 1 from closing the channel in this class.
+ */
+ final CountDownLatch latch1 = new CountDownLatch(2);
+ final CountDownLatch latch2 = new CountDownLatch(1);
+
+ @Override
+ public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
+ if (count.incrementAndGet() <= 2) {
+ latch1.countDown();
+ } else {
+ latch2.countDown();
+ }
+ // This is expected to throw an exception!
+ ctx.close();
+ }
+ }
}
| train | train | 2015-08-18T11:44:51 | 2015-08-14T20:11:34Z | rkapsi | val |
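The regression test above counts exceptions with two latches: one that must fire for the expected events, and a second that must stay untriggered (its `await` is expected to time out). A stripped-down sketch of that counting pattern, with no Netty dependency (the `EventCounter` class and its names are illustrative, not taken from the patch):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class TwoLatchDemo {
    // Counts events; the first `expected` events release latch1,
    // any extra event releases latch2 (which the test wants to time out).
    static final class EventCounter {
        final AtomicLong count = new AtomicLong();
        final CountDownLatch latch1;
        final CountDownLatch latch2 = new CountDownLatch(1);
        final long expected;

        EventCounter(long expected) {
            this.expected = expected;
            this.latch1 = new CountDownLatch((int) expected);
        }

        void onEvent() {
            if (count.incrementAndGet() <= expected) {
                latch1.countDown();
            } else {
                latch2.countDown();
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        EventCounter counter = new EventCounter(2);
        counter.onEvent();
        counter.onEvent(); // exactly the expected number of events

        System.out.println("expected events seen: "
                + counter.latch1.await(1, TimeUnit.SECONDS));
        System.out.println("extra events seen: "
                + counter.latch2.await(100, TimeUnit.MILLISECONDS));
    }
}
```

With exactly the expected number of events, `latch1.await` returns immediately while `latch2.await` times out, which is how the test asserts "no more exceptions than expected".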
netty/netty/4020_4119 | netty/netty | netty/netty/4020 | netty/netty/4119 | [
"timestamp(timedelta=126215.0, similarity=0.8547458327797111)"
] | bd5091a14f6b10837900a0f80dcbad4144a557f3 | a3cdff520d15c6dc40d08d1f0ac45aa396cefc7d | [
"The only solution to this bug would be to dispose the ChannelGroup, so once a CG is closed, it remains closed and every new add()-invocation results in closing the channel.\n\nThe only other solution is to synchronize on the CG which would be bad.\n\nEDIT: Is there nobody who can verify if I'm right or wrong ?\n",... | [] | 2015-08-21T13:59:46Z | [
"defect"
] | ChannelGroup has race-condition | Hi,
the ChannelGroup doc says:
``` java
* If both {@link ServerChannel}s and non-{@link ServerChannel}s exist in the
* same {@link ChannelGroup}, any requested I/O operations on the group are
* performed for the {@link ServerChannel}s first and then for the others.
* <p>
* This rule is very useful when you shut down a server in one shot:
*
* <pre>
* <strong>{@link ChannelGroup} allChannels =
* new {@link DefaultChannelGroup}({@link GlobalEventExecutor}.INSTANCE);</strong>
*
* public static void main(String[] args) throws Exception {
* {@link ServerBootstrap} b = new {@link ServerBootstrap}(..);
* ...
* b.childHandler(new MyHandler());
*
* // Start the server
* b.getPipeline().addLast("handler", new MyHandler());
* {@link Channel} serverChannel = b.bind(..).sync();
* <strong>allChannels.add(serverChannel);</strong>
*
* ... Wait until the shutdown signal reception ...
*
* // Close the serverChannel and then all accepted connections.
* <strong>allChannels.close().awaitUninterruptibly();</strong>
* }
*
* public class MyHandler extends {@link ChannelHandlerAdapter} {
* {@code @Override}
* public void channelActive({@link ChannelHandlerContext} ctx) {
* // closed on shutdown.
* <strong>allChannels.add(ctx.channel());</strong>
* super.channelActive(ctx);
* }
* }
```
But, this is _not_ thread-safe in my opinion, you could leak client channels:
``` java
public class TestCloseMain {
static volatile Channel ch;
public static void main(String[] args) throws IOException {
EventLoopGroup eventLoopGroup = new NioEventLoopGroup(1);
ChannelGroup cg = new DefaultChannelGroup(eventLoopGroup.next());
ServerBootstrap serverBootstrap = new ServerBootstrap();
serverBootstrap.group(eventLoopGroup)
.channel(NioServerSocketChannel.class)
.handler(new ChannelHandlerAdapter() {
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
System.out.println(msg.getClass() + " -> " + msg);
cg.close(); // <- If this happen concurrently = BOOM!
super.channelRead(ctx, msg);
}
})
.childOption(ChannelOption.TCP_NODELAY, true)
.childHandler(new ChannelInitializer<Channel>() {
@Override
protected void initChannel(Channel ch) throws Exception {
ch.pipeline().addLast(new ChannelHandlerAdapter() {
@Override
public void channelActive(ChannelHandlerContext ctx) throws Exception {
cg.add(ctx.channel());
System.out.println("Still active! " + ctx.channel().parent().isOpen());
super.channelActive(ctx);
}
});
}
});
cg.add(serverBootstrap.bind(1337).syncUninterruptibly().channel());
Bootstrap bootstrap = new Bootstrap();
bootstrap.group(eventLoopGroup)
.channel(NioSocketChannel.class)
.option(ChannelOption.TCP_NODELAY, true)
.handler(new ChannelInitializer<Channel>() {
@Override
protected void initChannel(Channel ch) throws Exception {
}
});
ch = bootstrap.connect("localhost", 1337).syncUninterruptibly().channel();
System.in.read();
ch.close().awaitUninterruptibly();
}
}
```
I know it's only using 1 thread, but it demonstrates the race condition:
``` java
cg.close(); // <- If this happen concurrently = BOOM!
```
If the channel group is closed after the client channel has been registered, the client channel is not closed properly.
Am I missing something? This sounds quite severe to me.
| [
"transport/src/main/java/io/netty/channel/group/DefaultChannelGroup.java"
] | [
"transport/src/main/java/io/netty/channel/group/DefaultChannelGroup.java"
] | [] | diff --git a/transport/src/main/java/io/netty/channel/group/DefaultChannelGroup.java b/transport/src/main/java/io/netty/channel/group/DefaultChannelGroup.java
index 34029f7a3f9..a16938c7679 100644
--- a/transport/src/main/java/io/netty/channel/group/DefaultChannelGroup.java
+++ b/transport/src/main/java/io/netty/channel/group/DefaultChannelGroup.java
@@ -52,13 +52,15 @@ public void operationComplete(ChannelFuture future) throws Exception {
remove(future.channel());
}
};
+ private final boolean stayClosed;
+ private volatile boolean closed;
/**
* Creates a new group with a generated name and the provided {@link EventExecutor} to notify the
* {@link ChannelGroupFuture}s.
*/
public DefaultChannelGroup(EventExecutor executor) {
- this("group-0x" + Integer.toHexString(nextId.incrementAndGet()), executor);
+ this(executor, false);
}
/**
@@ -67,11 +69,33 @@ public DefaultChannelGroup(EventExecutor executor) {
* duplicate check is done against group names.
*/
public DefaultChannelGroup(String name, EventExecutor executor) {
+ this(name, executor, false);
+ }
+
+ /**
+ * Creates a new group with a generated name and the provided {@link EventExecutor} to notify the
+ * {@link ChannelGroupFuture}s. {@code stayClosed} defines whether or not, this group can be closed
+ * more than once. Adding channels to a closed group will immediately close them, too. This makes it
+ * easy, to shutdown server and child channels at once.
+ */
+ public DefaultChannelGroup(EventExecutor executor, boolean stayClosed) {
+ this("group-0x" + Integer.toHexString(nextId.incrementAndGet()), executor, stayClosed);
+ }
+
+ /**
+ * Creates a new group with the specified {@code name} and {@link EventExecutor} to notify the
+ * {@link ChannelGroupFuture}s. {@code stayClosed} defines whether or not, this group can be closed
+ * more than once. Adding channels to a closed group will immediately close them, too. This makes it
+ * easy, to shutdown server and child channels at once. Please note that different groups can have
+ * the same name, which means no duplicate check is done against group names.
+ */
+ public DefaultChannelGroup(String name, EventExecutor executor, boolean stayClosed) {
if (name == null) {
throw new NullPointerException("name");
}
this.name = name;
this.executor = executor;
+ this.stayClosed = stayClosed;
}
@Override
@@ -122,6 +146,23 @@ public boolean add(Channel channel) {
if (added) {
channel.closeFuture().addListener(remover);
}
+
+ if (stayClosed && closed) {
+
+ // First add channel, than check if closed.
+ // Seems inefficient at first, but this way a volatile
+ // gives us enough synchronization to be thread-safe.
+ //
+ // If true: Close right away.
+ // (Might be closed a second time by ChannelGroup.close(), but this is ok)
+ //
+ // If false: Channel will definitely be closed by the ChannelGroup.
+ // (Because closed=true always happens-before ChannelGroup.close())
+ //
+ // See https://github.com/netty/netty/issues/4020
+ channel.close();
+ }
+
return added;
}
@@ -273,6 +314,16 @@ public ChannelGroupFuture close(ChannelMatcher matcher) {
Map<Channel, ChannelFuture> futures =
new LinkedHashMap<Channel, ChannelFuture>(size());
+ if (stayClosed) {
+ // It is important to set the closed to true, before closing channels.
+ // Our invariants are:
+ // closed=true happens-before ChannelGroup.close()
+ // ChannelGroup.add() happens-before checking closed==true
+ //
+ // See https://github.com/netty/netty/issues/4020
+ closed = true;
+ }
+
for (Channel c: serverChannels.values()) {
if (matcher.matches(c)) {
futures.put(c, c.close());
| null | train | train | 2015-08-21T09:37:24 | 2015-07-24T08:56:07Z | chrisprobst | val |
netty/netty/4131_4140 | netty/netty | netty/netty/4131 | netty/netty/4140 | [
"timestamp(timedelta=11.0, similarity=0.8811317741985843)"
] | fb0fe111ac0dad760e3091a893bc4d5e452835af | bd50d92cf3707a18f22aa5f0182627620012504e | [
"@rkapsi - Is `PlatformDependent.throwException(null)` called from Netty code, and if so do you know where from?\n",
"@Scottmitch no, not from Netty code. It was some code I wrote and one can argue it was my own fault (given the nature of that class) but I wish it'd have failed more gracefully.\n",
"@rkapsi - H... | [
"Maybe renaming t to cause might make the exception message more sense?\n",
"Will do.\n"
] | 2015-08-26T21:51:48Z | [
"defect"
] | PlatformDependent.throwException() with null argument takes down the JVM | I'm not sure if it's a bug or feature but it's certainly not easy to debug.
Netty 4.1.0-Beta6-SNAPSHOT
```
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007f80fbe956c9, pid=1196, tid=140191066928896
#
# JRE version: Java(TM) SE Runtime Environment (8.0_40-b25) (build 1.8.0_40-b25)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.40-b25 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# V [libjvm.so+0x5756c9] Exceptions::_throw(Thread*, char const*, int, Handle, char const*)+0x1d9
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /<path>/hs_err_pid1196.log
#
# If you would like to submit a bug report, please visit:
# http://bugreport.sun.com/bugreport/
#
```
| [
"common/src/main/java/io/netty/util/internal/PlatformDependent0.java"
] | [
"common/src/main/java/io/netty/util/internal/PlatformDependent0.java"
] | [] | diff --git a/common/src/main/java/io/netty/util/internal/PlatformDependent0.java b/common/src/main/java/io/netty/util/internal/PlatformDependent0.java
index e72627a18d3..ef9e3b09186 100644
--- a/common/src/main/java/io/netty/util/internal/PlatformDependent0.java
+++ b/common/src/main/java/io/netty/util/internal/PlatformDependent0.java
@@ -30,6 +30,8 @@
import java.util.concurrent.atomic.AtomicLongFieldUpdater;
import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;
+import static io.netty.util.internal.ObjectUtil.checkNotNull;
+
/**
* The {@link PlatformDependent} operations which requires access to {@code sun.misc.*}.
*/
@@ -137,8 +139,9 @@ static boolean hasUnsafe() {
return UNSAFE != null;
}
- static void throwException(Throwable t) {
- UNSAFE.throwException(t);
+ static void throwException(Throwable cause) {
+ // JVM has been observed to crash when passing a null argument. See https://github.com/netty/netty/issues/4131.
+ UNSAFE.throwException(checkNotNull(cause, "cause"));
}
static void freeDirectBuffer(ByteBuffer buffer) {
| null | train | train | 2015-08-27T08:27:44 | 2015-08-24T20:50:18Z | rkapsi | val |
netty/netty/3884_4142 | netty/netty | netty/netty/3884 | netty/netty/4142 | [
"timestamp(timedelta=8.0, similarity=0.9799484004742384)"
] | 33001e84ff6eefbe7f1d67a6f18efe76721b5cab | 824cf42734077b8267c17d354fea3a5bfaf52bea | [
"@benevans @trustin - Any thoughts?\n",
"/cc @jestan \n",
"@jestan - Any objections to me removing this code?\n",
"@Scottmitch Please go ahead. We can't wait forever.\n",
"@trustin - Will do :)\n"
] | [] | 2015-08-26T21:59:38Z | [
"defect"
] | OioSctpChannel read iterating over selected keys | I'm curious why the [OioSctpChannel](https://github.com/netty/netty/blob/4.1/transport-sctp/src/main/java/io/netty/channel/sctp/oio/OioSctpChannel.java#L187) is iterating over `readSelector.selectedKeys()` but not using the `SelectionKey`. It seems like the assumption is that the number of keys that are active is an indication of how many read operations should be done?
| [
"transport-sctp/src/main/java/io/netty/channel/sctp/oio/OioSctpChannel.java"
] | [
"transport-sctp/src/main/java/io/netty/channel/sctp/oio/OioSctpChannel.java"
] | [] | diff --git a/transport-sctp/src/main/java/io/netty/channel/sctp/oio/OioSctpChannel.java b/transport-sctp/src/main/java/io/netty/channel/sctp/oio/OioSctpChannel.java
index 6416ea4e60e..01a79d65ad5 100755
--- a/transport-sctp/src/main/java/io/netty/channel/sctp/oio/OioSctpChannel.java
+++ b/transport-sctp/src/main/java/io/netty/channel/sctp/oio/OioSctpChannel.java
@@ -184,39 +184,32 @@ protected int doReadMessages(List<Object> msgs) throws Exception {
return readMessages;
}
- Set<SelectionKey> reableKeys = readSelector.selectedKeys();
- try {
- for (SelectionKey ignored : reableKeys) {
- RecvByteBufAllocator.Handle allocHandle = this.allocHandle;
- if (allocHandle == null) {
- this.allocHandle = allocHandle = config().getRecvByteBufAllocator().newHandle();
- }
- ByteBuf buffer = allocHandle.allocate(config().getAllocator());
- boolean free = true;
+ RecvByteBufAllocator.Handle allocHandle = this.allocHandle;
+ if (allocHandle == null) {
+ this.allocHandle = allocHandle = config().getRecvByteBufAllocator().newHandle();
+ }
+ ByteBuf buffer = allocHandle.allocate(config().getAllocator());
+ boolean free = true;
- try {
- ByteBuffer data = buffer.nioBuffer(buffer.writerIndex(), buffer.writableBytes());
- MessageInfo messageInfo = ch.receive(data, null, notificationHandler);
- if (messageInfo == null) {
- return readMessages;
- }
+ try {
+ ByteBuffer data = buffer.nioBuffer(buffer.writerIndex(), buffer.writableBytes());
+ MessageInfo messageInfo = ch.receive(data, null, notificationHandler);
+ if (messageInfo == null) {
+ return readMessages;
+ }
- data.flip();
- msgs.add(new SctpMessage(messageInfo, buffer.writerIndex(buffer.writerIndex() + data.remaining())));
- free = false;
- readMessages ++;
- } catch (Throwable cause) {
- PlatformDependent.throwException(cause);
- } finally {
- int bytesRead = buffer.readableBytes();
- allocHandle.record(bytesRead);
- if (free) {
- buffer.release();
- }
- }
+ data.flip();
+ msgs.add(new SctpMessage(messageInfo, buffer.writerIndex(buffer.writerIndex() + data.remaining())));
+ free = false;
+ readMessages ++;
+ } catch (Throwable cause) {
+ PlatformDependent.throwException(cause);
+ } finally {
+ int bytesRead = buffer.readableBytes();
+ allocHandle.record(bytesRead);
+ if (free) {
+ buffer.release();
}
- } finally {
- reableKeys.clear();
}
return readMessages;
}
| null | train | train | 2015-08-26T22:23:42 | 2015-06-12T19:32:35Z | Scottmitch | val |
netty/netty/4147_4157 | netty/netty | netty/netty/4147 | netty/netty/4157 | [
"timestamp(timedelta=19.0, similarity=0.9035731286287193)"
] | 1ba087bc86a68c2013361bc2fcfdebe1ad9b290f | 1a2c020c2ced837be1e41021da4879f0029832a6 | [
"@trustin should I take care?\n\n> Am 27.08.2015 um 05:11 schrieb Trustin Lee notifications@github.com:\n> \n> It seems like some users are observing an edge case where the WeakOrderQueue and Stack grow beyond its max capacity. Although we should fix it, there should be a way for a user to disable recycling complet... | [
"Should we stick to this default value? /cc @normanmaurer @tea-dragon \n",
"@trustin for now yes. Let us use a new pr if we want to adjust it\n"
] | 2015-08-28T05:37:50Z | [
"feature"
] | Provide a way to disable recycling completely | It seems like some users are observing an edge case where the `WeakOrderQueue` and `Stack` grow beyond their max capacity. Although we should fix it, there should be a way for a user to disable recycling completely so that he or she can perform a with-recycler vs. without-recycler comparison.
/cc @ninja-
| [
"common/src/main/java/io/netty/util/Recycler.java"
] | [
"common/src/main/java/io/netty/util/Recycler.java"
] | [
"common/src/test/java/io/netty/util/RecyclerTest.java"
] | diff --git a/common/src/main/java/io/netty/util/Recycler.java b/common/src/main/java/io/netty/util/Recycler.java
index edc7672c7c6..1e0bc39e260 100644
--- a/common/src/main/java/io/netty/util/Recycler.java
+++ b/common/src/main/java/io/netty/util/Recycler.java
@@ -36,8 +36,11 @@ public abstract class Recycler<T> {
private static final InternalLogger logger = InternalLoggerFactory.getInstance(Recycler.class);
+ private static final Handle NOOP_HANDLE = new Handle() { };
private static final AtomicInteger ID_GENERATOR = new AtomicInteger(Integer.MIN_VALUE);
private static final int OWN_THREAD_ID = ID_GENERATOR.getAndIncrement();
+ // TODO: Some arbitrary large number - should adjust as we get more production experience.
+ private static final int DEFAULT_INITIAL_MAX_CAPACITY = 262144;
private static final int DEFAULT_MAX_CAPACITY;
private static final int INITIAL_CAPACITY;
@@ -45,15 +48,19 @@ public abstract class Recycler<T> {
// In the future, we might have different maxCapacity for different object types.
// e.g. io.netty.recycler.maxCapacity.writeTask
// io.netty.recycler.maxCapacity.outboundBuffer
- int maxCapacity = SystemPropertyUtil.getInt("io.netty.recycler.maxCapacity.default", 0);
- if (maxCapacity <= 0) {
- // TODO: Some arbitrary large number - should adjust as we get more production experience.
- maxCapacity = 262144;
+ int maxCapacity = SystemPropertyUtil.getInt("io.netty.recycler.maxCapacity.default",
+ DEFAULT_INITIAL_MAX_CAPACITY);
+ if (maxCapacity < 0) {
+ maxCapacity = DEFAULT_INITIAL_MAX_CAPACITY;
}
DEFAULT_MAX_CAPACITY = maxCapacity;
if (logger.isDebugEnabled()) {
- logger.debug("-Dio.netty.recycler.maxCapacity.default: {}", DEFAULT_MAX_CAPACITY);
+ if (DEFAULT_MAX_CAPACITY == 0) {
+ logger.debug("-Dio.netty.recycler.maxCapacity.default: disabled");
+ } else {
+ logger.debug("-Dio.netty.recycler.maxCapacity.default: {}", DEFAULT_MAX_CAPACITY);
+ }
}
INITIAL_CAPACITY = Math.min(DEFAULT_MAX_CAPACITY, 256);
@@ -77,6 +84,9 @@ protected Recycler(int maxCapacity) {
@SuppressWarnings("unchecked")
public final T get() {
+ if (maxCapacity == 0) {
+ return newObject(NOOP_HANDLE);
+ }
Stack<T> stack = threadLocal.get();
DefaultHandle handle = stack.pop();
if (handle == null) {
@@ -87,6 +97,10 @@ public final T get() {
}
public final boolean recycle(T o, Handle handle) {
+ if (handle == NOOP_HANDLE) {
+ return false;
+ }
+
DefaultHandle h = (DefaultHandle) handle;
if (h.stack.parent != this) {
return false;
| diff --git a/common/src/test/java/io/netty/util/RecyclerTest.java b/common/src/test/java/io/netty/util/RecyclerTest.java
index e90957034c6..cae4cc2e34e 100644
--- a/common/src/test/java/io/netty/util/RecyclerTest.java
+++ b/common/src/test/java/io/netty/util/RecyclerTest.java
@@ -40,6 +40,15 @@ public void testRecycle() {
object2.recycle();
}
+ @Test
+ public void testRecycleDisable() {
+ DisabledRecyclableObject object = DisabledRecyclableObject.newInstance();
+ object.recycle();
+ DisabledRecyclableObject object2 = DisabledRecyclableObject.newInstance();
+ assertNotSame(object, object2);
+ object2.recycle();
+ }
+
static final class RecyclableObject {
private static final Recycler<RecyclableObject> RECYCLER = new Recycler<RecyclableObject>() {
@@ -64,6 +73,30 @@ public void recycle() {
}
}
+ static final class DisabledRecyclableObject {
+
+ private static final Recycler<DisabledRecyclableObject> RECYCLER = new Recycler<DisabledRecyclableObject>(-1) {
+ @Override
+ protected DisabledRecyclableObject newObject(Handle handle) {
+ return new DisabledRecyclableObject(handle);
+ }
+ };
+
+ private final Recycler.Handle handle;
+
+ private DisabledRecyclableObject(Recycler.Handle handle) {
+ this.handle = handle;
+ }
+
+ public static DisabledRecyclableObject newInstance() {
+ return RECYCLER.get();
+ }
+
+ public void recycle() {
+ RECYCLER.recycle(this, handle);
+ }
+ }
+
/**
* Test to make sure bug #2848 never happens again
* https://github.com/netty/netty/issues/2848
| train | train | 2015-08-28T14:54:00 | 2015-08-27T03:11:27Z | trustin | val |
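The patch disables pooling when `maxCapacity == 0` by tagging objects with a shared `NOOP_HANDLE` that `recycle(...)` rejects. A simplified model of that on/off switch (this is an illustrative recycler sketch, not Netty's stack/weak-order-queue implementation):

```java
public class NoopRecyclerDemo {
    // Shared sentinel handle: objects carrying it are never pooled.
    static final Object NOOP_HANDLE = new Object();

    static final class Holder {
        final Object handle;
        Holder(Object handle) { this.handle = handle; }
    }

    static final class SimpleRecycler {
        private final int maxCapacity;
        private final java.util.ArrayDeque<Holder> pool = new java.util.ArrayDeque<>();

        SimpleRecycler(int maxCapacity) { this.maxCapacity = maxCapacity; }

        Holder get() {
            if (maxCapacity == 0) {
                return new Holder(NOOP_HANDLE); // recycling disabled: always fresh
            }
            Holder pooled = pool.poll();
            return pooled != null ? pooled : new Holder(new Object());
        }

        boolean recycle(Holder h) {
            if (h.handle == NOOP_HANDLE) {
                return false;                   // refuse to pool, like the patch
            }
            pool.push(h);
            return true;
        }
    }

    public static void main(String[] args) {
        SimpleRecycler disabled = new SimpleRecycler(0);
        Holder a = disabled.get();
        System.out.println("recycled: " + disabled.recycle(a));        // false
        System.out.println("same instance: " + (a == disabled.get())); // false

        SimpleRecycler enabled = new SimpleRecycler(8);
        Holder b = enabled.get();
        enabled.recycle(b);
        System.out.println("reused: " + (b == enabled.get()));         // true
    }
}
```

This mirrors the with-recycler vs. without-recycler comparison the issue asks for: the same code path, with pooling switched off purely by the capacity setting.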
netty/netty/4118_4163 | netty/netty | netty/netty/4118 | netty/netty/4163 | [
"timestamp(timedelta=7884.0, similarity=0.8773752337812172)"
] | 544ee95e582e493ff3c4d484284e82615f0462df | f6f0674ec2ff1caabd573a6cd6fd55ef9b256e19 | [] | [
"+1\n",
"would we need to check if the peer.finishReadFuture == peerReadFuture and only set to null in this case ? A.k.a an atomic operation.\n",
"Good catch!\n",
"Please try to use `PlatformDependent.newAtomicReferenceFieldUpdater` first in a static block like we do in other places like:\n\nhttps://github.c... | 2015-08-28T16:47:08Z | [
"defect"
] | LocalChannel sequencing issue leading to HTTP/2 unit test failure | There is a sequencing issue in LocalChannel that is resulting in a unit test failure as picked up on the CI server [Http2ConnectionRoundtripTest.noMoreStreamIdsShouldSendGoAway](https://garage.netty.io/teamcity/viewLog.html?buildId=593&tab=buildResultsDiv&buildTypeId=netty_build_oraclejdk7).
I have a PR pending approval.
| [
"transport/src/main/java/io/netty/channel/local/LocalChannel.java"
] | [
"transport/src/main/java/io/netty/channel/local/LocalChannel.java"
] | [
"transport/src/test/java/io/netty/channel/local/LocalChannelTest.java"
] | diff --git a/transport/src/main/java/io/netty/channel/local/LocalChannel.java b/transport/src/main/java/io/netty/channel/local/LocalChannel.java
index 96e0acbfac9..59fc04480fa 100644
--- a/transport/src/main/java/io/netty/channel/local/LocalChannel.java
+++ b/transport/src/main/java/io/netty/channel/local/LocalChannel.java
@@ -27,6 +27,7 @@
import io.netty.channel.EventLoop;
import io.netty.channel.SingleThreadEventLoop;
import io.netty.util.ReferenceCountUtil;
+import io.netty.util.concurrent.Future;
import io.netty.util.internal.InternalThreadLocalMap;
import io.netty.util.internal.OneTimeTask;
import io.netty.util.internal.PlatformDependent;
@@ -37,6 +38,7 @@
import java.nio.channels.ConnectionPendingException;
import java.nio.channels.NotYetConnectedException;
import java.util.Queue;
+import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;
/**
* A {@link Channel} for the local transport.
@@ -45,8 +47,9 @@ public class LocalChannel extends AbstractChannel {
private enum State { OPEN, BOUND, CONNECTED, CLOSED }
+ @SuppressWarnings({ "rawtypes" })
+ private static final AtomicReferenceFieldUpdater<LocalChannel, Future> FINISH_READ_FUTURE_UPDATER;
private static final ChannelMetadata METADATA = new ChannelMetadata(false);
-
private static final int MAX_READER_STACK_DEPTH = 8;
private final ChannelConfig config = new DefaultChannelConfig(this);
@@ -81,6 +84,18 @@ public void run() {
private volatile boolean readInProgress;
private volatile boolean registerInProgress;
private volatile boolean writeInProgress;
+ private volatile Future<?> finishReadFuture;
+
+ static {
+ @SuppressWarnings({ "rawtypes" })
+ AtomicReferenceFieldUpdater<LocalChannel, Future> finishReadFutureUpdater =
+ PlatformDependent.newAtomicReferenceFieldUpdater(LocalChannel.class, "finishReadFuture");
+ if (finishReadFutureUpdater == null) {
+ finishReadFutureUpdater =
+ AtomicReferenceFieldUpdater.newUpdater(LocalChannel.class, Future.class, "finishReadFuture");
+ }
+ FINISH_READ_FUTURE_UPDATER = finishReadFutureUpdater;
+ }
public LocalChannel() {
super(null);
@@ -328,16 +343,37 @@ private void finishPeerRead(final LocalChannel peer) {
if (peer.eventLoop() == eventLoop() && !peer.writeInProgress) {
finishPeerRead0(peer);
} else {
- peer.eventLoop().execute(new OneTimeTask() {
- @Override
- public void run() {
- finishPeerRead0(peer);
- }
- });
+ runFinishPeerReadTask(peer);
}
}
- private static void finishPeerRead0(LocalChannel peer) {
+ private void runFinishPeerReadTask(final LocalChannel peer) {
+ // If the peer is writing, we must wait until after reads are completed for that peer before we can read. So
+ // we keep track of the task, and coordinate later that our read can't happen until the peer is done.
+ final Runnable finishPeerReadTask = new OneTimeTask() {
+ @Override
+ public void run() {
+ finishPeerRead0(peer);
+ }
+ };
+ if (peer.writeInProgress) {
+ peer.finishReadFuture = peer.eventLoop().submit(finishPeerReadTask);
+ } else {
+ peer.eventLoop().execute(finishPeerReadTask);
+ }
+ }
+
+ private void finishPeerRead0(LocalChannel peer) {
+ Future<?> peerFinishReadFuture = peer.finishReadFuture;
+ if (peerFinishReadFuture != null) {
+ if (!peerFinishReadFuture.isDone()) {
+ runFinishPeerReadTask(peer);
+ return;
+ } else {
+ // Lazy unset to make sure we don't prematurely unset it while scheduling a new task.
+ FINISH_READ_FUTURE_UPDATER.compareAndSet(peer, peerFinishReadFuture, null);
+ }
+ }
ChannelPipeline peerPipeline = peer.pipeline();
if (peer.readInProgress) {
peer.readInProgress = false;
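The coordination the patch above introduces — submit the peer's read-completion as a tracked `Future`, and have a later completion re-queue itself behind that task if it has not finished yet — can be sketched with plain `java.util.concurrent`. All names here are illustrative; nothing below is the real `LocalChannel` API.

```java
import java.util.concurrent.*;

public class ReQueueSketch {

    static String demo() throws Exception {
        ExecutorService loop = Executors.newSingleThreadExecutor();
        StringBuilder order = new StringBuilder();
        CountDownLatch done = new CountDownLatch(1);
        // Completed by the caller once the "write" task is submitted and tracked.
        CompletableFuture<Future<?>> trackedRef = new CompletableFuture<>();

        Runnable finishRead = new Runnable() {
            @Override public void run() {
                Future<?> tracked = trackedRef.join();
                if (!tracked.isDone()) {
                    loop.submit(this); // write still pending: get back in line behind it
                    return;
                }
                order.append("read");
                done.countDown();
            }
        };

        loop.submit(finishRead); // submitted first, but must complete last
        trackedRef.complete(loop.submit(() -> order.append("write;")));

        done.await(5, TimeUnit.SECONDS);
        loop.shutdownNow();
        return order.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // write;read
    }
}
```

Because the executor is single-threaded, re-submitting the same `Runnable` is enough to order it after the tracked task without any locking.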
| diff --git a/transport/src/test/java/io/netty/channel/local/LocalChannelTest.java b/transport/src/test/java/io/netty/channel/local/LocalChannelTest.java
index 671b78d8459..cdbad06a658 100644
--- a/transport/src/test/java/io/netty/channel/local/LocalChannelTest.java
+++ b/transport/src/test/java/io/netty/channel/local/LocalChannelTest.java
@@ -59,7 +59,7 @@ public class LocalChannelTest {
private static final InternalLogger logger = InternalLoggerFactory.getInstance(LocalChannelTest.class);
- private static final String LOCAL_ADDR_ID = "test.id";
+ private static final LocalAddress TEST_ADDRESS = new LocalAddress("test.id");
private static EventLoopGroup group1;
private static EventLoopGroup group2;
@@ -85,7 +85,6 @@ public static void afterClass() throws InterruptedException {
@Test
public void testLocalAddressReuse() throws Exception {
for (int i = 0; i < 2; i ++) {
- LocalAddress addr = new LocalAddress(LOCAL_ADDR_ID);
Bootstrap cb = new Bootstrap();
ServerBootstrap sb = new ServerBootstrap();
@@ -106,11 +105,11 @@ public void initChannel(LocalChannel ch) throws Exception {
Channel cc = null;
try {
// Start server
- sc = sb.bind(addr).sync().channel();
+ sc = sb.bind(TEST_ADDRESS).sync().channel();
final CountDownLatch latch = new CountDownLatch(1);
// Connect to the server
- cc = cb.connect(addr).sync().channel();
+ cc = cb.connect(sc.localAddress()).sync().channel();
final Channel ccCpy = cc;
cc.eventLoop().execute(new Runnable() {
@Override
@@ -129,7 +128,7 @@ public void run() {
assertNull(String.format(
"Expected null, got channel '%s' for local address '%s'",
- LocalChannelRegistry.get(addr), addr), LocalChannelRegistry.get(addr));
+ LocalChannelRegistry.get(TEST_ADDRESS), TEST_ADDRESS), LocalChannelRegistry.get(TEST_ADDRESS));
} finally {
closeChannel(cc);
closeChannel(sc);
@@ -139,7 +138,6 @@ public void run() {
@Test
public void testWriteFailsFastOnClosedChannel() throws Exception {
- LocalAddress addr = new LocalAddress(LOCAL_ADDR_ID);
Bootstrap cb = new Bootstrap();
ServerBootstrap sb = new ServerBootstrap();
@@ -160,10 +158,10 @@ public void initChannel(LocalChannel ch) throws Exception {
Channel cc = null;
try {
// Start server
- sc = sb.bind(addr).sync().channel();
+ sc = sb.bind(TEST_ADDRESS).sync().channel();
// Connect to the server
- cc = cb.connect(addr).sync().channel();
+ cc = cb.connect(sc.localAddress()).sync().channel();
// Close the channel and write something.
cc.close().sync();
@@ -189,7 +187,6 @@ public void initChannel(LocalChannel ch) throws Exception {
@Test
public void testServerCloseChannelSameEventLoop() throws Exception {
- LocalAddress addr = new LocalAddress(LOCAL_ADDR_ID);
final CountDownLatch latch = new CountDownLatch(1);
ServerBootstrap sb = new ServerBootstrap()
.group(group2)
@@ -204,7 +201,7 @@ protected void messageReceived(ChannelHandlerContext ctx, Object msg) throws Exc
Channel sc = null;
Channel cc = null;
try {
- sc = sb.bind(addr).sync().channel();
+ sc = sb.bind(TEST_ADDRESS).sync().channel();
Bootstrap b = new Bootstrap()
.group(group2)
@@ -215,7 +212,7 @@ protected void messageReceived(ChannelHandlerContext ctx, Object msg) throws Exc
// discard
}
});
- cc = b.connect(addr).sync().channel();
+ cc = b.connect(sc.localAddress()).sync().channel();
cc.writeAndFlush(new Object());
assertTrue(latch.await(5, SECONDS));
} finally {
@@ -226,7 +223,6 @@ protected void messageReceived(ChannelHandlerContext ctx, Object msg) throws Exc
@Test
public void localChannelRaceCondition() throws Exception {
- final LocalAddress address = new LocalAddress(LOCAL_ADDR_ID);
final CountDownLatch closeLatch = new CountDownLatch(1);
final EventLoopGroup clientGroup = new DefaultEventLoopGroup(1) {
@Override
@@ -271,7 +267,7 @@ protected void initChannel(Channel ch) throws Exception {
closeLatch.countDown();
}
}).
- bind(address).
+ bind(TEST_ADDRESS).
sync().channel();
Bootstrap bootstrap = new Bootstrap();
bootstrap.group(clientGroup).
@@ -282,7 +278,7 @@ protected void initChannel(Channel ch) throws Exception {
/* Do nothing */
}
});
- ChannelFuture future = bootstrap.connect(address);
+ ChannelFuture future = bootstrap.connect(sc.localAddress());
assertTrue("Connection should finish, not time out", future.await(200));
cc = future.channel();
} finally {
@@ -294,7 +290,6 @@ protected void initChannel(Channel ch) throws Exception {
@Test
public void testReRegister() {
- LocalAddress addr = new LocalAddress(LOCAL_ADDR_ID);
Bootstrap cb = new Bootstrap();
ServerBootstrap sb = new ServerBootstrap();
@@ -315,10 +310,10 @@ public void initChannel(LocalChannel ch) throws Exception {
Channel cc = null;
try {
// Start server
- sc = sb.bind(addr).syncUninterruptibly().channel();
+ sc = sb.bind(TEST_ADDRESS).syncUninterruptibly().channel();
// Connect to the server
- cc = cb.connect(addr).syncUninterruptibly().channel();
+ cc = cb.connect(sc.localAddress()).syncUninterruptibly().channel();
cc.deregister().syncUninterruptibly();
} finally {
@@ -329,7 +324,6 @@ public void initChannel(LocalChannel ch) throws Exception {
@Test
public void testCloseInWritePromiseCompletePreservesOrder() throws InterruptedException {
- LocalAddress addr = new LocalAddress(LOCAL_ADDR_ID);
Bootstrap cb = new Bootstrap();
ServerBootstrap sb = new ServerBootstrap();
final CountDownLatch messageLatch = new CountDownLatch(2);
@@ -363,10 +357,10 @@ public void channelInactive(ChannelHandlerContext ctx) throws Exception {
Channel cc = null;
try {
// Start server
- sc = sb.bind(addr).syncUninterruptibly().channel();
+ sc = sb.bind(TEST_ADDRESS).syncUninterruptibly().channel();
// Connect to the server
- cc = cb.connect(addr).syncUninterruptibly().channel();
+ cc = cb.connect(sc.localAddress()).syncUninterruptibly().channel();
final Channel ccCpy = cc;
// Make sure a write operation is executed in the eventloop
@@ -397,7 +391,6 @@ public void operationComplete(ChannelFuture future) throws Exception {
@Test
public void testWriteInWritePromiseCompletePreservesOrder() throws InterruptedException {
- LocalAddress addr = new LocalAddress(LOCAL_ADDR_ID);
Bootstrap cb = new Bootstrap();
ServerBootstrap sb = new ServerBootstrap();
final CountDownLatch messageLatch = new CountDownLatch(2);
@@ -428,10 +421,10 @@ public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception
Channel cc = null;
try {
// Start server
- sc = sb.bind(addr).syncUninterruptibly().channel();
+ sc = sb.bind(TEST_ADDRESS).syncUninterruptibly().channel();
// Connect to the server
- cc = cb.connect(addr).syncUninterruptibly().channel();
+ cc = cb.connect(sc.localAddress()).syncUninterruptibly().channel();
final Channel ccCpy = cc;
// Make sure a write operation is executed in the eventloop
@@ -462,7 +455,6 @@ public void operationComplete(ChannelFuture future) throws Exception {
@Test
public void testPeerWriteInWritePromiseCompleteDifferentEventLoopPreservesOrder() throws InterruptedException {
- LocalAddress addr = new LocalAddress(LOCAL_ADDR_ID);
Bootstrap cb = new Bootstrap();
ServerBootstrap sb = new ServerBootstrap();
final CountDownLatch messageLatch = new CountDownLatch(2);
@@ -510,10 +502,10 @@ public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception
Channel cc = null;
try {
// Start server
- sc = sb.bind(addr).syncUninterruptibly().channel();
+ sc = sb.bind(TEST_ADDRESS).syncUninterruptibly().channel();
// Connect to the server
- cc = cb.connect(addr).syncUninterruptibly().channel();
+ cc = cb.connect(sc.localAddress()).syncUninterruptibly().channel();
assertTrue(serverChannelLatch.await(5, SECONDS));
final Channel ccCpy = cc;
@@ -544,7 +536,6 @@ public void operationComplete(ChannelFuture future) throws Exception {
@Test
public void testPeerWriteInWritePromiseCompleteSameEventLoopPreservesOrder() throws InterruptedException {
- LocalAddress addr = new LocalAddress(LOCAL_ADDR_ID);
Bootstrap cb = new Bootstrap();
ServerBootstrap sb = new ServerBootstrap();
final CountDownLatch messageLatch = new CountDownLatch(2);
@@ -593,10 +584,10 @@ public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception
Channel cc = null;
try {
// Start server
- sc = sb.bind(addr).syncUninterruptibly().channel();
+ sc = sb.bind(TEST_ADDRESS).syncUninterruptibly().channel();
// Connect to the server
- cc = cb.connect(addr).syncUninterruptibly().channel();
+ cc = cb.connect(sc.localAddress()).syncUninterruptibly().channel();
assertTrue(serverChannelLatch.await(5, SECONDS));
final Channel ccCpy = cc;
@@ -630,7 +621,6 @@ public void operationComplete(ChannelFuture future) throws Exception {
@Test
public void testClosePeerInWritePromiseCompleteSameEventLoopPreservesOrder() throws InterruptedException {
- LocalAddress addr = new LocalAddress(LOCAL_ADDR_ID);
Bootstrap cb = new Bootstrap();
ServerBootstrap sb = new ServerBootstrap();
final CountDownLatch messageLatch = new CountDownLatch(2);
@@ -673,10 +663,10 @@ public void channelInactive(ChannelHandlerContext ctx) throws Exception {
Channel cc = null;
try {
// Start server
- sc = sb.bind(addr).syncUninterruptibly().channel();
+ sc = sb.bind(TEST_ADDRESS).syncUninterruptibly().channel();
// Connect to the server
- cc = cb.connect(addr).syncUninterruptibly().channel();
+ cc = cb.connect(sc.localAddress()).syncUninterruptibly().channel();
assertTrue(serverChannelLatch.await(5, SECONDS));
| train | train | 2015-08-28T18:24:09 | 2015-08-21T02:30:02Z | Scottmitch | val |
netty/netty/2677_4166 | netty/netty | netty/netty/2677 | netty/netty/4166 | [
"timestamp(timedelta=52.0, similarity=0.8607263387581197)"
] | 7d083ef6d662d043bac7f664283ee3b94e0bfab3 | 6b51bdb6df32783e6e803f80bce1bb8e5842e13b | [
"@jpinner I may miss something but I think in netty 4+ we don't need the synchronized stuff at all because of the new thread-model. Can you confirm ?\n",
"@jpinner ping :)\n",
"@jpinner could you please check so we can fix this... Thanks !\n",
"Let me fix this\n"
] | [] | 2015-08-28T20:31:17Z | [
"defect"
] | Inconsistent synchronization of some variables in SpdySessionHandler | 1. Field `io.netty.handler.codec.spdy.SpdySessionHandler.lastGoodStreamId`
Synchronized 50% of the time:
Unsynchronized access at SpdySessionHandler.java:[line 148]
Unsynchronized access at SpdySessionHandler.java:[line 243]
Synchronized access at SpdySessionHandler.java:[line 736]
Synchronized access at SpdySessionHandler.java:[line 851]
2. Field `io.netty.handler.codec.spdy.SpdySessionHandler.receivedGoAwayFrame`
Synchronized 50% of the time
Unsynchronized access at SpdySessionHandler.java:[line 361]
Synchronized access at SpdySessionHandler.java:[line 723]
3. Field `io.netty.handler.codec.spdy.SpdySessionHandler.localConcurrentStreams`
Synchronized 50% of the time
Unsynchronized access at SpdySessionHandler.java:[line 598]
Synchronized access at SpdySessionHandler.java:[line 728]
4. Field `io.netty.handler.codec.spdy.SpdySessionHandler.sentGoAwayFrame`
Synchronized 75% of the time
Unsynchronized access at SpdySessionHandler.java:[line 150]
Synchronized access at SpdySessionHandler.java:[line 723]
Synchronized access at SpdySessionHandler.java:[line 849]
Synchronized access at SpdySessionHandler.java:[line 850]
5. Field `io.netty.handler.codec.spdy.SpdySessionHandler.remoteConcurrentStreams`
Synchronized 50% of the time
Unsynchronized access at SpdySessionHandler.java:[line 318]
Synchronized access at SpdySessionHandler.java:[line 728]
Message from FindBugs:
The fields of this class appear to be accessed inconsistently with respect to synchronization. This bug report indicates that the bug pattern detector judged that
The class contains a mix of locked and unlocked accesses,
The class is not annotated as javax.annotation.concurrent.NotThreadSafe,
At least one locked access was performed by one of the class's own methods, and
The number of unsynchronized field accesses (reads and writes) was no more than one third of all accesses, with writes being weighed twice as high as reads
A typical bug matching this bug pattern is forgetting to synchronize one of the methods in a class that is intended to be thread-safe.
You can select the nodes labeled "Unsynchronized access" to show the code locations where the detector believed that a field was accessed without synchronization.
Note that there are various sources of inaccuracy in this detector; for example, the detector cannot statically detect all situations in which a lock is held. Also, even when the detector is accurate in distinguishing locked vs. unlocked accesses, the code in question may still be correct.
@trustin @normanmaurer @fredericBregier @jpinner
Please check it out.
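The hints above argue that in Netty 4+ the `synchronized` keyword is unnecessary because all session state is touched from a single event-loop thread — which is exactly what the merged patch relies on when it drops the modifier. A minimal, self-contained sketch of that confinement argument (illustrative names, not Netty's classes):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ConfinementSketch {
    private final ExecutorService eventLoop = Executors.newSingleThreadExecutor();
    private int activeStreams; // only ever touched on the event-loop thread

    void acceptStream() {
        eventLoop.execute(() -> activeStreams++); // no lock: single-writer thread
    }

    int close() throws InterruptedException {
        eventLoop.shutdown();
        eventLoop.awaitTermination(5, TimeUnit.SECONDS);
        return activeStreams; // read after all event-loop tasks have finished
    }

    static int demo() throws InterruptedException {
        ConfinementSketch session = new ConfinementSketch();
        ExecutorService callers = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 1000; i++) {
            callers.execute(session::acceptStream);
        }
        callers.shutdown();
        callers.awaitTermination(5, TimeUnit.SECONDS);
        return session.close();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // 1000
    }
}
```

Four caller threads race to mutate the field, but every mutation is funneled onto one executor thread, so the unsynchronized increment still yields a consistent count.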
| [
"codec-http/src/main/java/io/netty/handler/codec/spdy/SpdySessionHandler.java"
] | [
"codec-http/src/main/java/io/netty/handler/codec/spdy/SpdySessionHandler.java"
] | [] | diff --git a/codec-http/src/main/java/io/netty/handler/codec/spdy/SpdySessionHandler.java b/codec-http/src/main/java/io/netty/handler/codec/spdy/SpdySessionHandler.java
index 6e540256321..338456be5a7 100644
--- a/codec-http/src/main/java/io/netty/handler/codec/spdy/SpdySessionHandler.java
+++ b/codec-http/src/main/java/io/netty/handler/codec/spdy/SpdySessionHandler.java
@@ -700,21 +700,21 @@ private boolean isRemoteInitiatedId(int id) {
}
// need to synchronize to prevent new streams from being created while updating active streams
- private synchronized void updateInitialSendWindowSize(int newInitialWindowSize) {
+ private void updateInitialSendWindowSize(int newInitialWindowSize) {
int deltaWindowSize = newInitialWindowSize - initialSendWindowSize;
initialSendWindowSize = newInitialWindowSize;
spdySession.updateAllSendWindowSizes(deltaWindowSize);
}
// need to synchronize to prevent new streams from being created while updating active streams
- private synchronized void updateInitialReceiveWindowSize(int newInitialWindowSize) {
+ private void updateInitialReceiveWindowSize(int newInitialWindowSize) {
int deltaWindowSize = newInitialWindowSize - initialReceiveWindowSize;
initialReceiveWindowSize = newInitialWindowSize;
spdySession.updateAllReceiveWindowSizes(deltaWindowSize);
}
// need to synchronize accesses to sentGoAwayFrame, lastGoodStreamId, and initial window sizes
- private synchronized boolean acceptStream(
+ private boolean acceptStream(
int streamId, byte priority, boolean remoteSideClosed, boolean localSideClosed) {
// Cannot initiate any new streams after receiving or sending GOAWAY
if (receivedGoAwayFrame || sentGoAwayFrame) {
@@ -833,7 +833,7 @@ private void sendGoAwayFrame(ChannelHandlerContext ctx, ChannelPromise future) {
// FIXME: Close the connection forcibly after timeout.
}
- private synchronized ChannelFuture sendGoAwayFrame(
+ private ChannelFuture sendGoAwayFrame(
ChannelHandlerContext ctx, SpdySessionStatus status) {
if (!sentGoAwayFrame) {
sentGoAwayFrame = true;
| null | train | train | 2015-08-28T21:38:12 | 2014-07-19T22:39:39Z | idelpivnitskiy | val |
netty/netty/4174_4175 | netty/netty | netty/netty/4174 | netty/netty/4175 | [
"timestamp(timedelta=60.0, similarity=0.8817071786875241)"
] | 2d4a8a75bbd6746c0fb63728f3877d1985af68ab | 606201fcdd633fd5c2e7d6a677c052d1236490cd | [] | [
 "I think this is a copy-and-paste error. \n\nTCP_USER_TIMEOUT.",
 "Remove extra empty line",
"Remove extra empty line\n"
] | 2015-08-30T22:17:33Z | [
"feature"
] | Introduce ability to set TCP_USER_TIMEOUT in transport-native-epoll | Context:
Netty 4.0.31.Final-SNAPSHOT
I am using ChannelOption.SO_KEEPALIVE together with EpollChannelOption.TCP_KEEP\* in order to detect connection problems. This works as expected; however, when I keep writing to that connection (e.g. every second), the connection never becomes idle and is not torn down by the OS.
I've modified transport-native-epoll to add a setting for TCP_USER_TIMEOUT, which allows connections to "fail fast".
Steps to reproduce:
Start a TCP server with following settings:
```
ch.config().setOption(ChannelOption.SO_KEEPALIVE, true);
ch.config().setOption(EpollChannelOption.TCP_KEEPCNT, 5);
ch.config().setOption(EpollChannelOption.TCP_KEEPIDLE, 1);
ch.config().setOption(EpollChannelOption.TCP_KEEPINTVL, 1);
```
Optionally schedule a fixed rate (1s) job that writes to the connection.
Start a client connection e.g. from a VM, then break the connection. Connection breakage is not detected if the job is scheduled.
Fix:
Add code to set the option TCP_USER_TIMEOUT = 1000ms and observe that the connection now times out even if it is being written to.
Tested on:
`Linux 3.13.0-39-generic #66-Ubuntu SMP Tue Oct 28 13:30:27 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux`
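The JDK exposes no `TCP_USER_TIMEOUT` constant, so the option itself cannot be demonstrated in pure Java; the sketch below only shows the generic set/get round-trip that the patch's `Native.setTcpUserTimeout`/`getTcpUserTimeout` pair mirrors at the `setsockopt`/`getsockopt` level, using the standard `SO_KEEPALIVE` option as a stand-in:

```java
import java.io.IOException;
import java.net.StandardSocketOptions;
import java.nio.channels.SocketChannel;

public class OptionRoundTrip {
    // Set a socket option and read it back, the same shape as the new
    // EpollSocketChannelConfig getters/setters added by this change.
    static boolean keepAliveRoundTrip() throws IOException {
        try (SocketChannel ch = SocketChannel.open()) {
            ch.setOption(StandardSocketOptions.SO_KEEPALIVE, true);
            return ch.getOption(StandardSocketOptions.SO_KEEPALIVE);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(keepAliveRoundTrip()); // true
    }
}
```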
| [
"transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c",
"transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollChannelOption.java",
"transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollSocketChannelConfig.java",
"transport-native-epoll/src/main/java/io/netty/channel/epoll/... | [
"transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c",
"transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollChannelOption.java",
"transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollSocketChannelConfig.java",
"transport-native-epoll/src/main/java/io/netty/channel/epoll/... | [] | diff --git a/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c b/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c
index cc1915fefa8..5fcc2e77ae9 100644
--- a/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c
+++ b/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c
@@ -1271,6 +1271,10 @@ JNIEXPORT void Java_io_netty_channel_epoll_Native_setTcpKeepCnt(JNIEnv* env, jcl
setOption(env, fd, IPPROTO_TCP, TCP_KEEPCNT, &optval, sizeof(optval));
}
+JNIEXPORT void Java_io_netty_channel_epoll_Native_setTcpUserTimeout(JNIEnv* env, jclass clazz, jint fd, jint optval) {
+ setOption(env, fd, IPPROTO_TCP, TCP_USER_TIMEOUT, &optval, sizeof(optval));
+}
+
JNIEXPORT void JNICALL Java_io_netty_channel_epoll_Native_setIpFreeBind(JNIEnv* env, jclass clazz, jint fd, jint optval) {
setOption(env, fd, IPPROTO_IP, IP_FREEBIND, &optval, sizeof(optval));
}
@@ -1391,6 +1395,14 @@ JNIEXPORT jint JNICALL Java_io_netty_channel_epoll_Native_getTcpKeepCnt(JNIEnv*
return optval;
}
+JNIEXPORT jint JNICALL Java_io_netty_channel_epoll_Native_getTcpUserTimeout(JNIEnv* env, jclass clazz, jint fd) {
+ int optval;
+ if (getOption(env, fd, IPPROTO_TCP, TCP_USER_TIMEOUT, &optval, sizeof(optval)) == -1) {
+ return -1;
+ }
+ return optval;
+}
+
JNIEXPORT jint JNICALL Java_io_netty_channel_epoll_Native_isIpFreeBind(JNIEnv* env, jclass clazz, jint fd) {
int optval;
if (getOption(env, fd, IPPROTO_TCP, IP_FREEBIND, &optval, sizeof(optval)) == -1) {
diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollChannelOption.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollChannelOption.java
index 4a3c5d19b55..1f231f2f80c 100644
--- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollChannelOption.java
+++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollChannelOption.java
@@ -26,6 +26,7 @@ public final class EpollChannelOption<T> extends ChannelOption<T> {
public static final ChannelOption<Integer> TCP_KEEPIDLE = valueOf("TCP_KEEPIDLE");
public static final ChannelOption<Integer> TCP_KEEPINTVL = valueOf("TCP_KEEPINTVL");
public static final ChannelOption<Integer> TCP_KEEPCNT = valueOf("TCP_KEEPCNT");
+ public static final ChannelOption<Integer> TCP_USER_TIMEOUT = valueOf("TCP_USER_TIMEOUT");
public static final ChannelOption<Boolean> IP_FREEBIND = valueOf("IP_FREEBIND");
public static final ChannelOption<DomainSocketReadMode> DOMAIN_SOCKET_READ_MODE =
diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollSocketChannelConfig.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollSocketChannelConfig.java
index 2f89403a1be..65337c3e23e 100644
--- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollSocketChannelConfig.java
+++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollSocketChannelConfig.java
@@ -94,6 +94,9 @@ public <T> T getOption(ChannelOption<T> option) {
if (option == EpollChannelOption.TCP_KEEPCNT) {
return (T) Integer.valueOf(getTcpKeepCnt());
}
+ if (option == EpollChannelOption.TCP_USER_TIMEOUT) {
+ return (T) Integer.valueOf(getTcpUserTimeout());
+ }
return super.getOption(option);
}
@@ -127,6 +130,8 @@ public <T> boolean setOption(ChannelOption<T> option, T value) {
setTcpKeepCntl((Integer) value);
} else if (option == EpollChannelOption.TCP_KEEPINTVL) {
setTcpKeepIntvl((Integer) value);
+ } else if (option == EpollChannelOption.TCP_USER_TIMEOUT) {
+ setTcpUserTimeout((Integer) value);
} else {
return super.setOption(option, value);
}
@@ -205,6 +210,13 @@ public int getTcpKeepCnt() {
return Native.getTcpKeepCnt(channel.fd().intValue());
}
+ /**
+ * Get the {@code TCP_USER_TIMEOUT} option on the socket. See {@code man 7 tcp} for more details.
+ */
+ public int getTcpUserTimeout() {
+ return Native.getTcpUserTimeout(channel.fd().intValue());
+ }
+
@Override
public EpollSocketChannelConfig setKeepAlive(boolean keepAlive) {
Native.setKeepAlive(channel.fd().intValue(), keepAlive ? 1 : 0);
@@ -297,6 +309,14 @@ public EpollSocketChannelConfig setTcpKeepCntl(int probes) {
return this;
}
+ /**
+ * Set the {@code TCP_USER_TIMEOUT} option on the socket. See {@code man 7 tcp} for more details.
+ */
+ public EpollSocketChannelConfig setTcpUserTimeout(int milliseconds) {
+ Native.setTcpUserTimeout(channel.fd().intValue(), milliseconds);
+ return this;
+ }
+
@Override
public boolean isAllowHalfClosure() {
return allowHalfClosure;
diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/Native.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/Native.java
index 4def6e42b0e..3a7136809d2 100644
--- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/Native.java
+++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/Native.java
@@ -635,6 +635,7 @@ public static void shutdown(int fd, boolean read, boolean write) throws IOExcept
public static native int getTcpKeepIdle(int fd);
public static native int getTcpKeepIntvl(int fd);
public static native int getTcpKeepCnt(int fd);
+ public static native int getTcpUserTimeout(int milliseconds);
public static native int getSoError(int fd);
public static native int isIpFreeBind(int fd);
@@ -652,6 +653,7 @@ public static void shutdown(int fd, boolean read, boolean write) throws IOExcept
public static native void setTcpKeepIdle(int fd, int seconds);
public static native void setTcpKeepIntvl(int fd, int seconds);
public static native void setTcpKeepCnt(int fd, int probes);
+ public static native void setTcpUserTimeout(int fd, int milliseconds);
public static native void setIpFreeBind(int fd, int freeBind);
public static void tcpInfo(int fd, EpollTcpInfo info) {
tcpInfo0(fd, info.info);
| null | train | train | 2015-08-30T20:38:35 | 2015-08-30T21:53:50Z | tomasol | val |
netty/netty/4185_4190 | netty/netty | netty/netty/4185 | netty/netty/4190 | [
"timestamp(timedelta=37.0, similarity=0.958631723650472)"
] | f1eddd6117f5e3109257f911d51b80250e17e11d | 64551ee03cfcd819007a83c77b8310a60a847b04 | [
"@blucas - Sorry for breaking you :(. I guess we have at least 2 options:\n1. Add a constructor to `DefaultSpdyHeaders` which takes a `boolean validateHeaders` argument which can make header name validation optional. We would then need to carry this through the SPDY codec to pass along the `boolean` where ever the... | [] | 2015-09-04T17:10:37Z | [
"defect"
] | SpdyHttpEncoder fails to convert HttpResponse to SpdyFrame | Netty Version: Latest master (f1eddd6117f5e3109257f911d51b80250e17e11d)
When `SpdyHttpEncoder` attempts to create an `SpdyHeadersFrame` from a `HttpResponse` an `IllegalArgumentException` is thrown if the original `HttpResponse` contains a header that includes uppercase characters. The `IllegalArgumentException` is thrown due to the additional validation check introduced by #4047.
Previous versions of the SPDY codec would handle this by converting the HTTP header name to lowercase before adding the header to the `SpdyHeadersFrame` (see [here](https://github.com/netty/netty/pull/176/files#diff-4fb47d56c527df13fe16a32a66d60d02R305)).
Not the greatest solution, but this could be 'fixed' by lowercasing the header name [here](https://github.com/netty/netty/blob/master/codec-http/src/main/java/io/netty/handler/codec/spdy/SpdyHttpEncoder.java#L175) and [here](https://github.com/netty/netty/blob/master/codec-http/src/main/java/io/netty/handler/codec/spdy/SpdyHttpEncoder.java#L275)
If you guys are comfortable with this solution (or have a better one), I would be more than happy to create a PR.
/cc @Scottmitch @jpinner - for obvious reasons :smiley:
| [
"codec-http/src/main/java/io/netty/handler/codec/spdy/SpdyHttpEncoder.java"
] | [
"codec-http/src/main/java/io/netty/handler/codec/spdy/SpdyHttpEncoder.java"
] | [] | diff --git a/codec-http/src/main/java/io/netty/handler/codec/spdy/SpdyHttpEncoder.java b/codec-http/src/main/java/io/netty/handler/codec/spdy/SpdyHttpEncoder.java
index 8a62c1ac29d..e59afb8a95d 100644
--- a/codec-http/src/main/java/io/netty/handler/codec/spdy/SpdyHttpEncoder.java
+++ b/codec-http/src/main/java/io/netty/handler/codec/spdy/SpdyHttpEncoder.java
@@ -27,6 +27,7 @@
import io.netty.handler.codec.http.HttpRequest;
import io.netty.handler.codec.http.HttpResponse;
import io.netty.handler.codec.http.LastHttpContent;
+import io.netty.util.AsciiString;
import java.util.List;
import java.util.Map;
@@ -172,7 +173,7 @@ protected void encode(ChannelHandlerContext ctx, HttpObject msg, List<Object> ou
SpdyHeadersFrame spdyHeadersFrame = new DefaultSpdyHeadersFrame(currentStreamId);
spdyHeadersFrame.setLast(true);
for (Map.Entry<CharSequence, CharSequence> entry: trailers) {
- spdyHeadersFrame.headers().add(entry.getKey(), entry.getValue());
+ spdyHeadersFrame.headers().add(AsciiString.of(entry.getKey()).toLowerCase(), entry.getValue());
}
// Write DATA frame and append HEADERS frame
@@ -233,7 +234,7 @@ private SpdySynStreamFrame createSynStreamFrame(HttpRequest httpRequest) throws
// Transfer the remaining HTTP headers
for (Map.Entry<CharSequence, CharSequence> entry: httpHeaders) {
- frameHeaders.add(entry.getKey(), entry.getValue());
+ frameHeaders.add(AsciiString.of(entry.getKey()).toLowerCase(), entry.getValue());
}
currentStreamId = spdySynStreamFrame.streamId();
if (associatedToStreamId == 0) {
@@ -272,7 +273,7 @@ private SpdyHeadersFrame createHeadersFrame(HttpResponse httpResponse) throws Ex
// Transfer the remaining HTTP headers
for (Map.Entry<CharSequence, CharSequence> entry: httpHeaders) {
- spdyHeadersFrame.headers().add(entry.getKey(), entry.getValue());
+ spdyHeadersFrame.headers().add(AsciiString.of(entry.getKey()).toLowerCase(), entry.getValue());
}
currentStreamId = streamId;
| null | test | train | 2015-09-03T08:54:10 | 2015-09-03T19:54:48Z | blucas | val |
netty/netty/3721_4223 | netty/netty | netty/netty/3721 | netty/netty/4223 | [
"timestamp(timedelta=11.0, similarity=0.859242328286967)"
] | b66b38d3e417324e4eae965795b94976baa9eb11 | 25f49603b1ecc0e0405653648d69a6c117828779 | [
"@ejona86 @Scottmitch @nmittler @louiscryan is this something we need to fix before 4.1.0.Final ?\n",
"I don't think so. We seem to be conforming to the spec since we're checking the size on the inbound frame.\n",
"@nmittler so closing ?\n",
"@ejona86 thoughts on closing?\n",
"wouldn't hurt to check the wri... | [] | 2015-09-14T20:20:21Z | [
"improvement"
] | http2: writePing does not verify data.readableBytes() == 8 | When [writing the ping frame](https://github.com/netty/netty/blob/master/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameWriter.java#L238), we aren't verifying that the length is 8, as required by the spec:
> Receipt of a PING frame with a length field value other than 8 MUST be treated as a connection error (Section 5.4.1) of type FRAME_SIZE_ERROR.
The check [is being performed](https://github.com/netty/netty/blob/master/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameReader.java#L332) for inbound PING frames.
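A hypothetical outbound-side check mirroring the reader's quoted rule — reject any PING payload that is not exactly 8 bytes before writing — could look like the following, with `ByteBuffer` standing in for Netty's `ByteBuf`:

```java
import java.nio.ByteBuffer;

public class PingPayloadCheck {
    static final int PING_FRAME_PAYLOAD_LENGTH = 8;

    static boolean isValidPingPayload(ByteBuffer data) {
        return data != null && data.remaining() == PING_FRAME_PAYLOAD_LENGTH;
    }

    // Fail fast on the writer side instead of letting the peer answer
    // with a FRAME_SIZE_ERROR connection error.
    static void verifyPingPayload(ByteBuffer data) {
        if (!isValidPingPayload(data)) {
            throw new IllegalArgumentException(
                    "Opaque data must be " + PING_FRAME_PAYLOAD_LENGTH + " bytes");
        }
    }

    public static void main(String[] args) {
        verifyPingPayload(ByteBuffer.allocate(8)); // passes
        System.out.println(isValidPingPayload(ByteBuffer.allocate(7))); // false
    }
}
```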
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameReader.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameWriter.java"
] | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameReader.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameWriter.java"
] | [] | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameReader.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameReader.java
index 035fc949e81..4b98985655c 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameReader.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameReader.java
@@ -14,9 +14,15 @@
*/
package io.netty.handler.codec.http2;
+import io.netty.buffer.ByteBuf;
+import io.netty.buffer.ByteBufAllocator;
+import io.netty.channel.ChannelHandlerContext;
+import io.netty.handler.codec.http2.Http2FrameReader.Configuration;
+
import static io.netty.handler.codec.http2.Http2CodecUtil.DEFAULT_MAX_FRAME_SIZE;
import static io.netty.handler.codec.http2.Http2CodecUtil.FRAME_HEADER_LENGTH;
import static io.netty.handler.codec.http2.Http2CodecUtil.INT_FIELD_LENGTH;
+import static io.netty.handler.codec.http2.Http2CodecUtil.PING_FRAME_PAYLOAD_LENGTH;
import static io.netty.handler.codec.http2.Http2CodecUtil.PRIORITY_ENTRY_LENGTH;
import static io.netty.handler.codec.http2.Http2CodecUtil.SETTINGS_INITIAL_WINDOW_SIZE;
import static io.netty.handler.codec.http2.Http2CodecUtil.SETTINGS_MAX_FRAME_SIZE;
@@ -39,10 +45,6 @@
import static io.netty.handler.codec.http2.Http2FrameTypes.RST_STREAM;
import static io.netty.handler.codec.http2.Http2FrameTypes.SETTINGS;
import static io.netty.handler.codec.http2.Http2FrameTypes.WINDOW_UPDATE;
-import io.netty.buffer.ByteBuf;
-import io.netty.buffer.ByteBufAllocator;
-import io.netty.channel.ChannelHandlerContext;
-import io.netty.handler.codec.http2.Http2FrameReader.Configuration;
/**
* A {@link Http2FrameReader} that supports all frame types defined by the HTTP/2 specification.
@@ -330,7 +332,7 @@ private void verifyPingFrame() throws Http2Exception {
if (streamId != 0) {
throw connectionError(PROTOCOL_ERROR, "A stream ID must be zero.");
}
- if (payloadLength != 8) {
+ if (payloadLength != PING_FRAME_PAYLOAD_LENGTH) {
throw connectionError(FRAME_SIZE_ERROR,
"Frame length %d incorrect size for ping.", payloadLength);
}
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameWriter.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameWriter.java
index 628703b571f..29dbc33c44b 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameWriter.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameWriter.java
@@ -15,6 +15,13 @@
package io.netty.handler.codec.http2;
+import io.netty.buffer.ByteBuf;
+import io.netty.channel.ChannelFuture;
+import io.netty.channel.ChannelHandlerContext;
+import io.netty.channel.ChannelPromise;
+import io.netty.handler.codec.http2.Http2CodecUtil.SimpleChannelPromiseAggregator;
+import io.netty.handler.codec.http2.Http2FrameWriter.Configuration;
+
import static io.netty.buffer.Unpooled.directBuffer;
import static io.netty.buffer.Unpooled.unmodifiableBuffer;
import static io.netty.buffer.Unpooled.unreleasableBuffer;
@@ -29,6 +36,7 @@
import static io.netty.handler.codec.http2.Http2CodecUtil.MAX_UNSIGNED_INT;
import static io.netty.handler.codec.http2.Http2CodecUtil.MAX_WEIGHT;
import static io.netty.handler.codec.http2.Http2CodecUtil.MIN_WEIGHT;
+import static io.netty.handler.codec.http2.Http2CodecUtil.PING_FRAME_PAYLOAD_LENGTH;
import static io.netty.handler.codec.http2.Http2CodecUtil.PRIORITY_ENTRY_LENGTH;
import static io.netty.handler.codec.http2.Http2CodecUtil.PRIORITY_FRAME_LENGTH;
import static io.netty.handler.codec.http2.Http2CodecUtil.PUSH_PROMISE_FRAME_HEADER_LENGTH;
@@ -53,13 +61,6 @@
import static io.netty.handler.codec.http2.Http2FrameTypes.WINDOW_UPDATE;
import static io.netty.util.internal.ObjectUtil.checkNotNull;
-import io.netty.buffer.ByteBuf;
-import io.netty.channel.ChannelFuture;
-import io.netty.channel.ChannelHandlerContext;
-import io.netty.channel.ChannelPromise;
-import io.netty.handler.codec.http2.Http2CodecUtil.SimpleChannelPromiseAggregator;
-import io.netty.handler.codec.http2.Http2FrameWriter.Configuration;
-
/**
* A {@link Http2FrameWriter} that supports all frame types defined by the HTTP/2 specification.
*/
@@ -240,6 +241,7 @@ public ChannelFuture writePing(ChannelHandlerContext ctx, boolean ack, ByteBuf d
SimpleChannelPromiseAggregator promiseAggregator =
new SimpleChannelPromiseAggregator(promise, ctx.channel(), ctx.executor());
try {
+ verifyPingPayload(data);
Http2Flags flags = ack ? new Http2Flags().ack(true) : new Http2Flags();
ByteBuf buf = ctx.alloc().buffer(FRAME_HEADER_LENGTH);
writeFrameHeaderInternal(buf, data.readableBytes(), PING, flags, 0);
@@ -545,4 +547,10 @@ private static void verifyWindowSizeIncrement(int windowSizeIncrement) {
throw new IllegalArgumentException("WindowSizeIncrement must be >= 0");
}
}
+
+ private static void verifyPingPayload(ByteBuf data) {
+ if (data == null || data.readableBytes() != PING_FRAME_PAYLOAD_LENGTH) {
+ throw new IllegalArgumentException("Opaque data must be " + PING_FRAME_PAYLOAD_LENGTH + " bytes");
+ }
+ }
}
| null | test | train | 2015-09-14T22:16:36 | 2015-05-04T17:34:33Z | ejona86 | val |
netty/netty/4244_4245 | netty/netty | netty/netty/4244 | netty/netty/4245 | [
"timestamp(timedelta=102.0, similarity=0.867344004592763)"
] | 94bf412edbea73c0b1dbac4ada8919708c4f369e | 859ff46565cad8eefb851e733f9efa707e101b53 | [
"@Scottmitch @nmittler can you check?\n",
"fixed by https://github.com/netty/netty/pull/4245\n"
] | [] | 2015-09-21T16:09:33Z | [
"defect"
] | HttpConversionUtil.toHttp2Headers converts uri to path header improperly | Netty version: 4.1.0.Beta6
Context:
When we use HttpConversionUtil.toHttp2Headers, if the uri has url-encoded values, they are improperly decoded in the `:path` header.
I couldn't find an explicit rule in the spec beyond the passage below from https://httpwg.github.io/specs/rfc7540.html#HttpHeaders
> The :path pseudo-header field includes the path and query parts of the target URI (the path-absolute production and optionally a '?' character followed by the query production (see Sections 3.3 and 3.4 of [RFC3986]). A request in asterisk form includes the value '*' for the :path pseudo-header field.
> This pseudo-header field MUST NOT be empty for http or https URIs; http or https URIs that do not contain a path component MUST include a value of '/'. The exception to this rule is an OPTIONS request for an http or https URI that does not include a path component; these MUST include a :path pseudo-header field with a value of '*' (see [RFC7230], Section 5.3.4).
Steps to reproduce:
1. Create HttpMessage:
``` java
String uri = "https://www.anydomain.com/path%2B1?q=query%2B1&q2=query%2B2#fragment%2B1";
FullHttpRequest request = new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, uri);
```
2. convert using `HttpConversionUtil.toHttp2Headers`
``` java
Http2Headers http2Headers = HttpConversionUtil.toHttp2Headers(request);
```
3. get path by `http2Headers.get(HttpNames.PATH)`
``` java
assertThat(http2Headers.get(HttpNames.PATH), is("/path%2B1?q=query%2B1&q2=query%2B2#fragment%2B1"));
//it fails, actual value is "/path+1?q=query+1&q2=query+2#fragment+1"
```
Opinion:
change `HttpConversionUtil.toHttp2Path` to use `getRawXXX()`
``` java
private static AsciiString toHttp2Path(URI uri) {
StringBuilder pathBuilder = new StringBuilder(length(uri.getRawPath()) +
length(uri.getRawQuery()) + length(uri.getRawFragment()) + 2);
if (!isNullOrEmpty(uri.getRawPath())) {
pathBuilder.append(uri.getRawPath());
}
if (!isNullOrEmpty(uri.getRawQuery())) {
pathBuilder.append('?');
pathBuilder.append(uri.getRawQuery());
}
if (!isNullOrEmpty(uri.getRawFragment())) {
pathBuilder.append('#');
pathBuilder.append(uri.getRawFragment());
}
String path = pathBuilder.toString();
return path.isEmpty() ? EMPTY_REQUEST_PATH : new AsciiString(path);
}
```
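For illustration only (plain JDK code, not part of the report): the difference between the decoded and raw accessors on `java.net.URI` is exactly what produces the wrong `:path` value.

```java
import java.net.URI;

public class RawPathDemo {
    public static void main(String[] args) {
        URI uri = URI.create("https://www.anydomain.com/path%2B1?q=query%2B1&q2=query%2B2#fragment%2B1");
        // getPath()/getQuery() decode percent-escapes, so %2B becomes '+'
        System.out.println(uri.getPath());     // /path+1
        System.out.println(uri.getQuery());    // q=query+1&q2=query+2
        // the raw accessors preserve the encoding from the original request target
        System.out.println(uri.getRawPath());  // /path%2B1
        System.out.println(uri.getRawQuery()); // q=query%2B1&q2=query%2B2
        if (!"/path+1".equals(uri.getPath())) throw new AssertionError();
        if (!"/path%2B1".equals(uri.getRawPath())) throw new AssertionError();
    }
}
```

Because `toHttp2Path` concatenates the decoded parts, the `%2B` escapes are already lost by the time the `:path` pseudo-header is built; the raw accessors keep them intact.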
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/HttpConversionUtil.java"
] | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/HttpConversionUtil.java"
] | [
"codec-http2/src/test/java/io/netty/handler/codec/http2/HttpToHttp2ConnectionHandlerTest.java"
] | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/HttpConversionUtil.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/HttpConversionUtil.java
index 7163f69f550..3231c50a926 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/HttpConversionUtil.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/HttpConversionUtil.java
@@ -353,18 +353,18 @@ public static Http2Headers toHttp2Headers(HttpHeaders inHeaders, boolean validat
* <a href="https://tools.ietf.org/html/rfc7230#section-5.3">rfc7230, 5.3</a>.
*/
private static AsciiString toHttp2Path(URI uri) {
- StringBuilder pathBuilder = new StringBuilder(length(uri.getPath()) +
- length(uri.getQuery()) + length(uri.getFragment()) + 2);
- if (!isNullOrEmpty(uri.getPath())) {
- pathBuilder.append(uri.getPath());
+ StringBuilder pathBuilder = new StringBuilder(length(uri.getRawPath()) +
+ length(uri.getRawQuery()) + length(uri.getRawFragment()) + 2);
+ if (!isNullOrEmpty(uri.getRawPath())) {
+ pathBuilder.append(uri.getRawPath());
}
- if (!isNullOrEmpty(uri.getQuery())) {
+ if (!isNullOrEmpty(uri.getRawQuery())) {
pathBuilder.append('?');
- pathBuilder.append(uri.getQuery());
+ pathBuilder.append(uri.getRawQuery());
}
- if (!isNullOrEmpty(uri.getFragment())) {
+ if (!isNullOrEmpty(uri.getRawFragment())) {
pathBuilder.append('#');
- pathBuilder.append(uri.getFragment());
+ pathBuilder.append(uri.getRawFragment());
}
String path = pathBuilder.toString();
return path.isEmpty() ? EMPTY_REQUEST_PATH : new AsciiString(path);
| diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/HttpToHttp2ConnectionHandlerTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/HttpToHttp2ConnectionHandlerTest.java
index 0377788887d..9d10e0d3bca 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/HttpToHttp2ConnectionHandlerTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/HttpToHttp2ConnectionHandlerTest.java
@@ -157,6 +157,23 @@ public void testOriginFormRequestTargetHandled() throws Exception {
verifyHeadersOnly(http2Headers, writePromise, clientChannel.writeAndFlush(request, writePromise));
}
+ @Test
+ public void testOriginFormRequestTargetHandledFromUrlencodedUri() throws Exception {
+ bootstrapEnv(2, 1, 0);
+ final FullHttpRequest request = new DefaultFullHttpRequest(
+ HTTP_1_1, GET, "/where%2B0?q=now%2B0&f=then%2B0#section1%2B0");
+ final HttpHeaders httpHeaders = request.headers();
+ httpHeaders.setInt(HttpConversionUtil.ExtensionHeaderNames.STREAM_ID.text(), 5);
+ httpHeaders.set(HttpConversionUtil.ExtensionHeaderNames.SCHEME.text(), "http");
+ final Http2Headers http2Headers =
+ new DefaultHttp2Headers().method(new AsciiString("GET"))
+ .path(new AsciiString("/where%2B0?q=now%2B0&f=then%2B0#section1%2B0"))
+ .scheme(new AsciiString("http"));
+
+ ChannelPromise writePromise = newPromise();
+ verifyHeadersOnly(http2Headers, writePromise, clientChannel.writeAndFlush(request, writePromise));
+ }
+
@Test
public void testAbsoluteFormRequestTargetHandledFromHeaders() throws Exception {
bootstrapEnv(2, 1, 0);
| test | train | 2015-09-18T21:17:34 | 2015-09-21T14:03:17Z | alexpark7712 | val |
netty/netty/4242_4258 | netty/netty | netty/netty/4242 | netty/netty/4258 | [
"timestamp(timedelta=19.0, similarity=0.867944925222094)"
] | beb75f0a04617faf19d80c9d42bea38c0ee7b22d | e109c8ef20fc8027801582a71dd6f1ef5e360d0c | [
"@blucas Thanks for the report!\n\nRegarding the [writeBytes method](https://github.com/netty/netty/blob/master/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java#L633), I believe the comment is wrong. I suspect it should be re-worded to something like \"The frame has data... | [] | 2015-09-23T14:45:53Z | [
"defect"
] | HTTP/2 FlowController allocated write and flush | Netty Version: latest master (94bf412edbea73c0b1dbac4ada8919708c4f369e)
/cc @Scottmitch @nmittler
I have a serious issue (it is actually a blocker) that is preventing me from releasing the next version of our product (which will support HTTP/2 traffic). I would really appreciate it if this could be resolved quickly. I am more than willing to help out with this (submit a PR) as long as someone can help point me in the right direction to fixing this issue.
**Issue Details**
When sending a large response to a client, the flow controller fails to send the entire response. This is very similar to #4052, but is not solved by the fix for that issue.
The problem stems from the fact that the flow controller and state only write up to N allocated bytes [here](https://github.com/netty/netty/blob/master/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java#L544). If there are still pending bytes to write, the flow controller should trigger another write, after flushing. Unfortunately, at this time, all it will do is flush the N allocated bytes, something must be done so that it will trigger another write (and subsequent flush). I don't know how it should accomplish that. I am aware there is a pending pull request for 'stream writability' (#4227), but I don't believe it goes far enough to fix this issue on its own.
**Example**
Given:
HttpServer which receives a request, generates a `FullHttpResponse`, and calls `writeAndFlush`. The pipeline is configured to convert the FullHttpResponse to HTTP/2 Frames.
Response / Channel / Stream Info:
Response size: 100,000 bytes
Channel WriteBuffer High Watermark: 65,535 bytes
Stream allocated bytes: 43,000 bytes
Given the scenario above:
1. `Http2ConnectionHandler` receives a call to [flush](https://github.com/netty/netty/blob/master/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java#L209) the converted `FullHttpResponse` and tells the flow controller to write the pending bytes.
2. the flow controller calculates the number of bytes to write ([allocate](https://github.com/netty/netty/blob/master/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java#L305)) for the stream.
3. the flow controller then triggers a [write N allocated bytes](https://github.com/netty/netty/blob/master/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java#L310) for each stream.
4. DefaultState will [write](https://github.com/netty/netty/blob/master/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java#L544) 43,000 bytes down the pipeline.
5. `Http2ConnectionHandler` will then trigger a [flush](https://github.com/netty/netty/blob/master/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java#L211) to flush the 43,000 bytes to the socket.
So the problem is that only the first 43,000 bytes were sent to the socket. The other 57,000 bytes still need to be sent.
The reason we did not get this problem with #4052 was because the `Stream allocated bytes` was set to a value much closer to the `Channel WriteBuffer High Watermark`. That triggered a `channelWritabilityChanged` event, which ended up writing/flushing more and more of the response to the socket. Below is an example of what I mean:
Response / Channel / Stream Info:
Response size: 100,000 bytes
Channel WriteBuffer High Watermark: 65,535 bytes
Stream allocated bytes: 65,500 bytes
Given the scenario above:
1. `Http2ConnectionHandler` receives a call to [flush](https://github.com/netty/netty/blob/master/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java#L209) the converted `FullHttpResponse` and tells the flow controller to write the pending bytes.
2. the flow controller calculates the number of bytes to write ([allocate](https://github.com/netty/netty/blob/master/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java#L305)) for the stream.
3. the flow controller then triggers a [write N allocated bytes](https://github.com/netty/netty/blob/master/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java#L310) for each stream.
4. DefaultState will [write](https://github.com/netty/netty/blob/master/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java#L544) 65,500 bytes down the pipeline.
5. `Http2ConnectionHandler` will then trigger a [flush](https://github.com/netty/netty/blob/master/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java#L211) to flush the 65,500 bytes to the socket.
6. `SslHandler` encrypts the 65,500 bytes, adding a further N bytes and thus triggering a `channelWritabilityChanged` event.
7. [ChannelWritabilityChanged](https://github.com/netty/netty/blob/master/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java#L441) event triggers further writes/flushes to the socket.
A side note to all of this... I spotted something that might be a bug, or maybe it is just incorrect documentation. The code [here](https://github.com/netty/netty/blob/master/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java#L652) states that all the frame data was written, but the `if` is checking `frame.size() != 0`. Should it actually be `frame.size() == 0` ??? Or maybe the documentation needs clearing up?
**Reproducer**
@Scottmitch - The reproducer I gave you for #4052 can be used to produce the scenario I've stated above. In order to reproduce the issue, just request the file using Firefox (version I used was v40.0.3) instead of Chrome. The HTTP/2 codec sets the stream's allocated bytes much lower for Firefox than it does from Chrome.
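The missing behavior ("trigger another write, after flushing") can be sketched with a toy loop over simplified flow-controller state. All names and numbers below are illustrative stand-ins, not Netty's actual types:

```java
public class WriteLoopSketch {
    // Toy stand-ins for the real flow-controller state (illustrative values).
    static int connectionWindow = 200_000;     // peer's connection flow-control window
    static int pendingBytes = 100_000;         // response bytes queued for the stream
    static final int PER_PASS_BUDGET = 43_000; // what one allocation pass hands out
    static int written;

    static int writableBytes() {
        return Math.min(connectionWindow, Math.min(pendingBytes, PER_PASS_BUDGET));
    }

    // Keep allocating and writing until the streams are drained, the window is
    // exhausted, or the channel stops accepting data (assumed writable here).
    static void writePendingBytes() {
        int allocated;
        do {
            allocated = writableBytes();
            if (allocated > 0) {
                pendingBytes -= allocated;
                connectionWindow -= allocated;
                written += allocated;
            }
        } while (pendingBytes > 0 && allocated > 0);
    }

    public static void main(String[] args) {
        writePendingBytes();
        // A single pass would stop at 43,000; looping drains all 100,000 bytes.
        System.out.println(written);
        if (written != 100_000) throw new AssertionError();
    }
}
```

Without the loop, the first pass writes 43,000 bytes and stops, matching the stalled-response symptom described above.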
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java"
] | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java"
] | [] | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java
index 33b4262b001..3f7f3260835 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java
@@ -297,17 +297,22 @@ private int writableBytes(int requestedBytes) {
*/
@Override
public void writePendingBytes() throws Http2Exception {
- Http2Stream connectionStream = connection.connectionStream();
- int connectionWindowSize = writableBytes(state(connectionStream).windowSize());
-
- if (connectionWindowSize > 0) {
- // Allocate the bytes for the connection window to the streams, but do not write.
- allocateBytesForTree(connectionStream, connectionWindowSize);
- }
+ AbstractState connectionState = connectionState();
+ int connectionWindowSize;
+ do {
+ connectionWindowSize = writableBytes(connectionState.windowSize());
+
+ if (connectionWindowSize > 0) {
+ // Allocate the bytes for the connection window to the streams, but do not write.
+ allocateBytesForTree(connectionState.stream(), connectionWindowSize);
+ }
- // Now write all of the allocated bytes, must write as there may be empty frames with
- // EOS = true
- connection.forEachActiveStream(WRITE_ALLOCATED_BYTES);
+ // Write all of allocated bytes. We must call this even if no bytes are allocated as it is possible there
+ // are empty frames indicating the End Of Stream.
+ connection.forEachActiveStream(WRITE_ALLOCATED_BYTES);
+ } while (connectionState.streamableBytesForTree() > 0 &&
+ connectionWindowSize > 0 &&
+ ctx.channel().isWritable());
}
/**
| null | test | train | 2015-09-23T08:41:23 | 2015-09-20T11:45:43Z | blucas | val |
netty/netty/4265_4279 | netty/netty | netty/netty/4265 | netty/netty/4279 | [
"timestamp(timedelta=14.0, similarity=0.8620227261918824)"
] | 127886f469304f37ce39e5fff2412db54f8890e2 | d187462e394d4de2833c43bf680c8450e9aa0799 | [
"...why would you even do that\n",
"@willisblackburn thanks for the report. Will add a guard against this\n",
"@willisblackburn - I'm assuming you are using Netty 4.0?\n"
] | [
"In 4.1 we are not consistent in that this method `returns this;` and `add` throws an exception. I think we should be consistent in how we behave across versions (and maybe how we react to these 2 situations). So do we want to throw an exception, or just `return this;`?\n",
"hmm. when adding this check my thought... | 2015-09-25T15:54:03Z | [
"defect"
] | Adding DefaultHttpHeaders to itself creates infinite loop | Example:
```
public void test() {
HttpHeaders headers = new DefaultHttpHeaders();
headers.add("foo", "bar");
headers.add(headers);
// This will never end
headers.forEach(entry -> {});
}
```
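The guard that the fix adds can be demonstrated on a stripped-down stand-in (`SimpleHeaders` is hypothetical and only mimics the relevant behavior of `DefaultHttpHeaders`): copying entries from a live view of a collection while appending to it never terminates, so self-addition is rejected up front.

```java
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class SimpleHeaders {
    private final List<Map.Entry<String, String>> entries = new ArrayList<>();

    public SimpleHeaders add(String name, String value) {
        entries.add(new AbstractMap.SimpleEntry<>(name, value));
        return this;
    }

    public SimpleHeaders add(SimpleHeaders headers) {
        // Same guard shape as the fix: reject adding a headers object to itself.
        if (headers == this) {
            throw new IllegalArgumentException("can't add to itself.");
        }
        entries.addAll(headers.entries);
        return this;
    }

    public int size() {
        return entries.size();
    }

    public static void main(String[] args) {
        SimpleHeaders headers = new SimpleHeaders().add("foo", "bar");
        try {
            headers.add(headers);
            throw new AssertionError("expected IllegalArgumentException");
        } catch (IllegalArgumentException expected) {
            System.out.println("rejected self-add: " + expected.getMessage());
        }
        if (headers.size() != 1) throw new AssertionError();
    }
}
```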
| [
"codec-http/src/main/java/io/netty/handler/codec/http/DefaultHttpHeaders.java"
] | [
"codec-http/src/main/java/io/netty/handler/codec/http/DefaultHttpHeaders.java"
] | [
"codec-http/src/test/java/io/netty/handler/codec/http/HttpHeadersTest.java"
] | diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/DefaultHttpHeaders.java b/codec-http/src/main/java/io/netty/handler/codec/http/DefaultHttpHeaders.java
index 2b706565083..3b3d9eca61b 100644
--- a/codec-http/src/main/java/io/netty/handler/codec/http/DefaultHttpHeaders.java
+++ b/codec-http/src/main/java/io/netty/handler/codec/http/DefaultHttpHeaders.java
@@ -59,6 +59,9 @@ void validateHeaderName0(CharSequence headerName) {
@Override
public HttpHeaders add(HttpHeaders headers) {
if (headers instanceof DefaultHttpHeaders) {
+ if (headers == this) {
+ throw new IllegalArgumentException("can't add to itself.");
+ }
DefaultHttpHeaders defaultHttpHeaders = (DefaultHttpHeaders) headers;
HeaderEntry e = defaultHttpHeaders.head.after;
while (e != defaultHttpHeaders.head) {
@@ -74,6 +77,9 @@ public HttpHeaders add(HttpHeaders headers) {
@Override
public HttpHeaders set(HttpHeaders headers) {
if (headers instanceof DefaultHttpHeaders) {
+ if (headers == this) {
+ throw new IllegalArgumentException("can't add to itself.");
+ }
clear();
DefaultHttpHeaders defaultHttpHeaders = (DefaultHttpHeaders) headers;
HeaderEntry e = defaultHttpHeaders.head.after;
| diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/HttpHeadersTest.java b/codec-http/src/test/java/io/netty/handler/codec/http/HttpHeadersTest.java
index 47302c7425d..e2eadc33cab 100644
--- a/codec-http/src/test/java/io/netty/handler/codec/http/HttpHeadersTest.java
+++ b/codec-http/src/test/java/io/netty/handler/codec/http/HttpHeadersTest.java
@@ -67,4 +67,16 @@ public void testSetNullHeaderValueNotValidate() {
HttpHeaders headers = new DefaultHttpHeaders(false);
headers.set("test", (CharSequence) null);
}
+
+ @Test(expected = IllegalArgumentException.class)
+ public void testAddSelf() {
+ HttpHeaders headers = new DefaultHttpHeaders(false);
+ headers.add(headers);
+ }
+
+ @Test(expected = IllegalArgumentException.class)
+ public void testSetSelf() {
+ HttpHeaders headers = new DefaultHttpHeaders(false);
+ headers.set(headers);
+ }
}
| test | train | 2015-09-25T02:38:29 | 2015-09-23T20:52:27Z | willisblackburn | val |
netty/netty/4266_4282 | netty/netty | netty/netty/4266 | netty/netty/4282 | [
"timestamp(timedelta=80.0, similarity=0.8921734592383369)"
] | 747533408dbf3fa04fe0753e0f20cf80b8ac66ed | 60dbed85f22bbf3dae64023d14e792a1d840957d | [
"@nmittler - FYI\n",
"This is also related to https://github.com/netty/netty/issues/4242\n",
"@Scottmitch - Could you give an example for when this would happen? I haven't had this happen to me yet when testing HTTP/2 with Chrome/FF.\n",
"+1\n\n@Scottmitch as discussed offline, the first thing we need is a ne... | [
"revert this? It was more readable with the carriage return.\n",
"will do :)\n",
"Don't you still need `nextTotalWeight += child.weight`?\n",
"Oh you updated `stillHungry` to do that ... much better :)\n",
"I already moved into `stillHungry`. Seemed like it belonged there.\n",
"> Oh you updated stillHungr... | 2015-09-25T17:00:46Z | [
"defect"
] | DefaultHttp2RemoteFlowController not re-visiting nodes that still have streamable bytes | The allocation algorithm may not distribute all the available bytes even though the streams may be able to use all the bytes. This is caused by the children that have streamable bytes not being considered hungry because the `nextConnectionWindow` has collapsed.
Also related to [DefaultHttp2RemoteFlowController](https://github.com/netty/netty/blob/master/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java#L377) not taking into account the streamable bytes of peers.
``` java
int connectionWindowChunk = max(1, (int) (connectionWindow * (child.weight() / (double) totalWeight)));
```
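A small worked example (illustrative numbers only) of why an expression of this shape can strand part of the window: integer truncation of each child's weight-proportional chunk leaves bytes unallocated after a single pass, which is why still-hungry nodes need to be revisited.

```java
public class ChunkDemo {
    // Mirrors the shape of the quoted expression; values are made up for the demo.
    static int chunk(int connectionWindow, short weight, int totalWeight) {
        return Math.max(1, (int) (connectionWindow * (weight / (double) totalWeight)));
    }

    public static void main(String[] args) {
        int window = 10;
        short[] weights = {3, 3, 3}; // three siblings, totalWeight = 9
        int allocated = 0;
        for (short w : weights) {
            allocated += chunk(window, w, 9); // (int) (10 * 3 / 9.0) == 3 per child
        }
        System.out.println(allocated); // 9: one byte of the 10-byte window is stranded
        if (allocated != 9) throw new AssertionError();
    }
}
```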
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java"
] | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java"
] | [
"codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowControllerTest.java"
] | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java
index 3f7f3260835..b9243de2a4f 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java
@@ -288,7 +288,12 @@ private int maxUsableChannelBytes() {
return min(connectionState().windowSize(), useableBytes);
}
- private int writableBytes(int requestedBytes) {
+ /**
+ * Package private for testing purposes only!
+ * @param requestedBytes The desired amount of bytes.
+ * @return The amount of bytes that can be supported by underlying {@link Channel} without queuing "too-much".
+ */
+ final int writableBytes(int requestedBytes) {
return Math.min(requestedBytes, maxUsableChannelBytes());
}
@@ -386,15 +391,6 @@ public boolean visit(Http2Stream child) throws Http2Exception {
bytesAllocated += bytesForChild;
nextConnectionWindow -= bytesForChild;
bytesForTree -= bytesForChild;
-
- // If this subtree still wants to send then re-insert into children list and re-consider for next
- // iteration. This is needed because we don't yet know if all the peers will be able to use
- // all of their "fair share" of the connection window, and if they don't use it then we should
- // divide their unused shared up for the peers who still want to send.
- if (nextConnectionWindow > 0 && state.streamableBytesForTree() > 0) {
- stillHungry(child);
- nextTotalWeight += child.weight();
- }
}
// Allocate any remaining bytes to the children of this stream.
@@ -404,7 +400,18 @@ public boolean visit(Http2Stream child) throws Http2Exception {
nextConnectionWindow -= childBytesAllocated;
}
- return nextConnectionWindow > 0;
+ if (nextConnectionWindow > 0) {
+ // If this subtree still wants to send then it should be re-considered to take bytes that are unused by
+ // sibling nodes. This is needed because we don't yet know if all the peers will be able to use all of
+ // their "fair share" of the connection window, and if they don't use it then we should divide their
+ // unused shared up for the peers who still want to send.
+ if (state.streamableBytesForTree() > 0) {
+ stillHungry(child);
+ }
+ return true;
+ }
+
+ return false;
}
void feedHungryChildren() throws Http2Exception {
@@ -438,15 +445,16 @@ void feedHungryChildren() throws Http2Exception {
* Indicates that the given child is still hungry (i.e. still has streamable bytes that can
* fit within the current connection window).
*/
- void stillHungry(Http2Stream child) {
+ private void stillHungry(Http2Stream child) {
ensureSpaceIsAllocated(nextTail);
stillHungry[nextTail++] = child;
+ nextTotalWeight += child.weight();
}
/**
* Ensures that the {@link #stillHungry} array is properly sized to hold the given index.
*/
- void ensureSpaceIsAllocated(int index) {
+ private void ensureSpaceIsAllocated(int index) {
if (stillHungry == null) {
// Initial size is 1/4 the number of children. Clipping the minimum at 2, which will over allocate if
// maxSize == 1 but if this was true we shouldn't need to re-allocate because the 1 child should get
| diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowControllerTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowControllerTest.java
index 963c43f9e8f..95dd831fd3e 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowControllerTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowControllerTest.java
@@ -755,6 +755,56 @@ public void reprioritizeShouldAdjustOutboundFlow() throws Http2Exception {
verify(listener, times(1)).streamWritten(stream(STREAM_D), 5);
}
+ /**
+ * Test that the maximum allowed amount the flow controller allows to be sent is always fully allocated if
+ * the streams have at least this much data to send. See https://github.com/netty/netty/issues/4266.
+ * <pre>
+ * 0
+ * / | \
+ * / | \
+ * A(0) B(0) C(0)
+ * /
+ * D(> allowed to send in 1 allocation attempt)
+ * </pre>
+ */
+ @Test
+ public void unstreamableParentsShouldFeedHungryChildren() throws Http2Exception {
+ // Max all connection windows. We don't want this being a limiting factor in the test.
+ maxStreamWindow(CONNECTION_STREAM_ID);
+ maxStreamWindow(STREAM_A);
+ maxStreamWindow(STREAM_B);
+ maxStreamWindow(STREAM_C);
+ maxStreamWindow(STREAM_D);
+
+ // Setup the priority tree.
+ setPriority(STREAM_A, 0, (short) 32, false);
+ setPriority(STREAM_B, 0, (short) 16, false);
+ setPriority(STREAM_C, 0, (short) 16, false);
+ setPriority(STREAM_D, STREAM_A, (short) 16, false);
+
+ // The bytesBeforeUnwritable defaults to Long.MAX_VALUE, we need to leave room to send enough data to exceed
+ // the writableBytes, and so we must reduce this value to something no-zero.
+ when(channel.bytesBeforeUnwritable()).thenReturn(1L);
+
+ // Calculate the max amount of data the flow controller will allow to be sent now.
+ final int writableBytes = controller.writableBytes(window(CONNECTION_STREAM_ID));
+
+ // This is insider knowledge into how writePendingBytes works. Because the algorithm will keep looping while
+ // the channel is writable, we simulate that the channel will become unwritable after the first write.
+ when(channel.isWritable()).thenReturn(false);
+
+ // Send enough so it can not be completely written out
+ final int expectedUnsentAmount = 1;
+ // Make sure we don't overflow
+ assertTrue(Integer.MAX_VALUE - expectedUnsentAmount > writableBytes);
+ FakeFlowControlled dataD = new FakeFlowControlled(writableBytes + expectedUnsentAmount);
+ sendData(STREAM_D, dataD);
+ controller.writePendingBytes();
+
+ dataD.assertPartiallyWritten(writableBytes);
+ verify(listener, times(1)).streamWritten(eq(stream(STREAM_D)), eq(writableBytes));
+ }
+
/**
* In this test, we root all streams at the connection, and then verify that data is split appropriately based on
* weight (all available data is the same).
@@ -1431,6 +1481,10 @@ private void exhaustStreamWindow(int streamId) throws Http2Exception {
incrementWindowSize(streamId, -window(streamId));
}
+ private void maxStreamWindow(int streamId) throws Http2Exception {
+ incrementWindowSize(streamId, Http2CodecUtil.MAX_INITIAL_WINDOW_SIZE - window(streamId));
+ }
+
private int window(int streamId) throws Http2Exception {
return controller.windowSize(stream(streamId));
}
| train | train | 2015-09-25T20:05:14 | 2015-09-23T23:43:58Z | Scottmitch | val |
netty/netty/4289_4309 | netty/netty | netty/netty/4289 | netty/netty/4309 | [
"timestamp(timedelta=388.0, similarity=0.8750055050435257)"
] | 11e8163aa9d29074d5002da662c5284e82ecd0d4 | 426b6bb4a5e21c12300e455a062735d17f9e0294 | [
"@trustin @normanmaurer - FYI\n",
"Already working on it :)\n\n> Am 27.09.2015 um 07:42 schrieb Scott Mitchell notifications@github.com:\n> \n> @trustin @normanmaurer - FYI\n> \n> —\n> Reply to this email directly or view it on GitHub.\n",
"Thanks @normanmaurer !\n",
"Thank you guys!\n"
] | [
"Does it make sense to keep this to be able to test against \"real\" servers during development? Enabled via some explicit command line flag?\n",
"not sure... I think it is fine to just use the mock.\n",
"Does it make sense to have some variation in the responses (and number of responses)?\n",
"sgtm. lets kee... | 2015-10-02T19:11:02Z | [
"improvement"
] | Mock DNS server for automated unit tests | The DNS unit tests frequently fail because they are relying on external DNS servers. We should work to make them more reliable and ideally decouple their reliance on external DNS servers.
| [
"pom.xml",
"resolver-dns/pom.xml",
"resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolver.java",
"resolver-dns/src/main/java/io/netty/resolver/dns/DnsQueryContext.java"
] | [
"pom.xml",
"resolver-dns/pom.xml",
"resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolver.java",
"resolver-dns/src/main/java/io/netty/resolver/dns/DnsQueryContext.java"
] | [
"resolver-dns/src/test/java/io/netty/resolver/dns/DnsNameResolverTest.java",
"resolver-dns/src/test/resources/logback-test.xml"
] | diff --git a/pom.xml b/pom.xml
index 88a45d0ef24..689a9d5edd1 100644
--- a/pom.xml
+++ b/pom.xml
@@ -893,6 +893,14 @@
<artifactId>xz</artifactId>
<version>1.5</version>
</dependency>
+
+ <!-- Test dependency for resolver-dns -->
+ <dependency>
+ <groupId>org.apache.directory.server</groupId>
+ <artifactId>apacheds-protocol-dns</artifactId>
+ <version>1.5.7</version>
+ <scope>test</scope>
+ </dependency>
</dependencies>
</dependencyManagement>
diff --git a/resolver-dns/pom.xml b/resolver-dns/pom.xml
index ac130989795..905f5c2c1ad 100644
--- a/resolver-dns/pom.xml
+++ b/resolver-dns/pom.xml
@@ -44,6 +44,11 @@
<artifactId>netty-transport</artifactId>
<version>${project.version}</version>
</dependency>
+ <dependency>
+ <groupId>org.apache.directory.server</groupId>
+ <artifactId>apacheds-protocol-dns</artifactId>
+ <scope>test</scope>
+ </dependency>
</dependencies>
</project>
diff --git a/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolver.java b/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolver.java
index 49ce9663bad..9c3b7a1d2bc 100644
--- a/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolver.java
+++ b/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolver.java
@@ -129,6 +129,7 @@ protected DnsServerAddressStream initialValue() throws Exception {
private volatile boolean recursionDesired = true;
private volatile int maxPayloadSize;
+ private volatile boolean optResourceEnabled = true;
/**
* Creates a new DNS-based name resolver that communicates with the specified list of DNS servers.
@@ -520,6 +521,24 @@ public DnsNameResolver setMaxPayloadSize(int maxPayloadSize) {
return this;
}
+ /**
+ * Enable the automatic inclusion of a optional records that tries to give the remote DNS server a hint about how
+ * much data the resolver can read per response. Some DNSServer may not support this and so fail to answer
+ * queries. If you find problems you may want to disable this.
+ */
+ public DnsNameResolver setOptResourceEnabled(boolean optResourceEnabled) {
+ this.optResourceEnabled = optResourceEnabled;
+ return this;
+ }
+
+ /**
+ * Returns the automatic inclusion of a optional records that tries to give the remote DNS server a hint about how
+ * much data the resolver can read per response is enabled.
+ */
+ public boolean isOptResourceEnabled() {
+ return optResourceEnabled;
+ }
+
/**
* Clears all the resolved addresses cached by this resolver.
*
diff --git a/resolver-dns/src/main/java/io/netty/resolver/dns/DnsQueryContext.java b/resolver-dns/src/main/java/io/netty/resolver/dns/DnsQueryContext.java
index f7ab7696ffc..1d1d96c35c1 100644
--- a/resolver-dns/src/main/java/io/netty/resolver/dns/DnsQueryContext.java
+++ b/resolver-dns/src/main/java/io/netty/resolver/dns/DnsQueryContext.java
@@ -63,8 +63,12 @@ final class DnsQueryContext {
id = allocateId();
recursionDesired = parent.isRecursionDesired();
- optResource = new DefaultDnsRawRecord(
- StringUtil.EMPTY_STRING, DnsRecordType.OPT, parent.maxPayloadSize(), 0, Unpooled.EMPTY_BUFFER);
+ if (parent.isOptResourceEnabled()) {
+ optResource = new DefaultDnsRawRecord(
+ StringUtil.EMPTY_STRING, DnsRecordType.OPT, parent.maxPayloadSize(), 0, Unpooled.EMPTY_BUFFER);
+ } else {
+ optResource = null;
+ }
}
private int allocateId() {
@@ -89,7 +93,9 @@ void query() {
final DatagramDnsQuery query = new DatagramDnsQuery(null, nameServerAddr, id);
query.setRecursionDesired(recursionDesired);
query.setRecord(DnsSection.QUESTION, question);
- query.setRecord(DnsSection.ADDITIONAL, optResource);
+ if (optResource != null) {
+ query.setRecord(DnsSection.ADDITIONAL, optResource);
+ }
if (logger.isDebugEnabled()) {
logger.debug("{} WRITE: [{}: {}], {}", parent.ch, id, nameServerAddr, question);
| diff --git a/resolver-dns/src/test/java/io/netty/resolver/dns/DnsNameResolverTest.java b/resolver-dns/src/test/java/io/netty/resolver/dns/DnsNameResolverTest.java
index 8967150cb1d..a1ca634386e 100644
--- a/resolver-dns/src/test/java/io/netty/resolver/dns/DnsNameResolverTest.java
+++ b/resolver-dns/src/test/java/io/netty/resolver/dns/DnsNameResolverTest.java
@@ -28,15 +28,42 @@
import io.netty.handler.codec.dns.DnsResponse;
import io.netty.handler.codec.dns.DnsResponseCode;
import io.netty.handler.codec.dns.DnsSection;
+import io.netty.util.NetUtil;
import io.netty.util.concurrent.Future;
import io.netty.util.internal.StringUtil;
import io.netty.util.internal.ThreadLocalRandom;
import io.netty.util.internal.logging.InternalLogger;
import io.netty.util.internal.logging.InternalLoggerFactory;
+import org.apache.directory.server.dns.DnsServer;
+import org.apache.directory.server.dns.io.encoder.DnsMessageEncoder;
+import org.apache.directory.server.dns.io.encoder.ResourceRecordEncoder;
+import org.apache.directory.server.dns.messages.DnsMessage;
+import org.apache.directory.server.dns.messages.QuestionRecord;
+import org.apache.directory.server.dns.messages.RecordClass;
+import org.apache.directory.server.dns.messages.RecordType;
+import org.apache.directory.server.dns.messages.ResourceRecord;
+import org.apache.directory.server.dns.messages.ResourceRecordModifier;
+import org.apache.directory.server.dns.protocol.DnsProtocolHandler;
+import org.apache.directory.server.dns.protocol.DnsUdpDecoder;
+import org.apache.directory.server.dns.protocol.DnsUdpEncoder;
+import org.apache.directory.server.dns.store.DnsAttribute;
+import org.apache.directory.server.dns.store.RecordStore;
+import org.apache.directory.server.protocol.shared.transport.UdpTransport;
+import org.apache.mina.core.buffer.IoBuffer;
+import org.apache.mina.core.session.IoSession;
+import org.apache.mina.filter.codec.ProtocolCodecFactory;
+import org.apache.mina.filter.codec.ProtocolCodecFilter;
+import org.apache.mina.filter.codec.ProtocolDecoder;
+import org.apache.mina.filter.codec.ProtocolEncoder;
+import org.apache.mina.filter.codec.ProtocolEncoderOutput;
+import org.apache.mina.transport.socket.DatagramAcceptor;
+import org.apache.mina.transport.socket.DatagramSessionConfig;
import org.junit.After;
import org.junit.AfterClass;
+import org.junit.BeforeClass;
import org.junit.Test;
+import java.io.IOException;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.UnknownHostException;
@@ -63,86 +90,12 @@ public class DnsNameResolverTest {
private static final InternalLogger logger = InternalLoggerFactory.getInstance(DnsNameResolver.class);
- private static final List<InetSocketAddress> SERVERS = Arrays.asList(
- new InetSocketAddress("8.8.8.8", 53), // Google Public DNS
- new InetSocketAddress("8.8.4.4", 53),
- new InetSocketAddress("208.67.222.222", 53), // OpenDNS
- new InetSocketAddress("208.67.220.220", 53),
- new InetSocketAddress("208.67.222.220", 53),
- new InetSocketAddress("208.67.220.222", 53),
- new InetSocketAddress("37.235.1.174", 53), // FreeDNS
- new InetSocketAddress("37.235.1.177", 53),
- //
- // OpenNIC - Fusl's Tier 2 DNS servers
- //
- // curl http://meo.ws/dnsrec.php | \
- // perl -p0 -e 's#(^(.|\r|\n)*<textarea[^>]*>|</textarea>(.|\r|\n)*)##g' | \
- // awk -F ',' '{ print $14 }' | \
- // grep -E '^[0-9]+.[0-9]+.[0-9]+.[0-9]+$' | \
- // perl -p -e 's/^/new InetSocketAddress("/' | \
- // perl -p -e 's/$/", 53),/'
- //
- new InetSocketAddress("79.133.43.124", 53),
- new InetSocketAddress("151.236.10.36", 53),
- new InetSocketAddress("163.47.20.30", 53),
- new InetSocketAddress("103.25.56.238", 53),
- new InetSocketAddress("111.223.227.125", 53),
- new InetSocketAddress("103.241.0.207", 53),
- new InetSocketAddress("192.71.249.83", 53),
- new InetSocketAddress("69.28.67.83", 53),
- new InetSocketAddress("192.121.170.22", 53),
- new InetSocketAddress("62.141.38.230", 53),
- new InetSocketAddress("185.97.7.7", 53),
- new InetSocketAddress("84.200.83.161", 53),
- new InetSocketAddress("78.47.34.12", 53),
- new InetSocketAddress("41.215.240.141", 53),
- new InetSocketAddress("5.134.117.239", 53),
- new InetSocketAddress("95.175.99.231", 53),
- new InetSocketAddress("92.222.80.28", 53),
- new InetSocketAddress("178.79.174.162", 53),
- new InetSocketAddress("95.129.41.126", 53),
- new InetSocketAddress("103.53.199.71", 53),
- new InetSocketAddress("176.62.0.26", 53),
- new InetSocketAddress("185.112.156.159", 53),
- new InetSocketAddress("217.78.6.191", 53),
- new InetSocketAddress("193.182.144.83", 53),
- new InetSocketAddress("37.235.55.46", 53),
- new InetSocketAddress("103.250.184.85", 53),
- new InetSocketAddress("151.236.24.245", 53),
- new InetSocketAddress("192.121.47.47", 53),
- new InetSocketAddress("106.185.41.36", 53),
- new InetSocketAddress("88.82.109.119", 53),
- new InetSocketAddress("212.117.180.145", 53),
- new InetSocketAddress("185.61.149.228", 53),
- new InetSocketAddress("93.158.205.94", 53),
- new InetSocketAddress("31.220.43.191", 53),
- new InetSocketAddress("91.247.228.155", 53),
- new InetSocketAddress("163.47.21.44", 53),
- new InetSocketAddress("94.46.12.224", 53),
- new InetSocketAddress("46.108.39.139", 53),
- new InetSocketAddress("94.242.57.130", 53),
- new InetSocketAddress("46.151.215.199", 53),
- new InetSocketAddress("31.220.5.106", 53),
- new InetSocketAddress("103.25.202.192", 53),
- new InetSocketAddress("185.65.206.121", 53),
- new InetSocketAddress("91.229.79.104", 53),
- new InetSocketAddress("74.207.241.202", 53),
- new InetSocketAddress("104.245.33.185", 53),
- new InetSocketAddress("104.245.39.112", 53),
- new InetSocketAddress("74.207.232.103", 53),
- new InetSocketAddress("104.237.144.172", 53),
- new InetSocketAddress("104.237.136.225", 53),
- new InetSocketAddress("104.219.55.89", 53),
- new InetSocketAddress("23.226.230.72", 53),
- new InetSocketAddress("41.185.78.25", 53)
- );
-
// Using the top-100 web sites ranked in Alexa.com (Oct 2014)
// Please use the following series of shell commands to get this up-to-date:
// $ curl -O http://s3.amazonaws.com/alexa-static/top-1m.csv.zip
// $ unzip -o top-1m.csv.zip top-1m.csv
// $ head -100 top-1m.csv | cut -d, -f2 | cut -d/ -f1 | while read L; do echo '"'"$L"'",'; done > topsites.txt
- private static final String[] DOMAINS = {
+ private static final Set<String> DOMAINS = Collections.unmodifiableSet(new HashSet<String>(Arrays.asList(
"google.com",
"facebook.com",
"youtube.com",
@@ -241,8 +194,7 @@ public class DnsNameResolverTest {
"cnet.com",
"vimeo.com",
"redtube.com",
- "blogspot.in",
- };
+ "blogspot.in")));
/**
* The list of the domain names to exclude from {@link #testResolveAorAAAA()}.
@@ -263,7 +215,7 @@ public class DnsNameResolverTest {
private static final Set<String> EXCLUSIONS_RESOLVE_AAAA = new HashSet<String>();
static {
EXCLUSIONS_RESOLVE_AAAA.addAll(EXCLUSIONS_RESOLVE_A);
- Collections.addAll(EXCLUSIONS_RESOLVE_AAAA, DOMAINS);
+ EXCLUSIONS_RESOLVE_AAAA.addAll(DOMAINS);
EXCLUSIONS_RESOLVE_AAAA.removeAll(Arrays.asList(
"google.com",
"facebook.com",
@@ -311,16 +263,21 @@ public class DnsNameResolverTest {
StringUtil.EMPTY_STRING);
}
+ private static final TestDnsServer dnsServer = new TestDnsServer();
private static final EventLoopGroup group = new NioEventLoopGroup(1);
- private static final DnsNameResolver resolver = new DnsNameResolver(
- group.next(), NioDatagramChannel.class, DnsServerAddresses.shuffled(SERVERS));
-
- static {
- resolver.setMaxQueriesPerResolve(SERVERS.size());
+ private static DnsNameResolver resolver;
+
+ @BeforeClass
+ public static void init() throws Exception {
+ dnsServer.start();
+ resolver = new DnsNameResolver(group.next(), NioDatagramChannel.class,
+ DnsServerAddresses.singleton(dnsServer.localAddress()));
+ resolver.setMaxQueriesPerResolve(1);
+ resolver.setOptResourceEnabled(false);
}
-
@AfterClass
public static void destroy() {
+ dnsServer.stop();
group.shutdownGracefully();
}
@@ -340,8 +297,7 @@ public void testResolveAAAAorA() throws Exception {
}
@Test
- public void testResolveA() throws Exception {
-
+ public void testResolveA() throws Exception {
final int oldMinTtl = resolver.minTtl();
final int oldMaxTtl = resolver.maxTtl();
@@ -448,15 +404,6 @@ public void testQueryMx() throws Exception {
for (Entry<String, Future<AddressedEnvelope<DnsResponse, InetSocketAddress>>> e: futures.entrySet()) {
String hostname = e.getKey();
Future<AddressedEnvelope<DnsResponse, InetSocketAddress>> f = e.getValue().awaitUninterruptibly();
- if (!f.isSuccess()) {
- // Try again because the DNS servers might be throttling us down.
- for (int i = 0; i < SERVERS.size(); i++) {
- f = queryMx(hostname).awaitUninterruptibly();
- if (f.isSuccess() && !DnsResponseCode.SERVFAIL.equals(f.getNow().content().code())) {
- break;
- }
- }
- }
DnsResponse response = f.getNow().content();
assertThat(response.code(), is(DnsResponseCode.NOERROR));
@@ -563,7 +510,174 @@ private static void queryMx(
futures.put(hostname, resolver.query(new DefaultDnsQuestion(hostname, DnsRecordType.MX)));
}
- private static Future<AddressedEnvelope<DnsResponse, InetSocketAddress>> queryMx(String hostname) throws Exception {
- return resolver.query(new DefaultDnsQuestion(hostname, DnsRecordType.MX));
+ private static final class TestDnsServer extends DnsServer {
+ private static final Map<String, byte[]> BYTES = new HashMap<String, byte[]>();
+ private static final String[] IPV6_ADDRESSES;
+ static {
+ BYTES.put("::1", new byte[] {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1});
+ BYTES.put("0:0:0:0:0:0:1:1", new byte[] {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1});
+ BYTES.put("0:0:0:0:0:1:1:1", new byte[] {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1});
+ BYTES.put("0:0:0:0:1:1:1:1", new byte[] {0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1});
+ BYTES.put("0:0:0:1:1:1:1:1", new byte[] {0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1});
+ BYTES.put("0:0:1:1:1:1:1:1", new byte[] {0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1});
+ BYTES.put("0:1:1:1:1:1:1:1", new byte[] {0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1});
+ BYTES.put("1:1:1:1:1:1:1:1", new byte[] {0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1});
+
+ IPV6_ADDRESSES = BYTES.keySet().toArray(new String[BYTES.size()]);
+ }
+
+ @Override
+ public void start() throws IOException {
+ InetSocketAddress address = new InetSocketAddress(NetUtil.LOCALHOST4, 0);
+ UdpTransport transport = new UdpTransport(address.getHostName(), address.getPort());
+ setTransports(transport);
+
+ DatagramAcceptor acceptor = transport.getAcceptor();
+
+ acceptor.setHandler(new DnsProtocolHandler(this, new TestRecordStore()) {
+ @Override
+ public void sessionCreated(IoSession session) throws Exception {
+ // USe our own codec to support AAAA testing
+ session.getFilterChain()
+ .addFirst("codec", new ProtocolCodecFilter(new TestDnsProtocolUdpCodecFactory()));
+ }
+ });
+
+ ((DatagramSessionConfig) acceptor.getSessionConfig()).setReuseAddress(true);
+
+ // Start the listener
+ acceptor.bind();
+ }
+
+ public InetSocketAddress localAddress() {
+ return (InetSocketAddress) getTransports()[0].getAcceptor().getLocalAddress();
+ }
+
+ /**
+ * {@link ProtocolCodecFactory} which allows to test AAAA resolution.
+ */
+ private static final class TestDnsProtocolUdpCodecFactory implements ProtocolCodecFactory {
+ private final DnsMessageEncoder encoder = new DnsMessageEncoder();
+ private final TestAAAARecordEncoder recordEncoder = new TestAAAARecordEncoder();
+
+ @Override
+ public ProtocolEncoder getEncoder(IoSession session) throws Exception {
+ return new DnsUdpEncoder() {
+
+ @Override
+ public void encode(IoSession session, Object message, ProtocolEncoderOutput out) {
+ IoBuffer buf = IoBuffer.allocate(1024);
+ DnsMessage dnsMessage = (DnsMessage) message;
+ encoder.encode(buf, dnsMessage);
+ for (ResourceRecord record: dnsMessage.getAnswerRecords()) {
+ // This is a hack to allow to also test for AAAA resolution as DnsMessageEncoder
+ // does not support it and it is hard to extend, because the interesting methods
+ // are private...
+ // In case of RecordType.AAAA we need to encode the RecordType by ourself.
+ if (record.getRecordType() == RecordType.AAAA) {
+ try {
+ recordEncoder.put(buf, record);
+ } catch (IOException e) {
+ // Should never happen
+ throw new IllegalStateException(e);
+ }
+ }
+ }
+ buf.flip();
+
+ out.write(buf);
+ }
+ };
+ }
+
+ @Override
+ public ProtocolDecoder getDecoder(IoSession session) throws Exception {
+ return new DnsUdpDecoder();
+ }
+
+ private static final class TestAAAARecordEncoder extends ResourceRecordEncoder {
+
+ @Override
+ protected void putResourceRecordData(IoBuffer ioBuffer, ResourceRecord resourceRecord) {
+ byte[] bytes = BYTES.get(resourceRecord.get(DnsAttribute.IP_ADDRESS));
+ if (bytes == null) {
+ throw new IllegalStateException();
+ }
+ // encode the ::1
+ ioBuffer.put(bytes);
+ }
+ }
+ }
+
+ private static final class TestRecordStore implements RecordStore {
+ private static final int[] NUMBERS = new int[254];
+ private static final char[] CHARS = new char[26];
+
+ static {
+ for (int i = 0; i < NUMBERS.length; i++) {
+ NUMBERS[i] = i + 1;
+ }
+
+ for (int i = 0; i < CHARS.length; i++) {
+ CHARS[i] = (char) ('a' + i);
+ }
+ }
+
+ private static int index(int arrayLength) {
+ return Math.abs(ThreadLocalRandom.current().nextInt()) % arrayLength;
+ }
+
+ private static String nextDomain() {
+ return CHARS[index(CHARS.length)] + ".netty.io";
+ }
+
+ private static String nextIp() {
+ return ippart() + "." + ippart() + '.' + ippart() + '.' + ippart();
+ }
+
+ private static int ippart() {
+ return NUMBERS[index(NUMBERS.length)];
+ }
+
+ private static String nextIp6() {
+ return IPV6_ADDRESSES[index(IPV6_ADDRESSES.length)];
+ }
+
+ @Override
+ public Set<ResourceRecord> getRecords(QuestionRecord questionRecord) {
+ String name = questionRecord.getDomainName();
+ if (DOMAINS.contains(name)) {
+ ResourceRecordModifier rm = new ResourceRecordModifier();
+ rm.setDnsClass(RecordClass.IN);
+ rm.setDnsName(name);
+ rm.setDnsTtl(100);
+ rm.setDnsType(questionRecord.getRecordType());
+
+ switch (questionRecord.getRecordType()) {
+ case A:
+ do {
+ rm.put(DnsAttribute.IP_ADDRESS, nextIp());
+ } while (ThreadLocalRandom.current().nextBoolean());
+ break;
+ case AAAA:
+ do {
+ rm.put(DnsAttribute.IP_ADDRESS, nextIp6());
+ } while (ThreadLocalRandom.current().nextBoolean());
+ break;
+ case MX:
+ int prioritity = 0;
+ do {
+ rm.put(DnsAttribute.DOMAIN_NAME, nextDomain());
+ rm.put(DnsAttribute.MX_PREFERENCE, String.valueOf(++prioritity));
+ } while (ThreadLocalRandom.current().nextBoolean());
+ break;
+ default:
+ return null;
+ }
+ return Collections.singleton(rm.getEntry());
+ }
+ return null;
+ }
+ }
}
}
diff --git a/resolver-dns/src/test/resources/logback-test.xml b/resolver-dns/src/test/resources/logback-test.xml
new file mode 100644
index 00000000000..86ce779632c
--- /dev/null
+++ b/resolver-dns/src/test/resources/logback-test.xml
@@ -0,0 +1,33 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+ ~ Copyright 2015 The Netty Project
+ ~
+ ~ The Netty Project licenses this file to you under the Apache License,
+ ~ version 2.0 (the "License"); you may not use this file except in compliance
+ ~ with the License. You may obtain a copy of the License at:
+ ~
+ ~ http://www.apache.org/licenses/LICENSE-2.0
+ ~
+ ~ Unless required by applicable law or agreed to in writing, software
+ ~ distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ ~ WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ ~ License for the specific language governing permissions and limitations
+ ~ under the License.
+ -->
+<configuration>
+
+ <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
+ <!-- encoders are assigned the type
+ ch.qos.logback.classic.encoder.PatternLayoutEncoder by default -->
+ <encoder>
+ <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
+ </encoder>
+ </appender>
+
+ <root level="info">
+ <appender-ref ref="STDOUT" />
+ </root>
+
+ // Disable logging for apacheds to reduce noise.
+ <logger name="org.apache.directory" level="off"/>
+</configuration>
| train | train | 2015-10-07T14:15:53 | 2015-09-27T05:42:02Z | Scottmitch | val |
netty/netty/4327_4341 | netty/netty | netty/netty/4327 | netty/netty/4341 | [
"timestamp(timedelta=14.0, similarity=0.861430330228934)"
] | 81a913ced197c2a1c3219ab602c49ded9ec195e7 | 6530ee5a4199dbbb180153fab396d746a76010d0 | [
"@jroper sounds like a bug... let me fix it.\n",
"Fixed by https://github.com/netty/netty/pull/4341\n"
] | [] | 2015-10-10T05:25:28Z | [
"defect"
] | A number of toString() methods on classes that implement ByteBufHolder can throw IllegalReferenceCountException | Two examples that I've found include `WebSocketFrame.toString()` and `DefaultByteBufHolder.toString()`. `toString()` implementations should always try to avoid throwing exceptions, since they're used for things like creating exception messages and arbitrary logging of inputs/outputs. Consequently, if `toString()` does throw an exception, this will often hide another real problem, a typical example would be:
``` java
TextWebSocketFrame frame = ...
try {
String text = frame.text();
ReferenceCountUtil.release(frame);
doSomeProcessing(text);
} catch (Exception e) {
log.error("Error processing frame: " + frame, e);
}
```
In this example, if `doSomeProcessing` throws an exception, the exception will be caught but never logged: when `frame.toString()` is invoked to build the log message, it throws an `IllegalReferenceCountException` instead, because the frame has already been released.
I don't think there's any need to guard access to `ByteBuf.toString()`, since it likewise shouldn't throw an exception for the same reasons as above, so `ByteBufHolder` `toString()` implementations should just invoke that directly without going through any reference count checks.
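As a library-free sketch of the suggested fix (all names here are illustrative stand-ins, not Netty's actual classes): `toString()` reads the underlying content directly and never consults the reference count, so logging a released object cannot throw.

```java
// Hypothetical stand-in for a reference-counted content holder; not Netty's real API.
final class Holder {
    private int refCnt = 1;
    private final String content;

    Holder(String content) {
        this.content = content;
    }

    // Accessors guard with the reference count, like ByteBufHolder.content() does.
    String text() {
        if (refCnt == 0) {
            // Models IllegalReferenceCountException.
            throw new IllegalStateException("refCnt: 0");
        }
        return content;
    }

    void release() {
        refCnt = 0;
    }

    // Safe for log/exception messages: bypasses the reference-count check entirely.
    @Override
    public String toString() {
        return "Holder(" + content + ')';
    }
}
```

Even after `release()`, `toString()` still succeeds here, which is the behavior a `contentToString()`-style helper on the base class would give subclasses.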
| [
"buffer/src/main/java/io/netty/buffer/DefaultByteBufHolder.java",
"codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketFrame.java",
"transport-sctp/src/main/java/io/netty/channel/sctp/SctpMessage.java"
] | [
"buffer/src/main/java/io/netty/buffer/DefaultByteBufHolder.java",
"codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketFrame.java",
"transport-sctp/src/main/java/io/netty/channel/sctp/SctpMessage.java"
] | [
"buffer/src/test/java/io/netty/buffer/DefaultByteBufHolderTest.java"
] | diff --git a/buffer/src/main/java/io/netty/buffer/DefaultByteBufHolder.java b/buffer/src/main/java/io/netty/buffer/DefaultByteBufHolder.java
index c029accedf6..ef39045dcdd 100644
--- a/buffer/src/main/java/io/netty/buffer/DefaultByteBufHolder.java
+++ b/buffer/src/main/java/io/netty/buffer/DefaultByteBufHolder.java
@@ -90,8 +90,16 @@ public boolean release(int decrement) {
return data.release(decrement);
}
+ /**
+ * Return {@link ByteBuf#toString()} without checking the reference count first. This is useful to implemement
+ * {@link #toString()}.
+ */
+ protected final String contentToString() {
+ return data.toString();
+ }
+
@Override
public String toString() {
- return StringUtil.simpleClassName(this) + '(' + content().toString() + ')';
+ return StringUtil.simpleClassName(this) + '(' + contentToString() + ')';
}
}
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketFrame.java b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketFrame.java
index c46eec82bdd..37606ab535f 100644
--- a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketFrame.java
+++ b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketFrame.java
@@ -68,7 +68,7 @@ public int rsv() {
@Override
public String toString() {
- return StringUtil.simpleClassName(this) + "(data: " + content() + ')';
+ return StringUtil.simpleClassName(this) + "(data: " + contentToString() + ')';
}
@Override
diff --git a/transport-sctp/src/main/java/io/netty/channel/sctp/SctpMessage.java b/transport-sctp/src/main/java/io/netty/channel/sctp/SctpMessage.java
index d602ee50e14..f809999d5d9 100644
--- a/transport-sctp/src/main/java/io/netty/channel/sctp/SctpMessage.java
+++ b/transport-sctp/src/main/java/io/netty/channel/sctp/SctpMessage.java
@@ -17,7 +17,6 @@
import com.sun.nio.sctp.MessageInfo;
import io.netty.buffer.ByteBuf;
-import io.netty.buffer.ByteBufUtil;
import io.netty.buffer.DefaultByteBufHolder;
/**
@@ -191,15 +190,9 @@ public SctpMessage touch(Object hint) {
@Override
public String toString() {
- if (refCnt() == 0) {
- return "SctpFrame{" +
- "streamIdentifier=" + streamIdentifier + ", protocolIdentifier=" + protocolIdentifier +
- ", unordered=" + unordered +
- ", data=(FREED)}";
- }
return "SctpFrame{" +
- "streamIdentifier=" + streamIdentifier + ", protocolIdentifier=" + protocolIdentifier +
- ", unordered=" + unordered +
- ", data=" + ByteBufUtil.hexDump(content()) + '}';
+ "streamIdentifier=" + streamIdentifier + ", protocolIdentifier=" + protocolIdentifier +
+ ", unordered=" + unordered +
+ ", data=" + contentToString() + '}';
}
}
| diff --git a/buffer/src/test/java/io/netty/buffer/DefaultByteBufHolderTest.java b/buffer/src/test/java/io/netty/buffer/DefaultByteBufHolderTest.java
new file mode 100644
index 00000000000..0b462c4f541
--- /dev/null
+++ b/buffer/src/test/java/io/netty/buffer/DefaultByteBufHolderTest.java
@@ -0,0 +1,32 @@
+/*
+ * Copyright 2015 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License,
+ * version 2.0 (the "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+package io.netty.buffer;
+
+import org.junit.Test;
+
+import static org.junit.Assert.*;
+
+public class DefaultByteBufHolderTest {
+
+ @Test
+ public void testToString() {
+ ByteBufHolder holder = new DefaultByteBufHolder(Unpooled.buffer());
+ assertEquals(1, holder.refCnt());
+ assertNotNull(holder.toString());
+ assertTrue(holder.release());
+ assertNotNull(holder.toString());
+ }
+}
| test | train | 2015-10-09T19:43:12 | 2015-10-07T12:25:15Z | jroper | val |
netty/netty/4313_4345 | netty/netty | netty/netty/4313 | netty/netty/4345 | [
"timestamp(timedelta=31.0, similarity=0.9040483003970246)"
] | 99dfc9ea799348430a1c25776ce30a95bc10a1ff | 4b5959472213c36d3590ac02ceb1288689bc19cc | [
"Will check\n\n> Am 03.10.2015 um 13:26 schrieb scf37 notifications@github.com:\n> \n> There is interesting comment:\n> \n> // Maybe we could also check if we can unwrap() to access the wrapped buffer which\n> // may be an AbstractByteBuf. But this may be overkill so let us keep it simple for now.\n> But - simple H... | [] | 2015-10-10T05:38:48Z | [
"improvement"
] | ByteBufUtil.writeUtf8 and WrappedByteBuf | There is interesting comment:
```
// Maybe we could also check if we can unwrap() to access the wrapped buffer which
// may be an AbstractByteBuf. But this may be overkill so let us keep it simple for now.
```
But a simple HTTP server under load (around 5k req/sec) spends 38% of its time in String.getBytes. I use ByteBufUtil.writeUtf8 to write the HTTP response (about 10 KB).
Please consider adding that check. An additional instanceof is cheap these days and does matter in some cases.
I could call ByteBuf.unwrap() myself to get to the AbstractByteBuf, but is that safe?
Netty 4.0.32.Final, ResourceLeakDetector is on SIMPLE.
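For illustration, here is a self-contained sketch (no Netty types; names are made up) of the per-character UTF-8 fast path that the instanceof check unlocks, i.e. what runs instead of String.getBytes once an unwrapped buffer is reached. Surrogate pairs are deliberately not handled, matching the 1/2/3-byte branches in ByteBufUtil; the assumption is BMP-only input.

```java
final class Utf8FastPath {
    // Writes seq as UTF-8 into out starting at index 0 and returns the byte count.
    // Assumes out has room for seq.length() * 3 bytes, mirroring ensureWritable(maxSize).
    static int writeUtf8(byte[] out, CharSequence seq) {
        int w = 0;
        for (int i = 0; i < seq.length(); i++) {
            char c = seq.charAt(i);
            if (c < 0x80) {
                // 1 byte: ASCII
                out[w++] = (byte) c;
            } else if (c < 0x800) {
                // 2 bytes
                out[w++] = (byte) (0xc0 | (c >> 6));
                out[w++] = (byte) (0x80 | (c & 0x3f));
            } else {
                // 3 bytes (surrogate pairs intentionally not handled)
                out[w++] = (byte) (0xe0 | (c >> 12));
                out[w++] = (byte) (0x80 | ((c >> 6) & 0x3f));
                out[w++] = (byte) (0x80 | (c & 0x3f));
            }
        }
        return w;
    }
}
```

The unwrap part of the request is then just a loop: while the buffer is a known wrapper type, replace it with `unwrap()` and retry the instanceof check, falling back to `String.getBytes` only when no fast path is reachable.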
| [
"buffer/src/main/java/io/netty/buffer/ByteBufUtil.java",
"buffer/src/main/java/io/netty/buffer/WrappedByteBuf.java"
] | [
"buffer/src/main/java/io/netty/buffer/ByteBufUtil.java",
"buffer/src/main/java/io/netty/buffer/WrappedByteBuf.java"
] | [
"buffer/src/test/java/io/netty/buffer/ByteBufUtilTest.java"
] | diff --git a/buffer/src/main/java/io/netty/buffer/ByteBufUtil.java b/buffer/src/main/java/io/netty/buffer/ByteBufUtil.java
index 7dc375fbb47..08c747dfcf3 100644
--- a/buffer/src/main/java/io/netty/buffer/ByteBufUtil.java
+++ b/buffer/src/main/java/io/netty/buffer/ByteBufUtil.java
@@ -395,39 +395,46 @@ public static int writeUtf8(ByteBuf buf, CharSequence seq) {
final int len = seq.length();
final int maxSize = len * 3;
buf.ensureWritable(maxSize);
- if (buf instanceof AbstractByteBuf) {
- // Fast-Path
- AbstractByteBuf buffer = (AbstractByteBuf) buf;
- int oldWriterIndex = buffer.writerIndex;
- int writerIndex = oldWriterIndex;
-
- // We can use the _set methods as these not need to do any index checks and reference checks.
- // This is possible as we called ensureWritable(...) before.
- for (int i = 0; i < len; i++) {
- char c = seq.charAt(i);
- if (c < 0x80) {
- buffer._setByte(writerIndex++, (byte) c);
- } else if (c < 0x800) {
- buffer._setByte(writerIndex++, (byte) (0xc0 | (c >> 6)));
- buffer._setByte(writerIndex++, (byte) (0x80 | (c & 0x3f)));
- } else {
- buffer._setByte(writerIndex++, (byte) (0xe0 | (c >> 12)));
- buffer._setByte(writerIndex++, (byte) (0x80 | ((c >> 6) & 0x3f)));
- buffer._setByte(writerIndex++, (byte) (0x80 | (c & 0x3f)));
- }
+
+ for (;;) {
+ if (buf instanceof AbstractByteBuf) {
+ return writeUtf8((AbstractByteBuf) buf, seq, len);
+ } else if (buf instanceof WrappedByteBuf) {
+ // Unwrap as the wrapped buffer may be an AbstractByteBuf and so we can use fast-path.
+ buf = buf.unwrap();
+ } else {
+ byte[] bytes = seq.toString().getBytes(CharsetUtil.UTF_8);
+ buf.writeBytes(bytes);
+ return bytes.length;
}
- // update the writerIndex without any extra checks for performance reasons
- buffer.writerIndex = writerIndex;
- return writerIndex - oldWriterIndex;
- } else {
- // Maybe we could also check if we can unwrap() to access the wrapped buffer which
- // may be an AbstractByteBuf. But this may be overkill so let us keep it simple for now.
- byte[] bytes = seq.toString().getBytes(CharsetUtil.UTF_8);
- buf.writeBytes(bytes);
- return bytes.length;
}
}
+ // Fast-Path implementation
+ private static int writeUtf8(AbstractByteBuf buffer, CharSequence seq, int len) {
+ int oldWriterIndex = buffer.writerIndex;
+ int writerIndex = oldWriterIndex;
+
+ // We can use the _set methods as these not need to do any index checks and reference checks.
+ // This is possible as we called ensureWritable(...) before.
+ for (int i = 0; i < len; i++) {
+ char c = seq.charAt(i);
+ if (c < 0x80) {
+ buffer._setByte(writerIndex++, (byte) c);
+ } else if (c < 0x800) {
+ buffer._setByte(writerIndex++, (byte) (0xc0 | (c >> 6)));
+ buffer._setByte(writerIndex++, (byte) (0x80 | (c & 0x3f)));
+ } else {
+ buffer._setByte(writerIndex++, (byte) (0xe0 | (c >> 12)));
+ buffer._setByte(writerIndex++, (byte) (0x80 | ((c >> 6) & 0x3f)));
+ buffer._setByte(writerIndex++, (byte) (0x80 | (c & 0x3f)));
+ }
+ }
+ // update the writerIndex without any extra checks for performance reasons
+ buffer.writerIndex = writerIndex;
+ return writerIndex - oldWriterIndex;
+ }
+
/**
* Encode a {@link CharSequence} in <a href="http://en.wikipedia.org/wiki/ASCII">ASCII</a> and write it
* to a {@link ByteBuf}.
@@ -444,26 +451,33 @@ public static int writeAscii(ByteBuf buf, CharSequence seq) {
// ASCII uses 1 byte per char
final int len = seq.length();
buf.ensureWritable(len);
- if (buf instanceof AbstractByteBuf) {
- // Fast-Path
- AbstractByteBuf buffer = (AbstractByteBuf) buf;
- int writerIndex = buffer.writerIndex;
-
- // We can use the _set methods as these not need to do any index checks and reference checks.
- // This is possible as we called ensureWritable(...) before.
- for (int i = 0; i < len; i++) {
- buffer._setByte(writerIndex++, (byte) seq.charAt(i));
+ for (;;) {
+ if (buf instanceof AbstractByteBuf) {
+ writeAscii((AbstractByteBuf) buf, seq, len);
+ break;
+ } else if (buf instanceof WrappedByteBuf) {
+ // Unwrap as the wrapped buffer may be an AbstractByteBuf and so we can use fast-path.
+ buf = buf.unwrap();
+ } else {
+ buf.writeBytes(seq.toString().getBytes(CharsetUtil.US_ASCII));
}
- // update the writerIndex without any extra checks for performance reasons
- buffer.writerIndex = writerIndex;
- } else {
- // Maybe we could also check if we can unwrap() to access the wrapped buffer which
- // may be an AbstractByteBuf. But this may be overkill so let us keep it simple for now.
- buf.writeBytes(seq.toString().getBytes(CharsetUtil.US_ASCII));
}
return len;
}
+ // Fast-Path implementation
+ private static void writeAscii(AbstractByteBuf buffer, CharSequence seq, int len) {
+ int writerIndex = buffer.writerIndex;
+
+ // We can use the _set methods as these not need to do any index checks and reference checks.
+ // This is possible as we called ensureWritable(...) before.
+ for (int i = 0; i < len; i++) {
+ buffer._setByte(writerIndex++, (byte) seq.charAt(i));
+ }
+ // update the writerIndex without any extra checks for performance reasons
+ buffer.writerIndex = writerIndex;
+ }
+
/**
* Encode the given {@link CharBuffer} using the given {@link Charset} into a new {@link ByteBuf} which
* is allocated via the {@link ByteBufAllocator}.
diff --git a/buffer/src/main/java/io/netty/buffer/WrappedByteBuf.java b/buffer/src/main/java/io/netty/buffer/WrappedByteBuf.java
index e8eb093aa2e..de9f92cb195 100644
--- a/buffer/src/main/java/io/netty/buffer/WrappedByteBuf.java
+++ b/buffer/src/main/java/io/netty/buffer/WrappedByteBuf.java
@@ -27,6 +27,13 @@
import java.nio.channels.ScatteringByteChannel;
import java.nio.charset.Charset;
+/**
+ * Wraps another {@link ByteBuf}.
+ *
+ * It's important that the {@link #readerIndex()} and {@link #writerIndex()} will not do any adjustments on the
+ * indices on the fly because of internal optimizations made by {@link ByteBufUtil#writeAscii(ByteBuf, CharSequence)}
+ * and {@link ByteBufUtil#writeUtf8(ByteBuf, CharSequence)}.
+ */
class WrappedByteBuf extends ByteBuf {
protected final ByteBuf buf;
@@ -39,17 +46,17 @@ protected WrappedByteBuf(ByteBuf buf) {
}
@Override
- public boolean hasMemoryAddress() {
+ public final boolean hasMemoryAddress() {
return buf.hasMemoryAddress();
}
@Override
- public long memoryAddress() {
+ public final long memoryAddress() {
return buf.memoryAddress();
}
@Override
- public int capacity() {
+ public final int capacity() {
return buf.capacity();
}
@@ -60,17 +67,17 @@ public ByteBuf capacity(int newCapacity) {
}
@Override
- public int maxCapacity() {
+ public final int maxCapacity() {
return buf.maxCapacity();
}
@Override
- public ByteBufAllocator alloc() {
+ public final ByteBufAllocator alloc() {
return buf.alloc();
}
@Override
- public ByteOrder order() {
+ public final ByteOrder order() {
return buf.order();
}
@@ -80,33 +87,33 @@ public ByteBuf order(ByteOrder endianness) {
}
@Override
- public ByteBuf unwrap() {
+ public final ByteBuf unwrap() {
return buf;
}
@Override
- public boolean isDirect() {
+ public final boolean isDirect() {
return buf.isDirect();
}
@Override
- public int readerIndex() {
+ public final int readerIndex() {
return buf.readerIndex();
}
@Override
- public ByteBuf readerIndex(int readerIndex) {
+ public final ByteBuf readerIndex(int readerIndex) {
buf.readerIndex(readerIndex);
return this;
}
@Override
- public int writerIndex() {
+ public final int writerIndex() {
return buf.writerIndex();
}
@Override
- public ByteBuf writerIndex(int writerIndex) {
+ public final ByteBuf writerIndex(int writerIndex) {
buf.writerIndex(writerIndex);
return this;
}
@@ -118,56 +125,56 @@ public ByteBuf setIndex(int readerIndex, int writerIndex) {
}
@Override
- public int readableBytes() {
+ public final int readableBytes() {
return buf.readableBytes();
}
@Override
- public int writableBytes() {
+ public final int writableBytes() {
return buf.writableBytes();
}
@Override
- public int maxWritableBytes() {
+ public final int maxWritableBytes() {
return buf.maxWritableBytes();
}
@Override
- public boolean isReadable() {
+ public final boolean isReadable() {
return buf.isReadable();
}
@Override
- public boolean isWritable() {
+ public final boolean isWritable() {
return buf.isWritable();
}
@Override
- public ByteBuf clear() {
+ public final ByteBuf clear() {
buf.clear();
return this;
}
@Override
- public ByteBuf markReaderIndex() {
+ public final ByteBuf markReaderIndex() {
buf.markReaderIndex();
return this;
}
@Override
- public ByteBuf resetReaderIndex() {
+ public final ByteBuf resetReaderIndex() {
buf.resetReaderIndex();
return this;
}
@Override
- public ByteBuf markWriterIndex() {
+ public final ByteBuf markWriterIndex() {
buf.markWriterIndex();
return this;
}
@Override
- public ByteBuf resetWriterIndex() {
+ public final ByteBuf resetWriterIndex() {
buf.resetWriterIndex();
return this;
}
@@ -801,17 +808,17 @@ public ByteBuf retain() {
}
@Override
- public boolean isReadable(int size) {
+ public final boolean isReadable(int size) {
return buf.isReadable(size);
}
@Override
- public boolean isWritable(int size) {
+ public final boolean isWritable(int size) {
return buf.isWritable(size);
}
@Override
- public int refCnt() {
+ public final int refCnt() {
return buf.refCnt();
}
| diff --git a/buffer/src/test/java/io/netty/buffer/ByteBufUtilTest.java b/buffer/src/test/java/io/netty/buffer/ByteBufUtilTest.java
index fb04c78ce71..1725fb525ac 100644
--- a/buffer/src/test/java/io/netty/buffer/ByteBufUtilTest.java
+++ b/buffer/src/test/java/io/netty/buffer/ByteBufUtilTest.java
@@ -33,6 +33,19 @@ public void testWriteUsAscii() {
Assert.assertEquals(buf, buf2);
}
+ @Test
+ public void testWriteUsAsciiWrapped() {
+ String usAscii = "NettyRocks";
+ ByteBuf buf = Unpooled.unreleasableBuffer(ReferenceCountUtil.releaseLater(Unpooled.buffer(16)));
+ assertWrapped(buf);
+ buf.writeBytes(usAscii.getBytes(CharsetUtil.US_ASCII));
+ ByteBuf buf2 = Unpooled.unreleasableBuffer(ReferenceCountUtil.releaseLater(Unpooled.buffer(16)));
+ assertWrapped(buf2);
+ ByteBufUtil.writeAscii(buf2, usAscii);
+
+ Assert.assertEquals(buf, buf2);
+ }
+
@Test
public void testWriteUtf8() {
String usAscii = "Some UTF-8 like äÄ∏ŒŒ";
@@ -43,4 +56,21 @@ public void testWriteUtf8() {
Assert.assertEquals(buf, buf2);
}
+
+ @Test
+ public void testWriteUtf8Wrapped() {
+ String usAscii = "Some UTF-8 like äÄ∏ŒŒ";
+ ByteBuf buf = Unpooled.unreleasableBuffer(ReferenceCountUtil.releaseLater(Unpooled.buffer(16)));
+ assertWrapped(buf);
+ buf.writeBytes(usAscii.getBytes(CharsetUtil.UTF_8));
+ ByteBuf buf2 = Unpooled.unreleasableBuffer(ReferenceCountUtil.releaseLater(Unpooled.buffer(16)));
+ assertWrapped(buf2);
+ ByteBufUtil.writeUtf8(buf2, usAscii);
+
+ Assert.assertEquals(buf, buf2);
+ }
+
+ private static void assertWrapped(ByteBuf buf) {
+ Assert.assertTrue(buf instanceof WrappedByteBuf);
+ }
}
| train | train | 2015-10-07T14:15:14 | 2015-10-03T11:26:10Z | scf37 | val |
netty/netty/4357_4367 | netty/netty | netty/netty/4357 | netty/netty/4367 | [
"timestamp(timedelta=15.0, similarity=0.8927650762146052)"
] | 0528118669fd08244cdf84848a96eba83d7651d5 | c1276962f713022908ba3fd45485bc5676a83195 | [
"Will check\n",
"@luengnat ok I think i know where the problem is... working on a fix now :+1: \n",
"Fixed\n",
"Oddly I started encountering this issue while upgrading from jdk 8u60 to 8u65. Not really sure why that would explain this. I was just curious @luengnat do you happen to remember what JDK you were... | [] | 2015-10-16T18:16:34Z | [
"defect"
] | Assertion error in globalEventExecutor | I am using Vertx 3.1 with Netty 4.0.31. Every once in a while, I got an assert on this line:
```
protected final Runnable pollScheduledTask(long nanoTime) {
assert this.inEventLoop();
Queue scheduledTaskQueue = this.scheduledTaskQueue;
ScheduledFutureTask scheduledTask = scheduledTaskQueue == null?null:(ScheduledFutureTask)scheduledTaskQueue.peek();
if(scheduledTask == null) {
return null;
} else if(scheduledTask.deadlineNanos() <= nanoTime) {
scheduledTaskQueue.remove();
return scheduledTask;
} else {
return null;
}
}
```
with the following stack trace:
"globalEventExecutor-1-1"@4,148 in group "main": RUNNING
pollScheduledTask():83, AbstractScheduledEventExecutor {io.netty.util.concurrent}
fetchFromScheduledTaskQueue():114, GlobalEventExecutor {io.netty.util.concurrent}
takeTask():99, GlobalEventExecutor {io.netty.util.concurrent}
run():230, GlobalEventExecutor$TaskRunner {io.netty.util.concurrent}
run():137, DefaultThreadFactory$DefaultRunnableDecorator {io.netty.util.concurrent}
run():745, Thread {java.lang}
The thread that starts GlobalEventExecutor is
"main"@1 in group "main": RUNNING
<clinit>():37, GlobalEventExecutor {io.netty.util.concurrent}
<init>():35, MultithreadEventExecutorGroup {io.netty.util.concurrent}
<init>():49, MultithreadEventLoopGroup {io.netty.channel}
<init>():61, NioEventLoopGroup {io.netty.channel.nio}
<init>():52, NioEventLoopGroup {io.netty.channel.nio}
<init>():125, VertxImpl {io.vertx.core.impl}
<init>():114, VertxImpl {io.vertx.core.impl}
<init>():110, VertxImpl {io.vertx.core.impl}
vertx():34, VertxFactoryImpl {io.vertx.core.impl}
vertx():78, Vertx {io.vertx.core}
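The cause visible in this record's patch is a publication race in `GlobalEventExecutor.startThread()`: the runner thread was started before the `thread` field was assigned, so the freshly started `TaskRunner` could call `inEventLoop()` while the field still pointed nowhere and trip the assertion. A minimal editorial sketch of that ordering (hypothetical `StartBeforeAssignDemo` class, not Netty code) — a latch forces the worker's check to run before the assignment, which is exactly the window the report hit nondeterministically:

```java
import java.util.concurrent.CountDownLatch;

public class StartBeforeAssignDemo {
    static volatile Thread owner; // stands in for GlobalEventExecutor.thread

    /** Buggy order: start the worker first, publish the reference afterwards. */
    static boolean workerSeesItselfAsOwner() {
        final boolean[] saw = new boolean[1];
        final CountDownLatch checked = new CountDownLatch(1);
        Thread t = new Thread(() -> {
            saw[0] = Thread.currentThread() == owner; // inEventLoop()-style check
            checked.countDown();
        });
        try {
            t.start();        // worker is already running...
            checked.await();  // ...and has done its check by the time we get here
            owner = t;        // too late; moving this line above start() is the fix
            t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return saw[0];
    }

    public static void main(String[] args) {
        System.out.println("worker saw itself as owner: " + workerSeesItselfAsOwner());
    }
}
```

The merged fix assigns the field before `Thread#start()`; since `start()` establishes a happens-before edge for writes made prior to it, the worker then reliably sees itself as the owner.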
| [
"common/src/main/java/io/netty/util/concurrent/GlobalEventExecutor.java"
] | [
"common/src/main/java/io/netty/util/concurrent/GlobalEventExecutor.java"
] | [] | diff --git a/common/src/main/java/io/netty/util/concurrent/GlobalEventExecutor.java b/common/src/main/java/io/netty/util/concurrent/GlobalEventExecutor.java
index 5f18ed8dda7..29d7ad12d36 100644
--- a/common/src/main/java/io/netty/util/concurrent/GlobalEventExecutor.java
+++ b/common/src/main/java/io/netty/util/concurrent/GlobalEventExecutor.java
@@ -218,8 +218,11 @@ public void execute(Runnable task) {
private void startThread() {
if (started.compareAndSet(false, true)) {
Thread t = threadFactory.newThread(taskRunner);
- t.start();
+ // Set the thread before starting it as otherwise inEventLoop() may return false and so produce
+ // an assert error.
+ // See https://github.com/netty/netty/issues/4357
thread = t;
+ t.start();
}
}
| null | train | train | 2015-10-16T20:13:51 | 2015-10-14T04:16:22Z | luengnat | val |
netty/netty/4355_4387 | netty/netty | netty/netty/4355 | netty/netty/4387 | [
"timestamp(timedelta=11.0, similarity=0.8850444964418077)"
] | 40e0fbfcb6d278fa7a5e92c19debad7dd7b31e08 | f2dad1ae3f7a4502d5e577efef70e49cd07ef8ce | [
"@freels hmmm... s you would argue that either the user provides a full initialised TrustManagerFactory and not certs or none and the certs ?\n",
"Yes. This would match the behavior of JdkSslServerContext as well.\n\nIn my case, I am trying to provide a trust manager initialized with a root CA cert that a server ... | [] | 2015-10-23T20:01:47Z | [
"defect"
] | OpenSslServerContext reinitializes the provided TrustManagerFactory with the key cert chain. | AFAICT, this is incorrect behavior, as the provided trust manager factory should already be initialized.
| [
"handler/src/main/java/io/netty/handler/ssl/OpenSslServerContext.java"
] | [
"handler/src/main/java/io/netty/handler/ssl/OpenSslServerContext.java"
] | [] | diff --git a/handler/src/main/java/io/netty/handler/ssl/OpenSslServerContext.java b/handler/src/main/java/io/netty/handler/ssl/OpenSslServerContext.java
index 99fead914bb..e3bf3ac7e4f 100644
--- a/handler/src/main/java/io/netty/handler/ssl/OpenSslServerContext.java
+++ b/handler/src/main/java/io/netty/handler/ssl/OpenSslServerContext.java
@@ -367,16 +367,13 @@ public OpenSslServerContext(
throw new SSLException("failed to set certificate: " + keyCertChainFile + " and " + keyFile, e);
}
try {
- if (trustManagerFactory == null) {
+ if (trustCertChainFile != null) {
+ trustManagerFactory = buildTrustManagerFactory(trustCertChainFile, trustManagerFactory);
+ } else if (trustManagerFactory == null) {
// Mimic the way SSLContext.getInstance(KeyManager[], null, null) works
trustManagerFactory = TrustManagerFactory.getInstance(
TrustManagerFactory.getDefaultAlgorithm());
- }
- if (trustCertChainFile != null) {
- trustManagerFactory = buildTrustManagerFactory(trustCertChainFile, trustManagerFactory);
- } else {
- KeyStore ks = buildKeyStore(keyCertChainFile, keyFile, keyPassword);
- trustManagerFactory.init(ks);
+ trustManagerFactory.init((KeyStore) null);
}
final X509TrustManager manager = chooseTrustManager(trustManagerFactory.getTrustManagers());
@@ -484,16 +481,13 @@ void verify(OpenSslEngine engine, X509Certificate[] peerCerts, String auth)
}
}
try {
- if (trustManagerFactory == null) {
+ if (trustCertChain != null) {
+ trustManagerFactory = buildTrustManagerFactory(trustCertChain, trustManagerFactory);
+ } else if (trustManagerFactory == null) {
// Mimic the way SSLContext.getInstance(KeyManager[], null, null) works
trustManagerFactory = TrustManagerFactory.getInstance(
TrustManagerFactory.getDefaultAlgorithm());
- }
- if (trustCertChain != null) {
- trustManagerFactory = buildTrustManagerFactory(trustCertChain, trustManagerFactory);
- } else {
- KeyStore ks = buildKeyStore(keyCertChain, key, keyPassword.toCharArray());
- trustManagerFactory.init(ks);
+ trustManagerFactory.init((KeyStore) null);
}
final X509TrustManager manager = chooseTrustManager(trustManagerFactory.getTrustManagers());
| null | train | train | 2015-10-23T12:05:15 | 2015-10-14T00:38:57Z | freels | val |
netty/netty/4409_4410 | netty/netty | netty/netty/4409 | netty/netty/4410 | [
"timestamp(timedelta=112.0, similarity=0.9327689825970706)"
] | e4816952670c5d2f6e8c8ffbac8777af4ec459aa | fe7db0f99563ff028d4ac3b32b5edfc3e7f8ae41 | [
"I guess all the native transport related stuff should only be built/triggered when a Linux only maven profile is enabled.\n",
"yes, that's also how the pom file seems to be set up, but idk how this \"conditional dependency\" stuff works in maven.\n",
"aww i think I have a fix.\n",
"@buchgr @normanmaurer CI b... | [
"this is just cause I also got warnings, that referring to props without a `project` prefix is deprecated.\n"
] | 2015-10-29T10:56:14Z | [] | Build fails on OSX | just wanted to build Netty master (mvn clean install) on a clean machine and building `microbench` fails
```
Failed to execute goal on project netty-microbench:
Could not resolve dependencies for project io.netty:netty-microbench:jar:5.0.0.Alpha3-SNAPSHOT:
Failure to find io.netty:netty-transport-native-epoll:jar:osx-x86_64:5.0.0.Alpha3-SNAPSHOT
```
I am running on El Capitan, Java 1.8u60 and Maven 3.3.3.
I am not familiar enough with Maven to fix this quickly, but can look into it if none of you know of a quick resolution either.
| [
"microbench/pom.xml",
"pom.xml"
] | [
"microbench/pom.xml",
"pom.xml"
] | [] | diff --git a/microbench/pom.xml b/microbench/pom.xml
index f9946800489..087f99b0685 100644
--- a/microbench/pom.xml
+++ b/microbench/pom.xml
@@ -68,7 +68,6 @@
<groupId>${project.groupId}</groupId>
<artifactId>netty-transport-native-epoll</artifactId>
<version>${project.version}</version>
- <classifier>${epoll.classifier}</classifier>
</dependency>
<dependency>
<groupId>junit</groupId>
diff --git a/pom.xml b/pom.xml
index 222128a3fcb..fb67986e9e5 100644
--- a/pom.xml
+++ b/pom.xml
@@ -1196,11 +1196,11 @@
<archive>
<manifestEntries>
<Bundle-ManifestVersion>2</Bundle-ManifestVersion>
- <Bundle-Name>${name}</Bundle-Name>
- <Bundle-SymbolicName>${groupId}.${artifactId}.source</Bundle-SymbolicName>
- <Bundle-Vendor>${organization.name}</Bundle-Vendor>
+ <Bundle-Name>${project.name}</Bundle-Name>
+ <Bundle-SymbolicName>${project.groupId}.${project.artifactId}.source</Bundle-SymbolicName>
+ <Bundle-Vendor>${project.organization.name}</Bundle-Vendor>
<Bundle-Version>${parsedVersion.osgiVersion}</Bundle-Version>
- <Eclipse-SourceBundle>${groupId}.${artifactId};version="${parsedVersion.osgiVersion}";roots:="."</Eclipse-SourceBundle>
+ <Eclipse-SourceBundle>${project.groupId}.${project.artifactId};version="${parsedVersion.osgiVersion}";roots:="."</Eclipse-SourceBundle>
</manifestEntries>
</archive>
</configuration>
| null | train | train | 2015-10-28T21:55:40 | 2015-10-29T10:25:16Z | buchgr | val |
netty/netty/4395_4414 | netty/netty | netty/netty/4395 | netty/netty/4414 | [
"timestamp(timedelta=26.0, similarity=0.8603650312112185)"
] | c6474f92185fa2ec58c686cce0822983f2ec9af3 | 1ae7a5cc37e1ca58bcd77617e600b78cdaa7f1cd | [
"thanks for reporting @maxwindiff!\n",
"@Scottmitch Does this also fix https://github.com/netty/netty/issues/3450?\n",
"@buchgr yes I think so.\n",
"@buchgr - Based upon your description (same scenario that is described in this issue), I would think this should take care of #3450. Is it possible to verify?\n"... | [
"Static?\n",
"done\n",
"just a nit but do we want to also prefix the new methods with \"test\" as the others in the class ?\n",
"done\n"
] | 2015-10-29T17:23:21Z | [
"duplicate"
] | StackOverflowError when adding listener to DefaultPromise w/ ImmediateEventExecutor | This will produce a `StackOverflowError`:
``` java
@Test
public void testName() throws Exception {
DefaultPromise<Object> promise = new DefaultPromise<>(ImmediateEventExecutor.INSTANCE);
promise.addListener(f1 -> {
promise.addListener(f2 -> System.out.println("done"));
});
promise.tryFailure(new Exception());
}
```
The error may not show up in console but if you add an exception breakpoint in IntelliJ / Eclipse you'll see it.
Reproduces on 4.1.0.Beta7
| [
"common/src/main/java/io/netty/util/concurrent/DefaultPromise.java"
] | [
"common/src/main/java/io/netty/util/concurrent/DefaultPromise.java"
] | [
"common/src/test/java/io/netty/util/concurrent/DefaultPromiseTest.java"
] | diff --git a/common/src/main/java/io/netty/util/concurrent/DefaultPromise.java b/common/src/main/java/io/netty/util/concurrent/DefaultPromise.java
index 4aec5b53061..142b88d5573 100644
--- a/common/src/main/java/io/netty/util/concurrent/DefaultPromise.java
+++ b/common/src/main/java/io/netty/util/concurrent/DefaultPromise.java
@@ -836,7 +836,8 @@ private final class LateListeners extends ArrayDeque<GenericFutureListener<?>> i
@Override
public void run() {
- if (listeners == null) {
+ final EventExecutor executor = executor();
+ if (listeners == null || executor == ImmediateEventExecutor.INSTANCE) {
for (;;) {
GenericFutureListener<?> l = poll();
if (l == null) {
@@ -847,7 +848,7 @@ public void run() {
} else {
// Reschedule until the initial notification is done to avoid the race condition
// where the notification is made in an incorrect order.
- execute(executor(), this);
+ execute(executor, this);
}
}
}
| diff --git a/common/src/test/java/io/netty/util/concurrent/DefaultPromiseTest.java b/common/src/test/java/io/netty/util/concurrent/DefaultPromiseTest.java
index 26f304b09f3..ea7a56e6545 100644
--- a/common/src/test/java/io/netty/util/concurrent/DefaultPromiseTest.java
+++ b/common/src/test/java/io/netty/util/concurrent/DefaultPromiseTest.java
@@ -143,6 +143,38 @@ public void testListenerNotifyLater() throws Exception {
testListenerNotifyLater(2);
}
+ @Test(timeout = 2000)
+ public void testPromiseListenerAddWhenCompleteFailure() throws Exception {
+ testPromiseListenerAddWhenComplete(new RuntimeException());
+ }
+
+ @Test(timeout = 2000)
+ public void testPromiseListenerAddWhenCompleteSuccess() throws Exception {
+ testPromiseListenerAddWhenComplete(null);
+ }
+
+ private static void testPromiseListenerAddWhenComplete(Throwable cause) throws InterruptedException {
+ final CountDownLatch latch = new CountDownLatch(1);
+ final Promise<Void> promise = new DefaultPromise<Void>(ImmediateEventExecutor.INSTANCE);
+ promise.addListener(new FutureListener<Void>() {
+ @Override
+ public void operationComplete(Future<Void> future) throws Exception {
+ promise.addListener(new FutureListener<Void>() {
+ @Override
+ public void operationComplete(Future<Void> future) throws Exception {
+ latch.countDown();
+ }
+ });
+ }
+ });
+ if (cause == null) {
+ promise.setSuccess(null);
+ } else {
+ promise.setFailure(cause);
+ }
+ latch.await();
+ }
+
private static void testListenerNotifyLater(final int numListenersBefore) throws Exception {
EventExecutor executor = new TestEventExecutor();
int expectedCount = numListenersBefore + 2;
| test | train | 2015-10-29T16:23:25 | 2015-10-26T22:12:04Z | maxwindiff | val |
netty/netty/4402_4417 | netty/netty | netty/netty/4402 | netty/netty/4417 | [
"timestamp(timedelta=432.0, similarity=0.8530380492232348)"
] | 48a3ba86cf9a76609bd26535afe3392ddf85c4d2 | c75fab21e5cba6dffb78bcc7af640c10d7ee5a34 | [
"Thanks for reporting @ninja- !\n\n@normanmaurer - assigned to you.\n",
"@Scottmitch :smiley: I am _squeezing_ every last bit of top hot methods and found lots of interesting things already.\nbtw. recycler.get() was something like 6% for the work and 94% for the thread local madness...I'd use something separate f... | [] | 2015-10-29T20:36:58Z | [] | FastThreadLocal slower than jdk | ```
# Run complete. Total time: 00:01:21
Benchmark Mode Cnt Score Error Units
FastThreadLocalBenchmark.fastThreadLocal thrpt 20 50264,632 ± 6021,328 ops/s
FastThreadLocalBenchmark.jdkThreadLocalGet thrpt 20 83866,710 ± 6483,523 ops/s
Benchmark Mode Cnt Score Error Units
FastThreadLocalBenchmark.fastThreadLocal thrpt 20 33525,425 ± 2633,724 ops/s
FastThreadLocalBenchmark.jdkThreadLocalGet thrpt 20 65947,751 ± 4391,956 ops/s
```
it's slower for a number of reasons including casts, instanceof etc. but why is it here if jdk's one is faster on java 8?
<s>Maybe the current benchmark isn't exactly fair because it doesn't use the "fast threads" to test...but still it's off in "real tests"</s>
surprisingly the test executor is using fast threads so nevermind above line
| [
"microbench/src/main/java/io/netty/microbench/util/AbstractMicrobenchmark.java"
] | [
"microbench/src/main/java/io/netty/microbench/util/AbstractMicrobenchmark.java"
] | [] | diff --git a/microbench/src/main/java/io/netty/microbench/util/AbstractMicrobenchmark.java b/microbench/src/main/java/io/netty/microbench/util/AbstractMicrobenchmark.java
index 36816b16bca..c484d1cd5bb 100644
--- a/microbench/src/main/java/io/netty/microbench/util/AbstractMicrobenchmark.java
+++ b/microbench/src/main/java/io/netty/microbench/util/AbstractMicrobenchmark.java
@@ -36,8 +36,8 @@ public class AbstractMicrobenchmark extends AbstractMicrobenchmarkBase {
static {
final String[] customArgs = {
- "-Xms768m", "-Xmx768m", "-XX:MaxDirectMemorySize=768m", "-Dharness.executor=CUSTOM",
- "-Dharness.executor.class=AbstractMicrobenchmark$HarnessExecutor" };
+ "-Xms768m", "-Xmx768m", "-XX:MaxDirectMemorySize=768m", "-Djmh.executor=CUSTOM",
+ "-Djmh.executor.class=io.netty.microbench.util.AbstractMicrobenchmark$HarnessExecutor" };
JVM_ARGS = new String[BASE_JVM_ARGS.length + customArgs.length];
System.arraycopy(BASE_JVM_ARGS, 0, JVM_ARGS, 0, BASE_JVM_ARGS.length);
@@ -59,7 +59,6 @@ protected String[] jvmArgs() {
protected ChainedOptionsBuilder newOptionsBuilder() throws Exception {
ChainedOptionsBuilder runnerOptions = super.newOptionsBuilder();
-
if (getForks() > 0) {
runnerOptions.forks(getForks());
}
| null | val | train | 2015-10-29T19:38:59 | 2015-10-28T17:33:02Z | ninja- | val |
netty/netty/4442_4443 | netty/netty | netty/netty/4442 | netty/netty/4443 | [
"timestamp(timedelta=95230.0, similarity=0.8515996131608482)"
] | fd810e717865f5d6cda27910cf3cfa635388ef1d | 4cc2050f193b0ec30bff9e2ef4c0762999d44bdd | [
"@louiscryan - I think we can close this now?\n"
] | [] | 2015-11-05T21:25:36Z | [] | 4.0 HttpHeader.set(HttpHeader) will cause itself to be emptied when passed self | public HttpHeaders set(HttpHeaders headers) {
if (headers == null) {
throw new NullPointerException("headers");
}
clear();
for (Map.Entry<String, String> e: headers) {
add(e.getKey(), e.getValue());
}
return this;
}
This is in conflict with DefaultHttpHeaders, which will throw an exception when doing the same thing, so there is an inconsistency in the interface contract.
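The defect is easy to see in reduced form: because `set()` clears the receiver before copying, passing the instance to itself clears the very collection it is about to copy from, and the loop then copies nothing. Toy sketch (hypothetical `SelfSetDemo` class, not the real `HttpHeaders`):

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Minimal headers with the clear-then-copy set() quoted above. */
public class SelfSetDemo {
    final Map<String, String> entries = new LinkedHashMap<>();

    SelfSetDemo add(String name, String value) {
        entries.put(name, value);
        return this;
    }

    SelfSetDemo set(SelfSetDemo headers) {
        // Missing guard from the fix: if (headers == this) return this;
        entries.clear();                                  // also clears 'headers' when aliased
        for (Map.Entry<String, String> e : headers.entries.entrySet()) {
            add(e.getKey(), e.getValue());                // never runs for set(this)
        }
        return this;
    }

    static int sizeAfterSelfSet() {
        SelfSetDemo h = new SelfSetDemo().add("host", "example.com");
        h.set(h);
        return h.entries.size();                          // 0: the header was lost
    }

    public static void main(String[] args) {
        System.out.println("entries after set(this): " + sizeAfterSelfSet());
    }
}
```

The merged fix makes `set(this)` a no-op by guarding the clear-and-copy with `if (headers != this)`, the same resolution later applied to `DefaultHeaders` in #4447.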
| [
"codec-http/src/main/java/io/netty/handler/codec/http/DefaultHttpHeaders.java",
"codec-http/src/main/java/io/netty/handler/codec/http/HttpHeaders.java"
] | [
"codec-http/src/main/java/io/netty/handler/codec/http/DefaultHttpHeaders.java",
"codec-http/src/main/java/io/netty/handler/codec/http/HttpHeaders.java"
] | [
"codec-http/src/test/java/io/netty/handler/codec/http/HttpHeadersTest.java"
] | diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/DefaultHttpHeaders.java b/codec-http/src/main/java/io/netty/handler/codec/http/DefaultHttpHeaders.java
index 3b3d9eca61b..7e5b986f54a 100644
--- a/codec-http/src/main/java/io/netty/handler/codec/http/DefaultHttpHeaders.java
+++ b/codec-http/src/main/java/io/netty/handler/codec/http/DefaultHttpHeaders.java
@@ -77,15 +77,14 @@ public HttpHeaders add(HttpHeaders headers) {
@Override
public HttpHeaders set(HttpHeaders headers) {
if (headers instanceof DefaultHttpHeaders) {
- if (headers == this) {
- throw new IllegalArgumentException("can't add to itself.");
- }
- clear();
- DefaultHttpHeaders defaultHttpHeaders = (DefaultHttpHeaders) headers;
- HeaderEntry e = defaultHttpHeaders.head.after;
- while (e != defaultHttpHeaders.head) {
- add(e.key, e.value);
- e = e.after;
+ if (headers != this) {
+ clear();
+ DefaultHttpHeaders defaultHttpHeaders = (DefaultHttpHeaders) headers;
+ HeaderEntry e = defaultHttpHeaders.head.after;
+ while (e != defaultHttpHeaders.head) {
+ add(e.key, e.value);
+ e = e.after;
+ }
}
return this;
} else {
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/HttpHeaders.java b/codec-http/src/main/java/io/netty/handler/codec/http/HttpHeaders.java
index dd7a198bf7e..ab18acad1e5 100644
--- a/codec-http/src/main/java/io/netty/handler/codec/http/HttpHeaders.java
+++ b/codec-http/src/main/java/io/netty/handler/codec/http/HttpHeaders.java
@@ -1617,9 +1617,11 @@ public HttpHeaders set(HttpHeaders headers) {
if (headers == null) {
throw new NullPointerException("headers");
}
- clear();
- for (Map.Entry<String, String> e: headers) {
- add(e.getKey(), e.getValue());
+ if (headers != this) {
+ clear();
+ for (Map.Entry<String, String> e : headers) {
+ add(e.getKey(), e.getValue());
+ }
}
return this;
}
| diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/HttpHeadersTest.java b/codec-http/src/test/java/io/netty/handler/codec/http/HttpHeadersTest.java
index e2eadc33cab..21827c7747f 100644
--- a/codec-http/src/test/java/io/netty/handler/codec/http/HttpHeadersTest.java
+++ b/codec-http/src/test/java/io/netty/handler/codec/http/HttpHeadersTest.java
@@ -74,9 +74,12 @@ public void testAddSelf() {
headers.add(headers);
}
- @Test(expected = IllegalArgumentException.class)
- public void testSetSelf() {
+ @Test
+ public void testSetSelfIsNoOp() {
HttpHeaders headers = new DefaultHttpHeaders(false);
+ headers.add("some", "thing");
headers.set(headers);
+ Assert.assertEquals(1, headers.entries().size());
+ Assert.assertEquals("thing", headers.get("some"));
}
}
| train | train | 2015-11-05T08:51:59 | 2015-11-05T21:01:58Z | louiscryan | val |
netty/netty/4444_4445 | netty/netty | netty/netty/4444 | netty/netty/4445 | [
"timestamp(timedelta=20815.0, similarity=0.9127999973381334)"
] | 202b2dbc89439862c43caf53877cb4bff77c35a4 | fa6b765662381172f6e675b5ac82fa081098548f | [
"@louiscryan can be closed ?\n",
"@louiscryan - Thanks for the fix!\n"
] | [] | 2015-11-05T22:10:04Z | [] | DefaultHttp2Headers has memory leak when being cleared | When calling DefaultHttp2Headers.clear the firstNonPseudoHeaders reference holds onto data that has just been cleared
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Headers.java"
] | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Headers.java"
] | [
"codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2HeadersTest.java"
] | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Headers.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Headers.java
index 7a4bfe60435..69b1e17e079 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Headers.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Headers.java
@@ -90,6 +90,12 @@ public DefaultHttp2Headers(boolean validate) {
validate ? HTTP2_NAME_VALIDATOR : NameValidator.NOT_NULL);
}
+ @Override
+ public Http2Headers clear() {
+ this.firstNonPseudo = head;
+ return super.clear();
+ }
+
@Override
public Http2Headers method(CharSequence value) {
set(PseudoHeaderName.METHOD.value(), value);
| diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2HeadersTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2HeadersTest.java
index 5885ea153ce..eb187c9d4ad 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2HeadersTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2HeadersTest.java
@@ -22,6 +22,7 @@
import java.util.Map.Entry;
+import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertTrue;
@@ -74,6 +75,16 @@ public void testHeaderNameValidation() {
headers.add(of("Foo"), of("foo"));
}
+ @Test
+ public void testClearResetsPseudoHeaderDivision() {
+ DefaultHttp2Headers http2Headers = new DefaultHttp2Headers();
+ http2Headers.method("POST");
+ http2Headers.set("some", "value");
+ http2Headers.clear();
+ http2Headers.method("GET");
+ assertEquals(1, http2Headers.names().size());
+ }
+
private static void verifyAllPseudoHeadersPresent(Http2Headers headers) {
for (PseudoHeaderName pseudoName : PseudoHeaderName.values()) {
assertNotNull(headers.get(pseudoName.value()));
| train | train | 2015-11-05T08:51:37 | 2015-11-05T21:33:34Z | louiscryan | val |
netty/netty/4446_4447 | netty/netty | netty/netty/4446 | netty/netty/4447 | [
"timestamp(timedelta=20987.0, similarity=0.8665203016177547)"
] | 202b2dbc89439862c43caf53877cb4bff77c35a4 | 3d7bdc905ac6f84bfe108e10c5325c4179d8448e | [
"@louiscryan I think this can be closed ?\n",
"@louiscryan - thanks!\n"
] | [] | 2015-11-05T22:33:11Z | [] | Headers.set(self) and Headers.setAll(self) are inconsistent | The former will throw, the latter is a no-op.
| [
"codec/src/main/java/io/netty/handler/codec/DefaultHeaders.java"
] | [
"codec/src/main/java/io/netty/handler/codec/DefaultHeaders.java"
] | [
"codec-http/src/test/java/io/netty/handler/codec/http/HttpHeadersTest.java",
"codec/src/test/java/io/netty/handler/codec/DefaultHeadersTest.java"
] | diff --git a/codec/src/main/java/io/netty/handler/codec/DefaultHeaders.java b/codec/src/main/java/io/netty/handler/codec/DefaultHeaders.java
index 995a9268144..ead9f0a5562 100644
--- a/codec/src/main/java/io/netty/handler/codec/DefaultHeaders.java
+++ b/codec/src/main/java/io/netty/handler/codec/DefaultHeaders.java
@@ -539,7 +539,7 @@ public T setShort(K name, short value) {
public T set(Headers<? extends K, ? extends V, ?> headers) {
checkNotNull(headers, "headers");
if (headers == this) {
- throw new IllegalArgumentException("can't add to itself.");
+ return thisT();
}
clear();
if (headers instanceof DefaultHeaders) {
| diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/HttpHeadersTest.java b/codec-http/src/test/java/io/netty/handler/codec/http/HttpHeadersTest.java
index bfd93b5143d..a4ab3eb3335 100644
--- a/codec-http/src/test/java/io/netty/handler/codec/http/HttpHeadersTest.java
+++ b/codec-http/src/test/java/io/netty/handler/codec/http/HttpHeadersTest.java
@@ -76,9 +76,11 @@ public void testAddSelf() {
headers.add(headers);
}
- @Test(expected = IllegalArgumentException.class)
- public void testSetSelf() {
+ @Test
+ public void testSetSelfIsNoOp() {
HttpHeaders headers = new DefaultHttpHeaders(false);
+ headers.add("name", "value");
headers.set(headers);
+ assertEquals(1, headers.size());
}
}
diff --git a/codec/src/test/java/io/netty/handler/codec/DefaultHeadersTest.java b/codec/src/test/java/io/netty/handler/codec/DefaultHeadersTest.java
index 5f8472c3854..ceee0b80be6 100644
--- a/codec/src/test/java/io/netty/handler/codec/DefaultHeadersTest.java
+++ b/codec/src/test/java/io/netty/handler/codec/DefaultHeadersTest.java
@@ -390,9 +390,11 @@ public void testAddSelf() {
headers.add(headers);
}
- @Test(expected = IllegalArgumentException.class)
- public void testSetSelf() {
+ @Test
+ public void testSetSelfIsNoOp() {
TestDefaultHeaders headers = newInstance();
+ headers.add("name", "value");
headers.set(headers);
+ assertEquals(1, headers.size());
}
}
| train | train | 2015-11-05T08:51:37 | 2015-11-05T22:31:41Z | louiscryan | val |
netty/netty/3972_4455 | netty/netty | netty/netty/3972 | netty/netty/4455 | [
"timestamp(timedelta=22.0, similarity=0.873253803792159)"
] | 035053be4ad0e1961b434eedd9f93946ee119463 | 5e52720108c9b6a4a286c7354dfd982d3cb75295 | [
"I didn't expect people to produce such amount of load obviously. Let me get the query ID allocation done on per-server basis rather than on per-resolver basis.\n",
"Thanks. If you want to keep an array structure, perhaps size could be a configurable setting? In either case it should be possible to get the origin... | [
"We should not do this as this allows this to escape from the constructor and so we may see not fully initialise instances .\n",
"can't we use a ConcurrentMap here ?\n",
"We can't because we insert multiple entries.\n",
"I guess that's not gonna happen because `DnsQueryContext` is only accessed later when a m... | 2015-11-08T05:49:55Z | [
"defect"
] | DNS query ID space exhaustion | In running big DNS resolution loads with very high parallelism and logging turned on I'm running into query ID space exhaustion (logged messages below).
As I've reviewed the Netty DNS code, I see there is a hard limit of 65536 queries max (in reality a lot less because there would be too many collisions to obtain an ID even with less concurrency).
Wouldn't it make sense to replace the AtomicReferenceArray that holds the promises with a ConcurrentHashMap that could grow as large as necessary? It could also be keyed by the original DnsQuestion, making it _much_ easier to troubleshoot the cases when the query fails and the original question gets lost somewhere. Note, even the current code seems to be doing something wrong, as I have also received warnings about DNS responses with an unknown ID (details below).
Perhaps the choice of the promise collection between AtomicReferenceArray vs. ConcurrentHashMap could be exposed as a configuration option to the user, if the original behavior is worth preserving.
```
2015-07-02 13:39:53,041 WARN [epollEventLoopGroup-2-1] dns.DnsNameResolver: Received a DNS response with an unknown ID: 55787
```
```
2015-07-02 14:45:02,108 WARN [epollEventLoopGroup-2-1] concurrent.DefaultPromise: An exception was thrown by com.test.XXXXXXXXXXXXXXXX
java.lang.IllegalStateException: query ID space exhausted: DefaultDnsQuestion(eske.net IN MX)
at io.netty.resolver.dns.DnsQueryContext.allocateId(DnsQueryContext.java:92)
at io.netty.resolver.dns.DnsQueryContext.<init>(DnsQueryContext.java:70)
at io.netty.resolver.dns.DnsNameResolver.query0(DnsNameResolver.java:735)
at io.netty.resolver.dns.DnsNameResolver.query(DnsNameResolver.java:694)
at io.netty.resolver.dns.DnsNameResolver.query(DnsNameResolver.java:662)
...
2015-07-02 14:45:02,060 WARN [epollEventLoopGroup-2-1] concurrent.DefaultPromise: An exception was thrown by com.test.XXXXXXXXXXXXXXXX
java.lang.IllegalStateException: query ID space exhausted: DefaultDnsQuestion(talkingrockcommunications.com IN MX)
at io.netty.resolver.dns.DnsQueryContext.allocateId(DnsQueryContext.java:92)
at io.netty.resolver.dns.DnsQueryContext.<init>(DnsQueryContext.java:70)
at io.netty.resolver.dns.DnsNameResolver.query0(DnsNameResolver.java:735)
at io.netty.resolver.dns.DnsNameResolver.query(DnsNameResolver.java:694)
at io.netty.resolver.dns.DnsNameResolver.query(DnsNameResolver.java:662)
...
2015-07-02 14:45:01,529 WARN [epollEventLoopGroup-2-1] concurrent.DefaultPromise: An exception was thrown by com.test.XXXXXXXXXXXXXXXX
java.lang.IllegalStateException: query ID space exhausted: DefaultDnsQuestion(mailin2.allsupinc.com IN A)
at io.netty.resolver.dns.DnsQueryContext.allocateId(DnsQueryContext.java:92)
at io.netty.resolver.dns.DnsQueryContext.<init>(DnsQueryContext.java:70)
at io.netty.resolver.dns.DnsNameResolver.query0(DnsNameResolver.java:735)
at io.netty.resolver.dns.DnsNameResolver.query(DnsNameResolver.java:694)
at io.netty.resolver.dns.DnsNameResolver.query(DnsNameResolver.java:662)
```
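For context when skimming this record, the allocation scheme the stack traces above point at can be sketched as a fixed 65536-slot table probed linearly from a random start; once the table is saturated by in-flight queries, allocation fails with "query ID space exhausted". The class and method names below are illustrative stand-ins, not Netty's actual code:

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicReferenceArray;

// Simplified sketch of per-resolver query ID allocation: a fixed 65536-slot
// table, probed linearly from a random starting index. High concurrency means
// many slots are occupied, so probing degrades and can eventually fail.
public final class IdAllocatorSketch {
    private final AtomicReferenceArray<Object> slots = new AtomicReferenceArray<>(65536);

    int allocate(Object queryContext) {
        int id = ThreadLocalRandom.current().nextInt(slots.length());
        final int maxTries = slots.length() << 1;
        for (int tries = 0; tries < maxTries; tries++) {
            if (slots.compareAndSet(id, null, queryContext)) {
                return id; // claimed a free slot
            }
            id = id + 1 & 0xFFFF; // wrap around the 16-bit ID space
        }
        throw new IllegalStateException("query ID space exhausted");
    }

    void release(int id) {
        slots.set(id, null); // slot becomes reusable once the query completes
    }

    public static void main(String[] args) {
        IdAllocatorSketch a = new IdAllocatorSketch();
        int id = a.allocate(new Object());
        System.out.println(id >= 0 && id < 65536); // true
        a.release(id);
    }
}
```

Keying the tables per name server, as suggested in the maintainer comments, multiplies the usable ID space instead of enlarging this single shared table.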
| [
"resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolver.java",
"resolver-dns/src/main/java/io/netty/resolver/dns/DnsQueryContext.java"
] | [
"resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolver.java",
"resolver-dns/src/main/java/io/netty/resolver/dns/DnsQueryContext.java",
"resolver-dns/src/main/java/io/netty/resolver/dns/DnsQueryContextManager.java"
] | [] | diff --git a/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolver.java b/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolver.java
index 9c3b7a1d2bc..dce3e66d59b 100644
--- a/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolver.java
+++ b/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolver.java
@@ -37,7 +37,6 @@
import io.netty.resolver.SimpleNameResolver;
import io.netty.util.NetUtil;
import io.netty.util.ReferenceCountUtil;
-import io.netty.util.collection.IntObjectHashMap;
import io.netty.util.concurrent.FastThreadLocal;
import io.netty.util.concurrent.Future;
import io.netty.util.concurrent.Promise;
@@ -59,9 +58,8 @@
import java.util.Map.Entry;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.TimeUnit;
-import java.util.concurrent.atomic.AtomicReferenceArray;
-import static io.netty.util.internal.ObjectUtil.*;
+import static io.netty.util.internal.ObjectUtil.checkNotNull;
/**
* A DNS-based {@link NameResolver}.
@@ -95,11 +93,9 @@ public class DnsNameResolver extends SimpleNameResolver<InetSocketAddress> {
final DatagramChannel ch;
/**
- * An array whose index is the ID of a DNS query and whose value is the promise of the corresponsing response. We
- * don't use {@link IntObjectHashMap} or map-like data structure here because 64k elements are fairly small, which
- * is only about 512KB.
+ * Manages the {@link DnsQueryContext}s in progress and their query IDs.
*/
- final AtomicReferenceArray<DnsQueryContext> promises = new AtomicReferenceArray<DnsQueryContext>(65536);
+ final DnsQueryContextManager queryContextManager = new DnsQueryContextManager();
/**
* Cache for {@link #doResolve(InetSocketAddress, Promise)} and {@link #doResolveAll(InetSocketAddress, Promise)}.
@@ -937,7 +933,7 @@ public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception
logger.debug("{} RECEIVED: [{}: {}], {}", ch, queryId, res.sender(), res);
}
- final DnsQueryContext qCtx = promises.get(queryId);
+ final DnsQueryContext qCtx = queryContextManager.get(res.sender(), queryId);
if (qCtx == null) {
if (logger.isWarnEnabled()) {
logger.warn("{} Received a DNS response with an unknown ID: {}", ch, queryId);
diff --git a/resolver-dns/src/main/java/io/netty/resolver/dns/DnsQueryContext.java b/resolver-dns/src/main/java/io/netty/resolver/dns/DnsQueryContext.java
index 1d1d96c35c1..8217c82ea62 100644
--- a/resolver-dns/src/main/java/io/netty/resolver/dns/DnsQueryContext.java
+++ b/resolver-dns/src/main/java/io/netty/resolver/dns/DnsQueryContext.java
@@ -31,7 +31,6 @@
import io.netty.util.concurrent.ScheduledFuture;
import io.netty.util.internal.OneTimeTask;
import io.netty.util.internal.StringUtil;
-import io.netty.util.internal.ThreadLocalRandom;
import io.netty.util.internal.logging.InternalLogger;
import io.netty.util.internal.logging.InternalLoggerFactory;
@@ -60,9 +59,9 @@ final class DnsQueryContext {
this.nameServerAddr = nameServerAddr;
this.question = question;
this.promise = promise;
-
- id = allocateId();
recursionDesired = parent.isRecursionDesired();
+ id = parent.queryContextManager.add(this);
+
if (parent.isOptResourceEnabled()) {
optResource = new DefaultDnsRawRecord(
StringUtil.EMPTY_STRING, DnsRecordType.OPT, parent.maxPayloadSize(), 0, Unpooled.EMPTY_BUFFER);
@@ -71,25 +70,17 @@ final class DnsQueryContext {
}
}
- private int allocateId() {
- int id = ThreadLocalRandom.current().nextInt(parent.promises.length());
- final int maxTries = parent.promises.length() << 1;
- int tries = 0;
- for (;;) {
- if (parent.promises.compareAndSet(id, null, this)) {
- return id;
- }
-
- id = id + 1 & 0xFFFF;
+ InetSocketAddress nameServerAddr() {
+ return nameServerAddr;
+ }
- if (++ tries >= maxTries) {
- throw new IllegalStateException("query ID space exhausted: " + question);
- }
- }
+ DnsQuestion question() {
+ return question;
}
void query() {
- final DnsQuestion question = this.question;
+ final DnsQuestion question = question();
+ final InetSocketAddress nameServerAddr = nameServerAddr();
final DatagramDnsQuery query = new DatagramDnsQuery(null, nameServerAddr, id);
query.setRecursionDesired(recursionDesired);
query.setRecord(DnsSection.QUESTION, question);
@@ -159,13 +150,13 @@ public void run() {
}
void finish(AddressedEnvelope<? extends DnsResponse, InetSocketAddress> envelope) {
- DnsResponse res = envelope.content();
+ final DnsResponse res = envelope.content();
if (res.count(DnsSection.QUESTION) != 1) {
logger.warn("Received a DNS response with invalid number of questions: {}", envelope);
return;
}
- if (!question.equals(res.recordAt(DnsSection.QUESTION))) {
+ if (!question().equals(res.recordAt(DnsSection.QUESTION))) {
logger.warn("Received a mismatching DNS response: {}", envelope);
return;
}
@@ -174,7 +165,7 @@ void finish(AddressedEnvelope<? extends DnsResponse, InetSocketAddress> envelope
}
private void setSuccess(AddressedEnvelope<? extends DnsResponse, InetSocketAddress> envelope) {
- parent.promises.set(id, null);
+ parent.queryContextManager.remove(nameServerAddr(), id);
// Cancel the timeout task.
final ScheduledFuture<?> timeoutFuture = this.timeoutFuture;
@@ -192,7 +183,8 @@ private void setSuccess(AddressedEnvelope<? extends DnsResponse, InetSocketAddre
}
private void setFailure(String message, Throwable cause) {
- parent.promises.set(id, null);
+ final InetSocketAddress nameServerAddr = nameServerAddr();
+ parent.queryContextManager.remove(nameServerAddr, id);
final StringBuilder buf = new StringBuilder(message.length() + 64);
buf.append('[')
@@ -203,9 +195,9 @@ private void setFailure(String message, Throwable cause) {
final DnsNameResolverException e;
if (cause != null) {
- e = new DnsNameResolverException(nameServerAddr, question, buf.toString(), cause);
+ e = new DnsNameResolverException(nameServerAddr, question(), buf.toString(), cause);
} else {
- e = new DnsNameResolverException(nameServerAddr, question, buf.toString());
+ e = new DnsNameResolverException(nameServerAddr, question(), buf.toString());
}
promise.tryFailure(e);
diff --git a/resolver-dns/src/main/java/io/netty/resolver/dns/DnsQueryContextManager.java b/resolver-dns/src/main/java/io/netty/resolver/dns/DnsQueryContextManager.java
new file mode 100644
index 00000000000..9c3946c72fb
--- /dev/null
+++ b/resolver-dns/src/main/java/io/netty/resolver/dns/DnsQueryContextManager.java
@@ -0,0 +1,148 @@
+/*
+ * Copyright 2015 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License,
+ * version 2.0 (the "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.netty.resolver.dns;
+
+import io.netty.util.NetUtil;
+import io.netty.util.collection.IntObjectHashMap;
+import io.netty.util.collection.IntObjectMap;
+import io.netty.util.internal.ThreadLocalRandom;
+
+import java.net.Inet4Address;
+import java.net.Inet6Address;
+import java.net.InetAddress;
+import java.net.InetSocketAddress;
+import java.net.UnknownHostException;
+import java.util.HashMap;
+import java.util.Map;
+
+final class DnsQueryContextManager {
+
+ /**
+ * A map whose key is the DNS server address and value is the map of the DNS query ID and its corresponding
+ * {@link DnsQueryContext}.
+ */
+ final Map<InetSocketAddress, IntObjectMap<DnsQueryContext>> map =
+ new HashMap<InetSocketAddress, IntObjectMap<DnsQueryContext>>();
+
+ int add(DnsQueryContext qCtx) {
+ final IntObjectMap<DnsQueryContext> contexts = getOrCreateContextMap(qCtx.nameServerAddr());
+
+ int id = ThreadLocalRandom.current().nextInt(1, 65536);
+ final int maxTries = 65535 << 1;
+ int tries = 0;
+
+ synchronized (contexts) {
+ for (;;) {
+ if (!contexts.containsKey(id)) {
+ contexts.put(id, qCtx);
+ return id;
+ }
+
+ id = id + 1 & 0xFFFF;
+
+ if (++tries >= maxTries) {
+ throw new IllegalStateException("query ID space exhausted: " + qCtx.question());
+ }
+ }
+ }
+ }
+
+ DnsQueryContext get(InetSocketAddress nameServerAddr, int id) {
+ final IntObjectMap<DnsQueryContext> contexts = getContextMap(nameServerAddr);
+ final DnsQueryContext qCtx;
+ if (contexts != null) {
+ synchronized (contexts) {
+ qCtx = contexts.get(id);
+ }
+ } else {
+ qCtx = null;
+ }
+
+ return qCtx;
+ }
+
+ DnsQueryContext remove(InetSocketAddress nameServerAddr, int id) {
+ final IntObjectMap<DnsQueryContext> contexts = getContextMap(nameServerAddr);
+ if (contexts == null) {
+ return null;
+ }
+
+ synchronized (contexts) {
+ return contexts.remove(id);
+ }
+ }
+
+ private IntObjectMap<DnsQueryContext> getContextMap(InetSocketAddress nameServerAddr) {
+ synchronized (map) {
+ return map.get(nameServerAddr);
+ }
+ }
+
+ private IntObjectMap<DnsQueryContext> getOrCreateContextMap(InetSocketAddress nameServerAddr) {
+ synchronized (map) {
+ final IntObjectMap<DnsQueryContext> contexts = map.get(nameServerAddr);
+ if (contexts != null) {
+ return contexts;
+ }
+
+ final IntObjectMap<DnsQueryContext> newContexts = new IntObjectHashMap<DnsQueryContext>();
+ final InetAddress a = nameServerAddr.getAddress();
+ final int port = nameServerAddr.getPort();
+ map.put(nameServerAddr, newContexts);
+
+ if (a instanceof Inet4Address) {
+ // Also add the mapping for the IPv4-compatible IPv6 address.
+ final Inet4Address a4 = (Inet4Address) a;
+ if (a4.isLoopbackAddress()) {
+ map.put(new InetSocketAddress(NetUtil.LOCALHOST6, port), newContexts);
+ } else {
+ map.put(new InetSocketAddress(toCompatAddress(a4), port), newContexts);
+ }
+ } else if (a instanceof Inet6Address) {
+ // Also add the mapping for the IPv4 address if this IPv6 address is compatible.
+ final Inet6Address a6 = (Inet6Address) a;
+ if (a6.isLoopbackAddress()) {
+ map.put(new InetSocketAddress(NetUtil.LOCALHOST4, port), newContexts);
+ } else if (a6.isIPv4CompatibleAddress()) {
+ map.put(new InetSocketAddress(toIPv4Address(a6), port), newContexts);
+ }
+ }
+
+ return newContexts;
+ }
+ }
+
+ private static Inet6Address toCompatAddress(Inet4Address a4) {
+ byte[] b4 = a4.getAddress();
+ byte[] b6 = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, b4[0], b4[1], b4[2], b4[3] };
+ try {
+ return (Inet6Address) InetAddress.getByAddress(b6);
+ } catch (UnknownHostException e) {
+ throw new Error(e);
+ }
+ }
+
+ private static Inet4Address toIPv4Address(Inet6Address a6) {
+ byte[] b6 = a6.getAddress();
+ byte[] b4 = { b6[12], b6[13], b6[14], b6[15] };
+ try {
+ return (Inet4Address) InetAddress.getByAddress(b4);
+ } catch (UnknownHostException e) {
+ throw new Error(e);
+ }
+ }
+}
| null | train | train | 2015-11-07T19:15:20 | 2015-07-11T06:26:39Z | dmk23 | val |
netty/netty/4458_4463 | netty/netty | netty/netty/4458 | netty/netty/4463 | [
"timestamp(timedelta=30.0, similarity=0.9127479533480994)"
] | 120ffaf880b81c260141c9e46cec239ac235aeb9 | 14f1aa384bf33db413b3bd2e3f286a41be838625 | [
"No this is a bug\n\n> Am 08.11.2015 um 22:03 schrieb Brendt notifications@github.com:\n> \n> Netty Version: master (latest snapshot a6816bd)\n> \n> I've notice that if a DefaultFullHttpRequest or DefaultFullHttpResponse have been released, then calls to hashCode() will throw an IllegalRefCountException. I was wond... | [
"nit: `e` -> `ignore`\n",
"nit: `e` -> `ignore`\n"
] | 2015-11-09T20:02:40Z | [
"defect"
] | DefaultFullHttp(Request|Response).hashCode() throws IllegalRefCountException | Netty Version: master (latest snapshot a6816bd59ef2923d82af5b7a7f3b722507a56a3a)
I've noticed that if a `DefaultFullHttpRequest` or `DefaultFullHttpResponse` has been released, then calls to `hashCode()` will throw an `IllegalRefCountException` [here](https://github.com/netty/netty/blob/master/codec-http/src/main/java/io/netty/handler/codec/http/DefaultFullHttpRequest.java#L167) and [here](https://github.com/netty/netty/blob/master/codec-http/src/main/java/io/netty/handler/codec/http/DefaultFullHttpResponse.java#L170). I was wondering if this is considered the correct behaviour?
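A minimal standalone illustration of the failure mode and of the hash-caching approach the linked fix takes. The class, the `refCnt` bookkeeping, and the byte array standing in for a `ByteBuf` are simplified stand-ins, not Netty's types:

```java
import java.util.Arrays;

// The hash depends on ref-counted content, so computing it lazily after
// release fails; caching it while the content is live sidesteps the problem.
public final class CachedHashSketch {
    private Object content = new byte[] {1, 2, 3}; // stand-in for a ByteBuf
    private int refCnt = 1;
    private int hash; // 0 == not yet computed

    void release() {
        refCnt = 0;
        content = null;
    }

    @Override
    public int hashCode() {
        int h = hash;
        if (h == 0) {
            if (refCnt == 0) {
                // the reported failure mode: hashCode() on released content
                throw new IllegalStateException("refCnt: 0");
            }
            h = 31 + Arrays.hashCode((byte[]) content);
            hash = h; // cache while live
        }
        return h;
    }

    public static void main(String[] args) {
        CachedHashSketch m = new CachedHashSketch();
        int before = m.hashCode(); // computed and cached while live
        m.release();
        System.out.println(before == m.hashCode()); // true: cached value survives release
    }
}
```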
| [
"codec-http/src/main/java/io/netty/handler/codec/http/DefaultFullHttpRequest.java",
"codec-http/src/main/java/io/netty/handler/codec/http/DefaultFullHttpResponse.java"
] | [
"codec-http/src/main/java/io/netty/handler/codec/http/DefaultFullHttpRequest.java",
"codec-http/src/main/java/io/netty/handler/codec/http/DefaultFullHttpResponse.java"
] | [] | diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/DefaultFullHttpRequest.java b/codec-http/src/main/java/io/netty/handler/codec/http/DefaultFullHttpRequest.java
index 87b2cf5e125..2ca9329e7c6 100644
--- a/codec-http/src/main/java/io/netty/handler/codec/http/DefaultFullHttpRequest.java
+++ b/codec-http/src/main/java/io/netty/handler/codec/http/DefaultFullHttpRequest.java
@@ -17,15 +17,19 @@
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
+import io.netty.util.IllegalReferenceCountException;
/**
* Default implementation of {@link FullHttpRequest}.
*/
public class DefaultFullHttpRequest extends DefaultHttpRequest implements FullHttpRequest {
- private static final int HASH_CODE_PRIME = 31;
private final ByteBuf content;
private final HttpHeaders trailingHeader;
private final boolean validateHeaders;
+ /**
+ * Used to cache the value of the hash code and avoid {@link IllegalRefCountException}.
+ */
+ private int hash;
public DefaultFullHttpRequest(HttpVersion httpVersion, HttpMethod method, String uri) {
this(httpVersion, method, uri, Unpooled.buffer(0));
@@ -163,11 +167,23 @@ public FullHttpRequest duplicate() {
@Override
public int hashCode() {
- int result = 1;
- result = HASH_CODE_PRIME * result + content().hashCode();
- result = HASH_CODE_PRIME * result + trailingHeaders().hashCode();
- result = HASH_CODE_PRIME * result + super.hashCode();
- return result;
+ int hash = this.hash;
+ if (hash == 0) {
+ if (content().refCnt() != 0) {
+ try {
+ hash = 31 + content().hashCode();
+ } catch (IllegalReferenceCountException ignored) {
+ // Handle race condition between checking refCnt() == 0 and using the object.
+ hash = 31;
+ }
+ } else {
+ hash = 31;
+ }
+ hash = 31 * hash + trailingHeaders().hashCode();
+ hash = 31 * hash + super.hashCode();
+ this.hash = hash;
+ }
+ return hash;
}
@Override
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/DefaultFullHttpResponse.java b/codec-http/src/main/java/io/netty/handler/codec/http/DefaultFullHttpResponse.java
index 36ac176d640..5ad6058daf9 100644
--- a/codec-http/src/main/java/io/netty/handler/codec/http/DefaultFullHttpResponse.java
+++ b/codec-http/src/main/java/io/netty/handler/codec/http/DefaultFullHttpResponse.java
@@ -15,9 +15,11 @@
*/
package io.netty.handler.codec.http;
-import static io.netty.util.internal.ObjectUtil.checkNotNull;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
+import io.netty.util.IllegalReferenceCountException;
+
+import static io.netty.util.internal.ObjectUtil.checkNotNull;
/**
* Default implementation of a {@link FullHttpResponse}.
@@ -27,6 +29,10 @@ public class DefaultFullHttpResponse extends DefaultHttpResponse implements Full
private final ByteBuf content;
private final HttpHeaders trailingHeaders;
private final boolean validateHeaders;
+ /**
+ * Used to cache the value of the hash code and avoid {@link IllegalRefCountException}.
+ */
+ private int hash;
public DefaultFullHttpResponse(HttpVersion version, HttpResponseStatus status) {
this(version, status, Unpooled.buffer(0));
@@ -164,6 +170,40 @@ public FullHttpResponse duplicate() {
return duplicate;
}
+ @Override
+ public int hashCode() {
+ int hash = this.hash;
+ if (hash == 0) {
+ if (content().refCnt() != 0) {
+ try {
+ hash = 31 + content().hashCode();
+ } catch (IllegalReferenceCountException ignored) {
+ // Handle race condition between checking refCnt() == 0 and using the object.
+ hash = 31;
+ }
+ } else {
+ hash = 31;
+ }
+ hash = 31 * hash + trailingHeaders().hashCode();
+ hash = 31 * hash + super.hashCode();
+ this.hash = hash;
+ }
+ return hash;
+ }
+
+ @Override
+ public boolean equals(Object o) {
+ if (!(o instanceof DefaultFullHttpResponse)) {
+ return false;
+ }
+
+ DefaultFullHttpResponse other = (DefaultFullHttpResponse) o;
+
+ return super.equals(other) &&
+ content().equals(other.content()) &&
+ trailingHeaders().equals(other.trailingHeaders());
+ }
+
@Override
public String toString() {
return HttpMessageUtil.appendFullResponse(new StringBuilder(256), this).toString();
| null | train | train | 2015-11-10T00:25:13 | 2015-11-09T06:03:02Z | blucas | val |
netty/netty/4484_4510 | netty/netty | netty/netty/4484 | netty/netty/4510 | [
"timestamp(timedelta=175.0, similarity=0.9108850013441069)"
] | dbaeb3314e32b7f707fd4d2ea8ee78abb370ed59 | 32903a3d6dcbd0364e1900e1306d840cdb0f8da8 | [
"Will check soon\n\n> Am 18.11.2015 um 14:01 schrieb Stephane Landelle notifications@github.com:\n> \n> Netty 4.0.33 (client)\n> Jetty 9.3 (server)\n> OSX 10.11.1\n> Hotspot 1.8.0_66\n> AUTO_READ is set to false, read is manually triggered from channelActive and readComplete \n> Hi,\n> \n> I ran into an issue while... | [] | 2015-11-24T23:22:37Z | [
"cleanup"
] | ChannelOption.AUTO_CLOSE javadoc default value incorrect | - Netty 4.0.33 (client)
- Jetty 9.3 (server)
- OSX 10.11.1
- Hotspot 1.8.0_66
Hi,
I ran into an issue while trying to upgrade AsyncHttpClient test suite to Jetty 9.3, see https://github.com/AsyncHttpClient/async-http-client/issues/1035.
The test is about sending a small `ChunkedStream` to a Jetty server expecting Basic auth, without providing the Authorization header. In this case, Jetty will respond with 401 and then close the socket as soon as it receives the headers and realize Authorization is missing.
It seems that Netty (or JDK NIO?) randomly fails to read the HTTP response and directly jumps to closing the socket.
I checked with Wireshark, and Jetty indeed sends the expected 401 response before the FIN.
Please have a look at this reproducer (**client is pure Netty, not AHC**): https://github.com/slandelle/netty-missing-events-issue
Note that running `mvn test` from the terminal seems to increase the failure likelihood.
Regards
| [
"transport/src/main/java/io/netty/channel/ChannelOption.java"
] | [
"transport/src/main/java/io/netty/channel/ChannelOption.java"
] | [] | diff --git a/transport/src/main/java/io/netty/channel/ChannelOption.java b/transport/src/main/java/io/netty/channel/ChannelOption.java
index 08af650cfaa..1695631f50c 100644
--- a/transport/src/main/java/io/netty/channel/ChannelOption.java
+++ b/transport/src/main/java/io/netty/channel/ChannelOption.java
@@ -33,7 +33,6 @@
public class ChannelOption<T> extends AbstractConstant<ChannelOption<T>> {
private static final ConstantPool<ChannelOption<Object>> pool = new ConstantPool<ChannelOption<Object>>() {
- @SuppressWarnings("deprecation")
@Override
protected ChannelOption<Object> newConstant(int id, String name) {
return new ChannelOption<Object>(id, name);
@@ -91,8 +90,8 @@ public static <T> ChannelOption<T> newInstance(String name) {
/**
* @deprecated From version 5.0, {@link Channel} will not be closed on write failure.
*
- * {@code true} if and only if the {@link Channel} is closed automatically and immediately on write failure.
- * The default is {@code false}.
+ * If {@code true} then the {@link Channel} is closed automatically and immediately on write failure.
+ * The default value is {@code true}.
*/
@Deprecated
public static final ChannelOption<Boolean> AUTO_CLOSE = valueOf("AUTO_CLOSE");
| null | test | train | 2015-11-24T20:44:06 | 2015-11-18T22:01:00Z | slandelle | val |
netty/netty/4504_4524 | netty/netty | netty/netty/4504 | netty/netty/4524 | [
"timestamp(timedelta=4.0, similarity=0.8464062515270071)"
] | 0ec34b5f761f88bbb88def1216f5bae64a0a2d4b | 333bfa6aeee4878ff053aef05b6313f34273d22c | [
"I guess the root cause of this issue is at `HttpClientCodec.upgradeFrom()`, which removes the `Decoder` immediately. Perhaps we should remove it using `ctx.executor().execute()`?\n",
"To add a bit more, I'm using 101 to upgrade from HTTP/1 to HTTP/2. I sometimes get an `IllegalReferenceCountException`, a `First ... | [] | 2015-11-29T04:41:57Z | [
"defect"
] | IllegalReferenceCountException from HttpObjectDecoder when switching a protocol | I found `HttpObjectDecoder.decode()` raises an `IllegalReferenceCountException` at the `case UPGRADED` block. After pulling my hair out, I found that it can happen when a user removes the `HttpObjectDecoder` from the pipeline via `HttpClientUpgradeHandler`. The following buffer access trace should be a good hint:
```
Recent access records: 5
#5:
io.netty.buffer.AdvancedLeakAwareByteBuf.readBytes(AdvancedLeakAwareByteBuf.java:434)
io.netty.handler.codec.ByteToMessageDecoder.handlerRemoved(ByteToMessageDecoder.java:214)
io.netty.channel.DefaultChannelPipeline.callHandlerRemoved0(DefaultChannelPipeline.java:629)
io.netty.channel.DefaultChannelPipeline.callHandlerRemoved(DefaultChannelPipeline.java:623)
io.netty.channel.DefaultChannelPipeline.remove0(DefaultChannelPipeline.java:452)
io.netty.channel.DefaultChannelPipeline.remove(DefaultChannelPipeline.java:423)
io.netty.channel.DefaultChannelPipeline.remove(DefaultChannelPipeline.java:412)
io.netty.handler.codec.http.HttpClientCodec.upgradeFrom(HttpClientCodec.java:95)
io.netty.handler.codec.http.HttpClientUpgradeHandler.decode(HttpClientUpgradeHandler.java:230)
io.netty.handler.codec.http.HttpClientUpgradeHandler.decode(HttpClientUpgradeHandler.java:38)
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:163)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:155)
io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:276)
io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:354)
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:244)
io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:163)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:155)
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:950)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:125)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:510)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:467)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:381)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:353)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
java.lang.Thread.run(Thread.java:745)
```
It basically means:
1. HttpObjectDecoder decodes the upgrade response from the server. The HttpObjectDecoder enters the UPGRADED state
2. HttpClientUpgradeHandler upgrades the protocol and removes the HttpObjectDecoder from the pipeline. HttpObjectDecoder.handlerRemoved() releases the buffer, but we are still at HttpObjectDecoder.decode().
3. A user gets an IllegalReferenceCountException.
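The re-entrancy in steps 1–3 can be mimicked without Netty: if the "remove the decoder" work is deferred onto the event loop's task queue instead of run inline, the in-progress decode pass finishes before the handler's buffer is released. This toy model (all names illustrative) uses a plain FIFO queue as the event loop:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Deferring removal preserves ordering: the current "decode" task completes
// before the removal task runs, so the decoder's buffer is not released
// out from under it.
public final class DeferredRemovalSketch {
    static String run() {
        Queue<Runnable> eventLoop = new ArrayDeque<>();
        StringBuilder log = new StringBuilder();

        Runnable removeDecoder = () -> log.append("removed;");
        eventLoop.add(() -> {               // the in-progress decode pass
            eventLoop.add(removeDecoder);   // upgrade handler defers removal
            log.append("decode-finished;");
        });

        while (!eventLoop.isEmpty()) {
            eventLoop.poll().run();
        }
        return log.toString();
    }

    public static void main(String[] args) {
        System.out.println(run()); // decode-finished;removed;
    }
}
```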
| [
"codec-http/src/main/java/io/netty/handler/codec/http/HttpClientCodec.java"
] | [
"codec-http/src/main/java/io/netty/handler/codec/http/HttpClientCodec.java"
] | [] | diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/HttpClientCodec.java b/codec-http/src/main/java/io/netty/handler/codec/http/HttpClientCodec.java
index cae9055b29c..c19da6fec48 100644
--- a/codec-http/src/main/java/io/netty/handler/codec/http/HttpClientCodec.java
+++ b/codec-http/src/main/java/io/netty/handler/codec/http/HttpClientCodec.java
@@ -19,7 +19,9 @@
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerAppender;
import io.netty.channel.ChannelHandlerContext;
+import io.netty.channel.ChannelPipeline;
import io.netty.handler.codec.PrematureChannelClosureException;
+import io.netty.util.internal.OneTimeTask;
import java.util.ArrayDeque;
import java.util.List;
@@ -92,8 +94,15 @@ public HttpClientCodec(
*/
@Override
public void upgradeFrom(ChannelHandlerContext ctx) {
- ctx.pipeline().remove(Decoder.class);
- ctx.pipeline().remove(Encoder.class);
+ final ChannelPipeline p = ctx.pipeline();
+ // Remove the decoder later so that the decoder can enter the 'UPGRADED' state and forward the remaining data.
+ ctx.executor().execute(new OneTimeTask() {
+ @Override
+ public void run() {
+ p.remove(decoder());
+ }
+ });
+ p.remove(encoder());
}
/**
| null | train | train | 2015-11-26T22:56:00 | 2015-11-23T08:12:30Z | trustin | val |
netty/netty/4508_4525 | netty/netty | netty/netty/4508 | netty/netty/4525 | [
"timestamp(timedelta=14.0, similarity=0.8718083460186403)"
] | 0ec34b5f761f88bbb88def1216f5bae64a0a2d4b | a4489b24020c510aab7fac73356d9a5f8cc32cc4 | [
"Seems like we should use `HttpHeaderValues` here. The specification calls out the lowercase variant [1]. Should we change `HttpHeaderValues.UPGRADE` to lowercase, and should/are we using case insensitive compare when evaluating this header value?\n\n[1] https://tools.ietf.org/html/rfc7230#section-6.7\n\n> When Upg... | [] | 2015-11-29T04:48:33Z | [
"defect"
] | HttpClientUpgradeHandler.setUpgradeHeaders() appends a wrong 'upgrade'. | See [here](https://github.com/netty/netty/blob/92dee9e500bc141162e6c3a6ef91d8d22e50c449/codec-http/src/main/java/io/netty/handler/codec/http/HttpClientUpgradeHandler.java#L231).
It has to be `HttpHeaderValues.UPGRADE` instead. Most servers should be OK with `HttpHeaderNames.UPGRADE` because they do case-insensitive comparisons, but some (e.g. Jetty) require it to be `Upgrade` (no lowercase u).
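A tiny demonstration of the case-sensitivity point (nothing Netty-specific): servers that compare the token case-insensitively accept either spelling, while an exact string comparison, as described above for Jetty, accepts only one:

```java
// Case-insensitive vs. exact comparison of the header token "Upgrade".
public final class TokenCompareSketch {
    public static void main(String[] args) {
        String sent = "upgrade";
        System.out.println(sent.equalsIgnoreCase("Upgrade")); // true
        System.out.println(sent.equals("Upgrade"));           // false
    }
}
```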
| [
"codec-http/src/main/java/io/netty/handler/codec/http/HttpClientUpgradeHandler.java",
"codec-http/src/main/java/io/netty/handler/codec/http/HttpHeaderValues.java"
] | [
"codec-http/src/main/java/io/netty/handler/codec/http/HttpClientUpgradeHandler.java",
"codec-http/src/main/java/io/netty/handler/codec/http/HttpHeaderValues.java"
] | [] | diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/HttpClientUpgradeHandler.java b/codec-http/src/main/java/io/netty/handler/codec/http/HttpClientUpgradeHandler.java
index b7b3fda8e93..f8c452de24a 100644
--- a/codec-http/src/main/java/io/netty/handler/codec/http/HttpClientUpgradeHandler.java
+++ b/codec-http/src/main/java/io/netty/handler/codec/http/HttpClientUpgradeHandler.java
@@ -266,7 +266,7 @@ private void setUpgradeRequestHeaders(ChannelHandlerContext ctx, HttpRequest req
builder.append(part);
builder.append(',');
}
- builder.append(HttpHeaderNames.UPGRADE);
+ builder.append(HttpHeaderValues.UPGRADE);
request.headers().set(HttpHeaderNames.CONNECTION, builder.toString());
}
}
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/HttpHeaderValues.java b/codec-http/src/main/java/io/netty/handler/codec/http/HttpHeaderValues.java
index 9c75473acc5..b06fad57169 100644
--- a/codec-http/src/main/java/io/netty/handler/codec/http/HttpHeaderValues.java
+++ b/codec-http/src/main/java/io/netty/handler/codec/http/HttpHeaderValues.java
@@ -193,9 +193,9 @@ public final class HttpHeaderValues {
*/
public static final AsciiString TRAILERS = new AsciiString("trailers");
/**
- * {@code "Upgrade"}
+ * {@code "upgrade"}
*/
- public static final AsciiString UPGRADE = new AsciiString("Upgrade");
+ public static final AsciiString UPGRADE = new AsciiString("upgrade");
/**
* {@code "websocket"}
*/
| null | train | train | 2015-11-26T22:56:00 | 2015-11-24T08:55:57Z | trustin | val |
netty/netty/3852_4535 | netty/netty | netty/netty/3852 | netty/netty/4535 | [
"timestamp(timedelta=110249.0, similarity=0.8467921632658658)"
] | 613c8b22e1bd6a8adfe171d641b830a4382d021c | 64c369f82228c0a3c1417540468e37a2fe0d49e0 | [
"@louiscryan - Let me summarize the face value tradeoffs and follow up with some questions. Feel free to elaborate. It seems like goal is to potentially reduce the amount of data frames written (less data frame header overhead on the wire) at the cost of an extra iteration step before writing out data frames (unt... | [] | 2015-12-04T19:35:43Z | [] | HTTP2 should coalesce small writes into a single DATA frame | @Scottmitch @nmittler
This is the analog of
https://github.com/netty/netty/issues/3737
but is probably more useful in general though possibly more complicated to implement. The goal is to allow for more small data writes to fit into a single DATA frame. Given that we already implement a queue of DATA & HEADER frames in the outbound flow-controller we could just merge contiguous DATA frames in writePendingBytes when flushing the channel to achieve the desired effect.
The complication is the relative ordering of non flow-controlled frames such as RST_STREAM relative to those in the flow-controller. In this case we can address by simply draining the flow-controller for the relevant stream and then writing the RST.
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameWriter.java"
] | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameWriter.java"
] | [
"codec-http2/src/test/java/io/netty/handler/codec/http2/Http2FrameRoundtripTest.java"
] | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameWriter.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameWriter.java
index 53066118b2a..4c410e52e36 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameWriter.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameWriter.java
@@ -15,6 +15,13 @@
package io.netty.handler.codec.http2;
+import io.netty.buffer.ByteBuf;
+import io.netty.channel.ChannelFuture;
+import io.netty.channel.ChannelHandlerContext;
+import io.netty.channel.ChannelPromise;
+import io.netty.handler.codec.http2.Http2CodecUtil.SimpleChannelPromiseAggregator;
+import io.netty.handler.codec.http2.Http2FrameWriter.Configuration;
+
import static io.netty.buffer.Unpooled.directBuffer;
import static io.netty.buffer.Unpooled.unmodifiableBuffer;
import static io.netty.buffer.Unpooled.unreleasableBuffer;
@@ -53,16 +60,8 @@
import static io.netty.handler.codec.http2.Http2FrameTypes.SETTINGS;
import static io.netty.handler.codec.http2.Http2FrameTypes.WINDOW_UPDATE;
import static io.netty.util.internal.ObjectUtil.checkNotNull;
-import static java.lang.Math.max;
import static java.lang.Math.min;
-import io.netty.buffer.ByteBuf;
-import io.netty.channel.ChannelFuture;
-import io.netty.channel.ChannelHandlerContext;
-import io.netty.channel.ChannelPromise;
-import io.netty.handler.codec.http2.Http2CodecUtil.SimpleChannelPromiseAggregator;
-import io.netty.handler.codec.http2.Http2FrameWriter.Configuration;
-
/**
* A {@link Http2FrameWriter} that supports all frame types defined by the HTTP/2 specification.
*/
@@ -122,55 +121,71 @@ public void close() { }
@Override
public ChannelFuture writeData(ChannelHandlerContext ctx, int streamId, ByteBuf data,
- int padding, boolean endStream, ChannelPromise promise) {
+ final int padding, boolean endStream, ChannelPromise promise) {
final SimpleChannelPromiseAggregator promiseAggregator =
new SimpleChannelPromiseAggregator(promise, ctx.channel(), ctx.executor());
- final DataFrameHeader header = new DataFrameHeader(ctx, streamId);
- boolean needToReleaseHeaders = true;
- boolean needToReleaseData = true;
+ int i = 0;
+ ByteBuf headerBuffer = null;
+ final Http2Flags flags = new Http2Flags().paddingPresent(padding > 0);
+ final int dataPerFrame = maxFrameSize - (padding + flags.getPaddingPresenceFieldLength());
+ final int numFrames = data.readableBytes() / dataPerFrame + (data.readableBytes() % dataPerFrame == 0 ? 0 : 1);
+
try {
verifyStreamId(streamId, STREAM_ID);
verifyPadding(padding);
- boolean lastFrame;
- int remainingData = data.readableBytes();
- do {
- // Determine how much data and padding to write in this frame. Put all padding at the end.
- int frameDataBytes = min(remainingData, maxFrameSize);
- int framePaddingBytes = min(padding, max(0, (maxFrameSize - 1) - frameDataBytes));
-
- // Decrement the remaining counters.
- padding -= framePaddingBytes;
- remainingData -= frameDataBytes;
-
- // Determine whether or not this is the last frame to be sent.
- lastFrame = remainingData == 0 && padding == 0;
-
- // Only the last frame is not retained. Until then, the outer finally must release.
- ByteBuf frameHeader = header.slice(frameDataBytes, framePaddingBytes, lastFrame && endStream);
- needToReleaseHeaders = !lastFrame;
- ctx.write(lastFrame ? frameHeader : frameHeader.retain(), promiseAggregator.newPromise());
-
- // Write the frame data.
- ByteBuf frameData = data.readSlice(frameDataBytes);
- // Only the last frame is not retained. Until then, the outer finally must release.
- needToReleaseData = !lastFrame;
- ctx.write(lastFrame ? frameData : frameData.retain(), promiseAggregator.newPromise());
-
- // Write the frame padding.
- if (framePaddingBytes > 0) {
- ctx.write(ZERO_BUFFER.slice(0, framePaddingBytes), promiseAggregator.newPromise());
+ // Initialize the header portion of the frame which will be shared by frames (0, n).
+ // * 2 because we may have at most 2 different headers frames.
+ headerBuffer = ctx.alloc().buffer(DATA_FRAME_HEADER_LENGTH << 1);
+ final int numFramesLessOne = numFrames - 1;
+ if (i < numFramesLessOne) {
+ writeFrameHeaderInternal(headerBuffer,
+ dataPerFrame + padding + flags.getPaddingPresenceFieldLength(),
+ DATA, flags, streamId);
+ writePaddingLength(headerBuffer, padding);
+
+ // Write frame header + data + padding for frames (0, n).
+ if (padding > 0) {
+ do {
+ ctx.write(headerBuffer.slice().retain(), promiseAggregator.newPromise());
+ ctx.write(data.readSlice(dataPerFrame).retain(), promiseAggregator.newPromise());
+ ctx.write(ZERO_BUFFER.slice(0, padding), promiseAggregator.newPromise());
+ } while (++i < numFramesLessOne);
+ } else {
+ do {
+ ctx.write(headerBuffer.slice().retain(), promiseAggregator.newPromise());
+ ctx.write(data.readSlice(dataPerFrame).retain(), promiseAggregator.newPromise());
+ } while (++i < numFramesLessOne);
}
- } while (!lastFrame);
+ // Skip enough bytes so we are on the unused 2nd half portion of the buffer.
+ headerBuffer.readerIndex(headerBuffer.writerIndex());
+ }
+
+ // Initialize the header portion of the frame for frame [n].
+ flags.endOfStream(endStream);
+ writeFrameHeaderInternal(headerBuffer,
+ data.readableBytes() + padding + flags.getPaddingPresenceFieldLength(),
+ DATA, flags, streamId);
+ writePaddingLength(headerBuffer, padding);
+
+ // Write frame header + data + padding for frame [n].
+ ByteBuf headerBuffer2 = headerBuffer; // make sure headerBuffer isn't released in the catch block.
+ headerBuffer = null;
+ ctx.write(headerBuffer2, promiseAggregator.newPromise());
+ ++i; // increment i so we don't release in the catch block.
+ ctx.write(data.readSlice(data.readableBytes()), promiseAggregator.newPromise());
+ if (padding > 0) { // Write the frame padding.
+ ctx.write(ZERO_BUFFER.slice(0, padding), promiseAggregator.newPromise());
+ }
return promiseAggregator.doneAllocatingPromises();
} catch (Throwable t) {
- if (needToReleaseHeaders) {
- header.release();
- }
- if (needToReleaseData) {
+ if (i < numFrames) {
data.release();
}
+ if (headerBuffer != null) {
+ headerBuffer.release();
+ }
return promiseAggregator.setFailure(t);
}
}
@@ -566,51 +581,4 @@ private static void verifyPingPayload(ByteBuf data) {
throw new IllegalArgumentException("Opaque data must be " + PING_FRAME_PAYLOAD_LENGTH + " bytes");
}
}
-
- /**
- * Utility class that manages the creation of frame header buffers for {@code DATA} frames. Attempts
- * to reuse the same buffer repeatedly when splitting data into multiple frames.
- */
- private static final class DataFrameHeader {
- private final int streamId;
- private final ByteBuf buffer;
- private final Http2Flags flags = new Http2Flags();
- private int prevData;
- private int prevPadding;
- private ByteBuf frameHeader;
-
- DataFrameHeader(ChannelHandlerContext ctx, int streamId) {
- // All padding will be put at the end, so in the worst case we need 3 headers:
- // a repeated no-padding frame of maxFrameSize, a frame that has part data and part
- // padding, and a frame that has the remainder of the padding.
- buffer = ctx.alloc().buffer(3 * DATA_FRAME_HEADER_LENGTH);
- this.streamId = streamId;
- }
-
- /**
- * Gets the frame header buffer configured for the current frame.
- */
- ByteBuf slice(int data, int padding, boolean endOfStream) {
- // Since we're reusing the current frame header whenever possible, check if anything changed
- // that requires a new header.
- if (data != prevData || padding != prevPadding
- || endOfStream != flags.endOfStream() || frameHeader == null) {
- // Update the header state.
- prevData = data;
- prevPadding = padding;
- flags.paddingPresent(padding > 0);
- flags.endOfStream(endOfStream);
- frameHeader = buffer.readSlice(DATA_FRAME_HEADER_LENGTH).writerIndex(0);
-
- int payloadLength = data + padding + flags.getPaddingPresenceFieldLength();
- writeFrameHeaderInternal(frameHeader, payloadLength, DATA, flags, streamId);
- writePaddingLength(frameHeader, padding);
- }
- return frameHeader.slice();
- }
-
- void release() {
- buffer.release();
- }
- }
}
| diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2FrameRoundtripTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2FrameRoundtripTest.java
index 29cdc162bf5..297fa57bb33 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2FrameRoundtripTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2FrameRoundtripTest.java
@@ -197,7 +197,7 @@ public void largeDataFrameShouldMatch() throws Exception {
for (int framePadding : paddingCaptor.getAllValues()) {
totalReadPadding += framePadding;
}
- assertEquals(originalPadding, totalReadPadding);
+ assertEquals(originalPadding * paddingCaptor.getAllValues().size(), totalReadPadding);
}
@Test
| train | train | 2015-12-04T20:08:01 | 2015-06-02T00:40:41Z | louiscryan | val |
netty/netty/4457_4537 | netty/netty | netty/netty/4457 | netty/netty/4537 | [
"timestamp(timedelta=477.0, similarity=0.8917241508522582)"
] | 2fefb2f79c30bcda5b43815d64174aac900e740c | ff55d5a5e4fe2995319c152a172516a33c533a98 | [
"@Scottmitch @nmittler please have a look guys :)\n",
"I wonder if `HttpConversionUtil` should just group all multi-valued headers into single entries (i.e. not treat `Cookie` as special here). It wouldn't be invalid and I suspect would perform better end-to-end. \n\n@Scottmitch WDYT?\n",
"@nmittler +1\n",
"@... | [
"yikes ... was this missing? :)\n",
"Not new code, but why does the `if` statement above check for the value of `trailers`? Seems like checking the name equals `te` is sufficient, no?\n",
"final?\n",
"yikes :)\n",
"Are we guaranteed that we need to skip exactly 2 characters?\n",
"Maybe check for the excep... | 2015-12-04T19:47:00Z | [
"defect"
] | HttpConversionUtil - Incorrect conversion of Cookie header | Netty Version: master (latest snapshot: a6816bd59ef2923d82af5b7a7f3b722507a56a3a)
The browser (I used Chrome 46.0.2490.80 for testing) sent an HTTP/2 request containing multiple cookies to a Netty service, and the `HttpConversionUtil` generated multiple `Cookie` header entries in the HTTP/1.x request. According to [one stackoverflow post](http://stackoverflow.com/questions/16305814/are-multiple-cookie-headers-allowed-in-an-http-request), this is incorrect behaviour. The `HttpConversionUtil` should generate a single `Cookie` header containing all cookie values.
// cc @Scottmitch @nmittler - I could provide a patch, but I'm not too sure the best approach to take. Considerations about how to extract the cookies, and reformat them into a single header have to be made. If you could give me some tips, I might be able to come up with a PR.
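The approach the fix eventually took follows RFC 7540, Section 8.1.2.5: a single `Cookie` header is split into per-cookie entries on the HTTP/2 side (they compress better under HPACK), and the entries are recombined with `"; "` on the HTTP/1.x side. Below is a minimal plain-`String` sketch of that round trip; the `CookieCrumbDemo` class and its methods are hypothetical helpers, whereas Netty's actual code operates on `AsciiString` directly.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class CookieCrumbDemo {

    /**
     * HTTP/1.x -> HTTP/2 direction: split one Cookie header value on "; "
     * into separate cookie-pair entries (RFC 7540, Section 8.1.2.5).
     */
    public static List<String> split(String cookieHeader) {
        return new ArrayList<String>(Arrays.asList(cookieHeader.split("; ")));
    }

    /**
     * HTTP/2 -> HTTP/1.x direction: recombine multiple cookie entries into
     * a single header value, joined with "; " (RFC 6265, Section 4.2.1).
     */
    public static String join(List<String> crumbs) {
        StringBuilder sb = new StringBuilder();
        for (String crumb : crumbs) {
            if (sb.length() > 0) {
                sb.append("; ");
            }
            sb.append(crumb);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        List<String> crumbs = split("a=b; c=d; e=f");
        System.out.println(crumbs);       // prints [a=b, c=d, e=f]
        System.out.println(join(crumbs)); // prints a=b; c=d; e=f
    }
}
```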
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Headers.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/HttpConversionUtil.java",
"common/src/main/java/io/netty/util/AsciiString.java",
"common/src/main/java/io/netty/util/ByteProcessor.java"
] | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Headers.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/HttpConversionUtil.java",
"common/src/main/java/io/netty/util/AsciiString.java",
"common/src/main/java/io/netty/util/ByteProcessor.java"
] | [
"codec-http2/src/test/java/io/netty/handler/codec/http2/HttpToHttp2ConnectionHandlerTest.java",
"codec-http2/src/test/java/io/netty/handler/codec/http2/InboundHttp2ToHttpAdapterTest.java",
"common/src/test/java/io/netty/util/AsciiStringMemoryTest.java"
] | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Headers.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Headers.java
index 5b54fd61877..63bb5617872 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Headers.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Headers.java
@@ -14,17 +14,16 @@
*/
package io.netty.handler.codec.http2;
+import static io.netty.handler.codec.http2.Http2Error.PROTOCOL_ERROR;
+import static io.netty.handler.codec.http2.Http2Exception.connectionError;
+import static io.netty.util.AsciiString.CASE_SENSITIVE_HASHER;
+import static io.netty.util.AsciiString.isUpperCase;
import io.netty.handler.codec.CharSequenceValueConverter;
import io.netty.handler.codec.DefaultHeaders;
import io.netty.util.AsciiString;
import io.netty.util.ByteProcessor;
import io.netty.util.internal.PlatformDependent;
-import static io.netty.handler.codec.http2.Http2Error.PROTOCOL_ERROR;
-import static io.netty.handler.codec.http2.Http2Exception.connectionError;
-import static io.netty.util.AsciiString.CASE_SENSITIVE_HASHER;
-import static io.netty.util.AsciiString.isUpperCase;
-
public class DefaultHttp2Headers
extends DefaultHeaders<CharSequence, CharSequence, Http2Headers> implements Http2Headers {
private static final ByteProcessor HTTP2_NAME_VALIDATOR_PROCESSOR = new ByteProcessor() {
@@ -113,6 +112,20 @@ public Http2Headers clear() {
return super.clear();
}
+ @Override
+ public boolean equals(Object o) {
+ if (!(o instanceof Http2Headers)) {
+ return false;
+ }
+
+ return equals((Http2Headers) o, CASE_SENSITIVE_HASHER);
+ }
+
+ @Override
+ public int hashCode() {
+ return hashCode(CASE_SENSITIVE_HASHER);
+ }
+
@Override
public Http2Headers method(CharSequence value) {
set(PseudoHeaderName.METHOD.value(), value);
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/HttpConversionUtil.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/HttpConversionUtil.java
index 87e2123e14b..c81166490d2 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/HttpConversionUtil.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/HttpConversionUtil.java
@@ -14,6 +14,18 @@
*/
package io.netty.handler.codec.http2;
+import static io.netty.handler.codec.http.HttpScheme.HTTP;
+import static io.netty.handler.codec.http.HttpScheme.HTTPS;
+import static io.netty.handler.codec.http.HttpUtil.isAsteriskForm;
+import static io.netty.handler.codec.http.HttpUtil.isOriginForm;
+import static io.netty.handler.codec.http2.Http2Error.PROTOCOL_ERROR;
+import static io.netty.handler.codec.http2.Http2Exception.connectionError;
+import static io.netty.handler.codec.http2.Http2Exception.streamError;
+import static io.netty.util.AsciiString.EMPTY_STRING;
+import static io.netty.util.ByteProcessor.FIND_SEMI_COLON;
+import static io.netty.util.internal.ObjectUtil.checkNotNull;
+import static io.netty.util.internal.StringUtil.isNullOrEmpty;
+import static io.netty.util.internal.StringUtil.length;
import io.netty.handler.codec.http.DefaultFullHttpRequest;
import io.netty.handler.codec.http.DefaultFullHttpResponse;
import io.netty.handler.codec.http.FullHttpMessage;
@@ -35,18 +47,6 @@
import java.util.Iterator;
import java.util.Map.Entry;
-import static io.netty.handler.codec.http.HttpScheme.HTTP;
-import static io.netty.handler.codec.http.HttpScheme.HTTPS;
-import static io.netty.handler.codec.http.HttpUtil.isAsteriskForm;
-import static io.netty.handler.codec.http.HttpUtil.isOriginForm;
-import static io.netty.handler.codec.http2.Http2Error.PROTOCOL_ERROR;
-import static io.netty.handler.codec.http2.Http2Exception.connectionError;
-import static io.netty.handler.codec.http2.Http2Exception.streamError;
-import static io.netty.util.AsciiString.EMPTY_STRING;
-import static io.netty.util.internal.ObjectUtil.checkNotNull;
-import static io.netty.util.internal.StringUtil.isNullOrEmpty;
-import static io.netty.util.internal.StringUtil.length;
-
/**
* Provides utility methods and constants for the HTTP/2 to HTTP conversion
*/
@@ -330,9 +330,33 @@ public static void toHttp2Headers(HttpHeaders inHeaders, Http2Headers out) throw
final AsciiString aName = AsciiString.of(entry.getKey()).toLowerCase();
if (!HTTP_TO_HTTP2_HEADER_BLACKLIST.contains(aName)) {
// https://tools.ietf.org/html/rfc7540#section-8.1.2.2 makes a special exception for TE
- if (!aName.contentEqualsIgnoreCase(HttpHeaderNames.TE) ||
- AsciiString.contentEqualsIgnoreCase(entry.getValue(), HttpHeaderValues.TRAILERS)) {
- out.add(aName, AsciiString.of(entry.getValue()));
+ if (aName.contentEqualsIgnoreCase(HttpHeaderNames.TE) &&
+ !AsciiString.contentEqualsIgnoreCase(entry.getValue(), HttpHeaderValues.TRAILERS)) {
+ throw new IllegalArgumentException("Invalid value for " + HttpHeaderNames.TE + ": " +
+ entry.getValue());
+ }
+ if (aName.contentEqualsIgnoreCase(HttpHeaderNames.COOKIE)) {
+ AsciiString value = AsciiString.of(entry.getValue());
+ // split up cookies to allow for better compression
+ // https://tools.ietf.org/html/rfc7540#section-8.1.2.5
+ int index = value.forEachByte(FIND_SEMI_COLON);
+ if (index != -1) {
+ int start = 0;
+ do {
+ out.add(HttpHeaderNames.COOKIE, value.subSequence(start, index, false));
+ // skip 2 characters "; " (see https://tools.ietf.org/html/rfc6265#section-4.2.1)
+ start = index + 2;
+ } while (start < value.length() &&
+ (index = value.forEachByte(start, value.length() - start, FIND_SEMI_COLON)) != -1);
+ if (start >= value.length()) {
+ throw new IllegalArgumentException("cookie value is of unexpected format: " + value);
+ }
+ out.add(HttpHeaderNames.COOKIE, value.subSequence(start, value.length(), false));
+ } else {
+ out.add(HttpHeaderNames.COOKIE, value);
+ }
+ } else {
+ out.add(aName, entry.getValue());
}
}
}
@@ -449,7 +473,15 @@ public void translate(Entry<CharSequence, CharSequence> entry) throws Http2Excep
throw streamError(streamId, PROTOCOL_ERROR,
"Invalid HTTP/2 header '%s' encountered in translation to HTTP/1.x", name);
}
- output.add(AsciiString.of(name), AsciiString.of(value));
+ if (HttpHeaderNames.COOKIE.equals(name)) {
+ // combine the cookie values into 1 header entry.
+ // https://tools.ietf.org/html/rfc7540#section-8.1.2.5
+ String existingCookie = output.get(HttpHeaderNames.COOKIE);
+ output.set(HttpHeaderNames.COOKIE,
+ (existingCookie != null) ? (existingCookie + "; " + value) : value);
+ } else {
+ output.add(name, value);
+ }
}
}
}
diff --git a/common/src/main/java/io/netty/util/AsciiString.java b/common/src/main/java/io/netty/util/AsciiString.java
index 38ce3243e8e..ceb5bf0ca59 100644
--- a/common/src/main/java/io/netty/util/AsciiString.java
+++ b/common/src/main/java/io/netty/util/AsciiString.java
@@ -276,7 +276,7 @@ public int forEachByte(int index, int length, ByteProcessor visitor) throws Exce
}
private int forEachByte0(int index, int length, ByteProcessor visitor) throws Exception {
- final int len = offset + length;
+ final int len = offset + index + length;
for (int i = offset + index; i < len; ++i) {
if (!visitor.process(value[i])) {
return i - offset;
diff --git a/common/src/main/java/io/netty/util/ByteProcessor.java b/common/src/main/java/io/netty/util/ByteProcessor.java
index 2847813c6f0..ef5929e8b64 100644
--- a/common/src/main/java/io/netty/util/ByteProcessor.java
+++ b/common/src/main/java/io/netty/util/ByteProcessor.java
@@ -120,6 +120,16 @@ public boolean process(byte value) throws Exception {
}
};
+ /**
+ * Aborts on a {@code CR (';')}.
+ */
+ ByteProcessor FIND_SEMI_COLON = new ByteProcessor() {
+ @Override
+ public boolean process(byte value) throws Exception {
+ return value != ';';
+ }
+ };
+
/**
* @return {@code true} if the processor wants to continue the loop and handle the next byte in the buffer.
* {@code false} if the processor wants to stop handling bytes and abort the loop.
| diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/HttpToHttp2ConnectionHandlerTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/HttpToHttp2ConnectionHandlerTest.java
index 55d9404fc72..f8727b0dc2b 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/HttpToHttp2ConnectionHandlerTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/HttpToHttp2ConnectionHandlerTest.java
@@ -141,6 +141,27 @@ public void testHeadersOnlyRequest() throws Exception {
verifyHeadersOnly(http2Headers, writePromise, clientChannel.writeAndFlush(request, writePromise));
}
+ @Test
+ public void testMultipleCookieEntriesAreCombined() throws Exception {
+ bootstrapEnv(2, 1, 0);
+ final FullHttpRequest request = new DefaultFullHttpRequest(HTTP_1_1, GET,
+ "http://my-user_name@www.example.org:5555/example");
+ final HttpHeaders httpHeaders = request.headers();
+ httpHeaders.setInt(HttpConversionUtil.ExtensionHeaderNames.STREAM_ID.text(), 5);
+ httpHeaders.set(HttpHeaderNames.HOST, "my-user_name@www.example.org:5555");
+ httpHeaders.set(HttpConversionUtil.ExtensionHeaderNames.SCHEME.text(), "http");
+ httpHeaders.set(HttpHeaderNames.COOKIE, "a=b; c=d; e=f");
+ final Http2Headers http2Headers =
+ new DefaultHttp2Headers().method(new AsciiString("GET")).path(new AsciiString("/example"))
+ .authority(new AsciiString("www.example.org:5555")).scheme(new AsciiString("http"))
+ .add(HttpHeaderNames.COOKIE, "a=b")
+ .add(HttpHeaderNames.COOKIE, "c=d")
+ .add(HttpHeaderNames.COOKIE, "e=f");
+
+ ChannelPromise writePromise = newPromise();
+ verifyHeadersOnly(http2Headers, writePromise, clientChannel.writeAndFlush(request, writePromise));
+ }
+
@Test
public void testOriginFormRequestTargetHandled() throws Exception {
bootstrapEnv(2, 1, 0);
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/InboundHttp2ToHttpAdapterTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/InboundHttp2ToHttpAdapterTest.java
index c057fdda8e5..9d355a5b3d0 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/InboundHttp2ToHttpAdapterTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/InboundHttp2ToHttpAdapterTest.java
@@ -158,6 +158,75 @@ public void run() {
}
}
+ @Test
+ public void clientRequestSingleHeaderCookieSplitIntoMultipleEntries() throws Exception {
+ boostrapEnv(1, 1, 1);
+ final FullHttpRequest request = new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET,
+ "/some/path/resource2", true);
+ try {
+ HttpHeaders httpHeaders = request.headers();
+ httpHeaders.set(HttpConversionUtil.ExtensionHeaderNames.SCHEME.text(), "https");
+ httpHeaders.set(HttpHeaderNames.HOST, "example.org");
+ httpHeaders.setInt(HttpConversionUtil.ExtensionHeaderNames.STREAM_ID.text(), 3);
+ httpHeaders.setInt(HttpHeaderNames.CONTENT_LENGTH, 0);
+ httpHeaders.set(HttpHeaderNames.COOKIE, "a=b; c=d; e=f");
+ final Http2Headers http2Headers = new DefaultHttp2Headers().method(new AsciiString("GET")).
+ scheme(new AsciiString("https")).authority(new AsciiString("example.org"))
+ .path(new AsciiString("/some/path/resource2"))
+ .add(HttpHeaderNames.COOKIE, "a=b")
+ .add(HttpHeaderNames.COOKIE, "c=d")
+ .add(HttpHeaderNames.COOKIE, "e=f");
+ runInChannel(clientChannel, new Http2Runnable() {
+ @Override
+ public void run() {
+ frameWriter.writeHeaders(ctxClient(), 3, http2Headers, 0, true, newPromiseClient());
+ ctxClient().flush();
+ }
+ });
+ awaitRequests();
+ ArgumentCaptor<FullHttpMessage> requestCaptor = ArgumentCaptor.forClass(FullHttpMessage.class);
+ verify(serverListener).messageReceived(requestCaptor.capture());
+ capturedRequests = requestCaptor.getAllValues();
+ assertEquals(request, capturedRequests.get(0));
+ } finally {
+ request.release();
+ }
+ }
+
+ @Test
+ public void clientRequestSingleHeaderCookieSplitIntoMultipleEntries2() throws Exception {
+ boostrapEnv(1, 1, 1);
+ final FullHttpRequest request = new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET,
+ "/some/path/resource2", true);
+ try {
+ HttpHeaders httpHeaders = request.headers();
+ httpHeaders.set(HttpConversionUtil.ExtensionHeaderNames.SCHEME.text(), "https");
+ httpHeaders.set(HttpHeaderNames.HOST, "example.org");
+ httpHeaders.setInt(HttpConversionUtil.ExtensionHeaderNames.STREAM_ID.text(), 3);
+ httpHeaders.setInt(HttpHeaderNames.CONTENT_LENGTH, 0);
+ httpHeaders.set(HttpHeaderNames.COOKIE, "a=b; c=d; e=f");
+ final Http2Headers http2Headers = new DefaultHttp2Headers().method(new AsciiString("GET")).
+ scheme(new AsciiString("https")).authority(new AsciiString("example.org"))
+ .path(new AsciiString("/some/path/resource2"))
+ .add(HttpHeaderNames.COOKIE, "a=b; c=d")
+ .add(HttpHeaderNames.COOKIE, "e=f");
+ runInChannel(clientChannel, new Http2Runnable() {
+ @Override
+ public void run() {
+ frameWriter.writeHeaders(ctxClient(), 3, http2Headers, 0, true, newPromiseClient());
+ ctxClient().flush();
+ }
+ });
+ awaitRequests();
+ ArgumentCaptor<FullHttpMessage> requestCaptor = ArgumentCaptor.forClass(FullHttpMessage.class);
+ verify(serverListener).messageReceived(requestCaptor.capture());
+ capturedRequests = requestCaptor.getAllValues();
+ assertEquals(request, capturedRequests.get(0));
+ } finally {
+ request.release();
+ }
+ }
+
@Test
public void clientRequestSingleHeaderNonAsciiShouldThrow() throws Exception {
boostrapEnv(1, 1, 1);
diff --git a/common/src/test/java/io/netty/util/AsciiStringMemoryTest.java b/common/src/test/java/io/netty/util/AsciiStringMemoryTest.java
index 808ace76a7e..0f2468aed6a 100644
--- a/common/src/test/java/io/netty/util/AsciiStringMemoryTest.java
+++ b/common/src/test/java/io/netty/util/AsciiStringMemoryTest.java
@@ -17,6 +17,7 @@
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotEquals;
+import io.netty.util.ByteProcessor.IndexOfProcessor;
import java.util.Random;
import java.util.concurrent.atomic.AtomicReference;
@@ -102,6 +103,18 @@ public boolean process(byte value) throws Exception {
assertEquals(bAsciiString.length(), bCount.get().intValue());
}
+ @Test
+ public void forEachWithIndexEndTest() throws Exception {
+ assertNotEquals(-1, aAsciiString.forEachByte(aAsciiString.length() - 1,
+ 1, new IndexOfProcessor(aAsciiString.byteAt(aAsciiString.length() - 1))));
+ }
+
+ @Test
+ public void forEachWithIndexBeginTest() throws Exception {
+ assertNotEquals(-1, aAsciiString.forEachByte(0,
+ 1, new IndexOfProcessor(aAsciiString.byteAt(0))));
+ }
+
@Test
public void forEachDescTest() throws Exception {
final AtomicReference<Integer> aCount = new AtomicReference<Integer>(0);
@@ -128,6 +141,18 @@ public boolean process(byte value) throws Exception {
assertEquals(bAsciiString.length(), bCount.get().intValue());
}
+ @Test
+ public void forEachDescWithIndexEndTest() throws Exception {
+ assertNotEquals(-1, bAsciiString.forEachByteDesc(bAsciiString.length() - 1,
+ 1, new IndexOfProcessor(bAsciiString.byteAt(bAsciiString.length() - 1))));
+ }
+
+ @Test
+ public void forEachDescWithIndexBeginTest() throws Exception {
+ assertNotEquals(-1, bAsciiString.forEachByteDesc(0,
+ 1, new IndexOfProcessor(bAsciiString.byteAt(0))));
+ }
+
@Test
public void subSequenceTest() {
final int start = 12;
| train | train | 2015-12-07T11:40:01 | 2015-11-09T05:58:54Z | blucas | val |
netty/netty/4505_4552 | netty/netty | netty/netty/4505 | netty/netty/4552 | [
"timestamp(timedelta=39.0, similarity=0.8664136669792186)"
] | 22ad5c502fe558662f7fc2b377a8dfb559e4df9f | c62538dfa6bfb3fc52cb3fc55a9465d8a21706a1 | [
"[This is the Workaround](https://github.com/saxsys/SynchronizeFX/blob/9789b889a160051c4f74097446a4bc563f2beabb/transmitter/netty-transmitter/src/main/java/de/saxsys/synchronizefx/netty/websockets/WhiteSpaceInPathWebSocketClientHandshaker13.java) i use. I basically use [getRawPath()](http://docs.oracle.com/javase/8... | [
"Extract the return value of `getQuery()`\n",
"Fixed...\n"
] | 2015-12-10T08:45:31Z | [
"defect"
] | Spaces in WebSocket URIs not handled correctly | WebSocket connections to URIs that have spaces in the path cannot be established in Netty 4.0.21.
The problematic code seems to be [this line](https://github.com/netty/netty/blob/master/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13.java#L129). If the URI contains a space encoded as %20, the [getPath](http://docs.oracle.com/javase/8/docs/api/java/net/URI.html#getPath--) method decodes it back to a plain space. This decoded space is then sent in the HTTP request at some deeper layer without being properly re-encoded.
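The difference is easy to demonstrate with `java.net.URI`: `getPath()` percent-decodes the path, while `getRawPath()` leaves it in wire form. The sketch below mirrors the `rawPath(...)` helper the fix introduces (assumptions: the `RawPathDemo` class name is hypothetical, and it uses `getRawQuery()` for the query part for the same decoding reason, where the actual patch used `getQuery()`).

```java
import java.net.URI;

public class RawPathDemo {

    /**
     * Builds the request-line path from the raw (still percent-encoded)
     * URI components, falling back to "/" for an empty path.
     */
    public static String requestPath(URI wsUrl) {
        String path = wsUrl.getRawPath();   // keeps "%20" intact
        String query = wsUrl.getRawQuery();
        if (query != null && !query.isEmpty()) {
            path = path + '?' + query;
        }
        return path == null || path.isEmpty() ? "/" : path;
    }

    public static void main(String[] args) {
        URI uri = URI.create("ws://example.com/a%20b/chat");
        System.out.println(uri.getPath());    // prints /a b/chat   (decoded!)
        System.out.println(uri.getRawPath()); // prints /a%20b/chat (safe to send)
        System.out.println(requestPath(uri)); // prints /a%20b/chat
    }
}
```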
| [
"codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker.java",
"codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker00.java",
"codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker07.java",
"codec-http/... | [
"codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker.java",
"codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker00.java",
"codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker07.java",
"codec-http/... | [
"codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker00Test.java",
"codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker07Test.java",
"codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker08Test.java",
... | diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker.java b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker.java
index d67b85f3798..42ef07d704f 100644
--- a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker.java
+++ b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker.java
@@ -405,4 +405,17 @@ public ChannelFuture close(Channel channel, CloseWebSocketFrame frame, ChannelPr
}
return channel.writeAndFlush(frame, promise);
}
+
+ /**
+ * Return the constructed raw path for the give {@link URI}.
+ */
+ static String rawPath(URI wsURL) {
+ String path = wsURL.getRawPath();
+ String query = wsURL.getQuery();
+ if (query != null && !query.isEmpty()) {
+ path = path + '?' + query;
+ }
+
+ return path == null || path.isEmpty() ? "/" : path;
+ }
}
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker00.java b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker00.java
index 4ec7833e666..7ebc4bb7085 100644
--- a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker00.java
+++ b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker00.java
@@ -123,14 +123,7 @@ protected FullHttpRequest newHandshakeRequest() {
// Get path
URI wsURL = uri();
- String path = wsURL.getPath();
- if (wsURL.getQuery() != null && !wsURL.getQuery().isEmpty()) {
- path = wsURL.getPath() + '?' + wsURL.getQuery();
- }
-
- if (path == null || path.isEmpty()) {
- path = "/";
- }
+ String path = rawPath(wsURL);
// Format request
FullHttpRequest request = new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, path);
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker07.java b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker07.java
index 6db3d4c5013..96d503d1a04 100644
--- a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker07.java
+++ b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker07.java
@@ -92,14 +92,7 @@ public WebSocketClientHandshaker07(URI webSocketURL, WebSocketVersion version, S
protected FullHttpRequest newHandshakeRequest() {
// Get path
URI wsURL = uri();
- String path = wsURL.getPath();
- if (wsURL.getQuery() != null && !wsURL.getQuery().isEmpty()) {
- path = wsURL.getPath() + '?' + wsURL.getQuery();
- }
-
- if (path == null || path.isEmpty()) {
- path = "/";
- }
+ String path = rawPath(wsURL);
// Get 16 bit nonce and base 64 encode it
byte[] nonce = WebSocketUtil.randomBytes(16);
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker08.java b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker08.java
index d2eee599cb9..d30d04bb8d7 100644
--- a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker08.java
+++ b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker08.java
@@ -92,14 +92,7 @@ public WebSocketClientHandshaker08(URI webSocketURL, WebSocketVersion version, S
protected FullHttpRequest newHandshakeRequest() {
// Get path
URI wsURL = uri();
- String path = wsURL.getPath();
- if (wsURL.getQuery() != null && !wsURL.getQuery().isEmpty()) {
- path = wsURL.getPath() + '?' + wsURL.getQuery();
- }
-
- if (path == null || path.isEmpty()) {
- path = "/";
- }
+ String path = rawPath(wsURL);
// Get 16 bit nonce and base 64 encode it
byte[] nonce = WebSocketUtil.randomBytes(16);
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13.java b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13.java
index a3d9b9c272a..ef88adc6c12 100644
--- a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13.java
+++ b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13.java
@@ -92,14 +92,7 @@ public WebSocketClientHandshaker13(URI webSocketURL, WebSocketVersion version, S
protected FullHttpRequest newHandshakeRequest() {
// Get path
URI wsURL = uri();
- String path = wsURL.getPath();
- if (wsURL.getQuery() != null && !wsURL.getQuery().isEmpty()) {
- path = wsURL.getPath() + '?' + wsURL.getQuery();
- }
-
- if (path == null || path.isEmpty()) {
- path = "/";
- }
+ String path = rawPath(wsURL);
// Get 16 bit nonce and base 64 encode it
byte[] nonce = WebSocketUtil.randomBytes(16);
| diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker00Test.java b/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker00Test.java
new file mode 100644
index 00000000000..100abf694d4
--- /dev/null
+++ b/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker00Test.java
@@ -0,0 +1,25 @@
+/*
+ * Copyright 2015 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License,
+ * version 2.0 (the "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+package io.netty.handler.codec.http.websocketx;
+
+import java.net.URI;
+
+public class WebSocketClientHandshaker00Test extends WebSocketClientHandshakerTest {
+ @Override
+ protected WebSocketClientHandshaker newHandshaker(URI uri) {
+ return new WebSocketClientHandshaker00(uri, WebSocketVersion.V00, null, null, 1024);
+ }
+}
diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker07Test.java b/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker07Test.java
new file mode 100644
index 00000000000..168a2458d17
--- /dev/null
+++ b/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker07Test.java
@@ -0,0 +1,25 @@
+/*
+ * Copyright 2015 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License,
+ * version 2.0 (the "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+package io.netty.handler.codec.http.websocketx;
+
+import java.net.URI;
+
+public class WebSocketClientHandshaker07Test extends WebSocketClientHandshakerTest {
+ @Override
+ protected WebSocketClientHandshaker newHandshaker(URI uri) {
+ return new WebSocketClientHandshaker07(uri, WebSocketVersion.V07, null, false, null, 1024);
+ }
+}
diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker08Test.java b/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker08Test.java
new file mode 100644
index 00000000000..249bd958fb0
--- /dev/null
+++ b/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker08Test.java
@@ -0,0 +1,25 @@
+/*
+ * Copyright 2015 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License,
+ * version 2.0 (the "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+package io.netty.handler.codec.http.websocketx;
+
+import java.net.URI;
+
+public class WebSocketClientHandshaker08Test extends WebSocketClientHandshakerTest {
+ @Override
+ protected WebSocketClientHandshaker newHandshaker(URI uri) {
+ return new WebSocketClientHandshaker07(uri, WebSocketVersion.V08, null, false, null, 1024);
+ }
+}
diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13Test.java b/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13Test.java
new file mode 100644
index 00000000000..2bc2e691b22
--- /dev/null
+++ b/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshaker13Test.java
@@ -0,0 +1,25 @@
+/*
+ * Copyright 2015 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License,
+ * version 2.0 (the "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+package io.netty.handler.codec.http.websocketx;
+
+import java.net.URI;
+
+public class WebSocketClientHandshaker13Test extends WebSocketClientHandshakerTest {
+ @Override
+ protected WebSocketClientHandshaker newHandshaker(URI uri) {
+ return new WebSocketClientHandshaker13(uri, WebSocketVersion.V13, null, false, null, 1024);
+ }
+}
diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshakerTest.java b/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshakerTest.java
new file mode 100644
index 00000000000..e45c92349eb
--- /dev/null
+++ b/codec-http/src/test/java/io/netty/handler/codec/http/websocketx/WebSocketClientHandshakerTest.java
@@ -0,0 +1,39 @@
+/*
+ * Copyright 2015 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License,
+ * version 2.0 (the "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+package io.netty.handler.codec.http.websocketx;
+
+import io.netty.handler.codec.http.FullHttpRequest;
+import org.junit.Test;
+
+import java.net.URI;
+
+import static org.junit.Assert.assertEquals;
+
+public abstract class WebSocketClientHandshakerTest {
+ protected abstract WebSocketClientHandshaker newHandshaker(URI uri);
+
+ @Test
+ public void testRawPath() {
+ URI uri = URI.create("ws://localhost:9999/path%20with%20ws");
+ WebSocketClientHandshaker handshaker = newHandshaker(uri);
+ FullHttpRequest request = handshaker.newHandshakeRequest();
+ try {
+ assertEquals("/path%20with%20ws", request.getUri());
+ } finally {
+ request.release();
+ }
+ }
+}
| train | train | 2015-12-10T08:58:09 | 2015-11-23T11:39:57Z | rbi | val |
netty/netty/4564_4565 | netty/netty | netty/netty/4564 | netty/netty/4565 | [
"timestamp(timedelta=14.0, similarity=0.8830104167249984)"
] | 6c0fef133baa4afce11953110a6947732d87ec8e | e2f3fb84275bf47fb0296f6d18eb80b59891518f | [
"Fixed by https://github.com/netty/netty/pull/4565\n"
] | [] | 2015-12-12T17:40:25Z | [
"defect"
] | AsciiString.contentEqualsIgnoreCase does not work correctly for AsciiString comparison | Netty Version: master (6c0fef133baa4afce11953110a6947732d87ec8e)
`AsciiString.contentEqualsIgnoreCase` fails when checking for equality between two `AsciiString` objects. For example:
``` java
// returns true, should return false
new AsciiString("foo").contentEqualsIgnoreCase(new AsciiString("bar"));
```
The issue is due to [the following line](https://github.com/netty/netty/blob/master/common/src/main/java/io/netty/util/AsciiString.java#L533)
I think I've tracked this down to the following commit: https://github.com/netty/netty/commit/ba6ce5449ee852b782dde9a11933f6a09b123e22
Pull request to follow
| [
"common/src/main/java/io/netty/util/AsciiString.java"
] | [
"common/src/main/java/io/netty/util/AsciiString.java"
] | [
"common/src/test/java/io/netty/util/AsciiStringCharacterTest.java"
] | diff --git a/common/src/main/java/io/netty/util/AsciiString.java b/common/src/main/java/io/netty/util/AsciiString.java
index ceb5bf0ca59..8baf6851c4a 100644
--- a/common/src/main/java/io/netty/util/AsciiString.java
+++ b/common/src/main/java/io/netty/util/AsciiString.java
@@ -530,7 +530,7 @@ public boolean contentEqualsIgnoreCase(CharSequence string) {
if (string.getClass() == AsciiString.class) {
AsciiString rhs = (AsciiString) string;
for (int i = arrayOffset(), j = rhs.arrayOffset(); i < length(); ++i, ++j) {
- if (!equalsIgnoreCase(value[i], value[j])) {
+ if (!equalsIgnoreCase(value[i], rhs.value[j])) {
return false;
}
}
| diff --git a/common/src/test/java/io/netty/util/AsciiStringCharacterTest.java b/common/src/test/java/io/netty/util/AsciiStringCharacterTest.java
index cbec28140d3..08f2bab2c21 100644
--- a/common/src/test/java/io/netty/util/AsciiStringCharacterTest.java
+++ b/common/src/test/java/io/netty/util/AsciiStringCharacterTest.java
@@ -240,6 +240,16 @@ public void testEqualsIgnoreCase() {
assertThat(AsciiString.contentEqualsIgnoreCase(null, "foo"), is(false));
assertThat(AsciiString.contentEqualsIgnoreCase("bar", null), is(false));
assertThat(AsciiString.contentEqualsIgnoreCase("FoO", "fOo"), is(true));
+
+ // Test variations (Ascii + String, Ascii + Ascii, String + Ascii)
+ assertThat(AsciiString.contentEqualsIgnoreCase(new AsciiString("FoO"), "fOo"), is(true));
+ assertThat(AsciiString.contentEqualsIgnoreCase(new AsciiString("FoO"), new AsciiString("fOo")), is(true));
+ assertThat(AsciiString.contentEqualsIgnoreCase("FoO", new AsciiString("fOo")), is(true));
+
+ // Test variations (Ascii + String, Ascii + Ascii, String + Ascii)
+ assertThat(AsciiString.contentEqualsIgnoreCase(new AsciiString("FoO"), "bAr"), is(false));
+ assertThat(AsciiString.contentEqualsIgnoreCase(new AsciiString("FoO"), new AsciiString("bAr")), is(false));
+ assertThat(AsciiString.contentEqualsIgnoreCase("FoO", new AsciiString("bAr")), is(false));
}
@Test
| train | train | 2015-12-11T19:47:37 | 2015-12-12T17:34:36Z | blucas | val |
netty/netty/4587_4599 | netty/netty | netty/netty/4587 | netty/netty/4599 | [
"timestamp(timedelta=15.0, similarity=0.933704597804567)"
] | f750d6e36c80e88fb302c99b5b7413e5649e6738 | eebe5129c41d5d4b2deedd94cfc5cb0232788b1c | [
"@nmittler - FYI\n",
"@Scottmitch good find! Any thoughts on a fix?\n",
"@nmittler - Yes. See https://github.com/netty/netty/pull/4599\n"
] | [
"Should `streamableBytes` just be `pendingBytes` or something like that. `streamableBytes` is a derived value based on the stream window.\n",
"spelling: `writing`\n",
"what if `windowSize == 0 && pendingBytes == 0`? Seems like we would want to write an empty frame.\n",
"done\n",
"`pendingBytes == 0` shoul... | 2015-12-18T19:01:39Z | [
"defect"
] | HTTP/2 DefaultHttp2RemoteFlowController Stream writability notification broken | `DefaultHttp2RemoteFlowController.ListenerWritabilityMonitor` no longer reliably detects when a stream's writability change occurs. `ListenerWritabilityMonitor` was implemented to avoid duplicating iteration over all streams when possible and instead was relying on the `PriorityStreamByteDistributor` to call `write` for each stream during its iteration process. However the new `StreamByteDistributor` classes do not do an iteration over all active streams and so this assumption is now invalid.
The impact is isolated to stream writability change notifications and has no impact unless you explicitly add a listener, and use an allocator other than `PriorityStreamByteDistributor`.
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/PriorityStreamByteDistributor.java",
"codec-http2/src/main/java/io/netty/handler... | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/PriorityStreamByteDistributor.java",
"codec-http2/src/main/java/io/netty/handler... | [
"codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowControllerTest.java",
"codec-http2/src/test/java/io/netty/handler/codec/http2/PriorityStreamByteDistributorTest.java",
"codec-http2/src/test/java/io/netty/handler/codec/http2/UniformStreamByteDistributorFlowControllerTest.java",
"co... | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java
index c3963c48b4d..a39b10af324 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java
@@ -299,7 +299,7 @@ private final class DefaultState extends AbstractState {
}
@Override
- int windowSize() {
+ public int windowSize() {
return window;
}
@@ -389,11 +389,6 @@ int incrementStreamWindow(int delta) throws Http2Exception {
return window;
}
- @Override
- public int streamableBytes() {
- return max(0, min(pendingBytes, window));
- }
-
/**
* Returns the maximum writable window (minimum of the stream and connection windows).
*/
@@ -402,7 +397,7 @@ private int writableWindow() {
}
@Override
- int pendingBytes() {
+ public int pendingBytes() {
return pendingBytes;
}
@@ -514,7 +509,7 @@ private final class ReducedState extends AbstractState {
}
@Override
- int windowSize() {
+ public int windowSize() {
return 0;
}
@@ -524,12 +519,7 @@ int initialWindowSize() {
}
@Override
- public int streamableBytes() {
- return 0;
- }
-
- @Override
- int pendingBytes() {
+ public int pendingBytes() {
return 0;
}
@@ -604,13 +594,6 @@ final void markWritability(boolean isWritable) {
this.markedWritable = isWritable;
}
- @Override
- public final boolean isWriteAllowed() {
- return windowSize() >= 0;
- }
-
- abstract int windowSize();
-
abstract int initialWindowSize();
/**
@@ -620,11 +603,6 @@ public final boolean isWriteAllowed() {
*/
abstract int writeAllocatedBytes(int allocated);
- /**
- * Get the number of bytes pending to be written.
- */
- abstract int pendingBytes();
-
/**
* Any operations that may be pending are cleared and the status of these operations is failed.
*/
@@ -651,20 +629,11 @@ public final boolean isWriteAllowed() {
*/
private abstract class WritabilityMonitor {
private long totalPendingBytes;
+ private final Writer writer;
- /**
- * Increment all windows by {@code newWindowSize} amount, and write data if streams change from not writable
- * to writable.
- * @param newWindowSize The new window size.
- * @throws Http2Exception If an overflow occurs or an exception on write occurs.
- */
- public abstract void initialWindowSize(int newWindowSize) throws Http2Exception;
-
- /**
- * Attempt to allocate bytes to streams which have frames queued.
- * @throws Http2Exception If a write occurs and an exception happens in the write operation.
- */
- public abstract void writePendingBytes() throws Http2Exception;
+ protected WritabilityMonitor(Writer writer) {
+ this.writer = writer;
+ }
/**
* Called when the writability of the underlying channel changes.
@@ -719,7 +688,7 @@ public final boolean isWritable(AbstractState state) {
return isWritableConnection() && state.windowSize() - state.pendingBytes() > 0;
}
- protected final void writePendingBytes(Writer writer) throws Http2Exception {
+ protected final void writePendingBytes() throws Http2Exception {
int bytesToWrite = writableBytes();
// Make sure we always write at least once, regardless if we have bytesToWrite or not.
@@ -733,7 +702,7 @@ protected final void writePendingBytes(Writer writer) throws Http2Exception {
}
}
- protected final boolean initialWindowSize(int newWindowSize, Writer writer) throws Http2Exception {
+ protected void initialWindowSize(int newWindowSize) throws Http2Exception {
if (newWindowSize < 0) {
throw new IllegalArgumentException("Invalid initial window size: " + newWindowSize);
}
@@ -750,10 +719,8 @@ public boolean visit(Http2Stream stream) throws Http2Exception {
if (delta > 0) {
// The window size increased, send any pending frames for all streams.
- writePendingBytes(writer);
- return false;
+ writePendingBytes();
}
- return true;
}
protected final boolean isWritableConnection() {
@@ -765,21 +732,13 @@ protected final boolean isWritableConnection() {
* Provides no notification or tracking of writablity changes.
*/
private final class DefaultWritabilityMonitor extends WritabilityMonitor {
- private final Writer writer = new StreamByteDistributor.Writer() {
- @Override
- public void write(Http2Stream stream, int numBytes) {
- state(stream).writeAllocatedBytes(numBytes);
- }
- };
-
- @Override
- public void writePendingBytes() throws Http2Exception {
- writePendingBytes(writer);
- }
-
- @Override
- public void initialWindowSize(int newWindowSize) throws Http2Exception {
- initialWindowSize(newWindowSize, writer);
+ DefaultWritabilityMonitor() {
+ super(new StreamByteDistributor.Writer() {
+ @Override
+ public void write(Http2Stream stream, int numBytes) {
+ state(stream).writeAllocatedBytes(numBytes);
+ }
+ });
}
}
@@ -803,32 +762,21 @@ public boolean visit(Http2Stream stream) throws Http2Exception {
return true;
}
};
- private final Writer initialWindowSizeWriter = new StreamByteDistributor.Writer() {
- @Override
- public void write(Http2Stream stream, int numBytes) {
- AbstractState state = state(stream);
- writeAllocatedBytes(state, numBytes);
- if (isWritable(state) != state.markWritability()) {
- notifyWritabilityChanged(state);
- }
- }
- };
- private final Writer writeAllocatedBytesWriter = new StreamByteDistributor.Writer() {
- @Override
- public void write(Http2Stream stream, int numBytes) {
- writeAllocatedBytes(state(stream), numBytes);
- }
- };
- ListenerWritabilityMonitor(Listener listener) {
+ ListenerWritabilityMonitor(final Listener listener) {
+ super(new StreamByteDistributor.Writer() {
+ @Override
+ public void write(Http2Stream stream, int numBytes) {
+ AbstractState state = state(stream);
+ int written = state.writeAllocatedBytes(numBytes);
+ if (written != -1) {
+ listener.streamWritten(state.stream(), written);
+ }
+ }
+ });
this.listener = listener;
}
- @Override
- public void writePendingBytes() throws Http2Exception {
- writePendingBytes(writeAllocatedBytesWriter);
- }
-
@Override
public void incrementWindowSize(AbstractState state, int delta) throws Http2Exception {
super.incrementWindowSize(state, delta);
@@ -842,13 +790,12 @@ public void incrementWindowSize(AbstractState state, int delta) throws Http2Exce
}
@Override
- public void initialWindowSize(int newWindowSize) throws Http2Exception {
- if (initialWindowSize(newWindowSize, initialWindowSizeWriter)) {
- if (isWritableConnection()) {
- // If the write operation does not occur we still need to check all streams because they
- // may have transitioned from writable to not writable.
- checkAllWritabilityChanged();
- }
+ protected void initialWindowSize(int newWindowSize) throws Http2Exception {
+ super.initialWindowSize(newWindowSize);
+ if (isWritableConnection()) {
+ // If the write operation does not occur we still need to check all streams because they
+ // may have transitioned from writable to not writable.
+ checkAllWritabilityChanged();
}
}
@@ -897,12 +844,5 @@ private void checkAllWritabilityChanged() throws Http2Exception {
connectionState.markWritability(isWritableConnection());
connection.forEachActiveStream(checkStreamWritabilityVisitor);
}
-
- private void writeAllocatedBytes(AbstractState state, int numBytes) {
- int written = state.writeAllocatedBytes(numBytes);
- if (written != -1) {
- listener.streamWritten(state.stream(), written);
- }
- }
}
}
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java
index c8d9877ab85..90051fc1b78 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java
@@ -30,6 +30,8 @@
import static io.netty.buffer.Unpooled.unmodifiableBuffer;
import static io.netty.buffer.Unpooled.unreleasableBuffer;
import static io.netty.util.CharsetUtil.UTF_8;
+import static java.lang.Math.max;
+import static java.lang.Math.min;
/**
* Constants and utility method used for encoding/decoding HTTP2 frames.
@@ -189,6 +191,13 @@ public static void writeFrameHeader(ByteBuf out, int payloadLength, byte type,
writeFrameHeaderInternal(out, payloadLength, type, flags, streamId);
}
+ /**
+ * Calculate the amount of bytes that can be sent by {@code state}. The lower bound is {@code 0}.
+ */
+ public static int streamableBytes(StreamByteDistributor.StreamState state) {
+ return max(0, min(state.pendingBytes(), state.windowSize()));
+ }
+
static void writeFrameHeaderInternal(ByteBuf out, int payloadLength, byte type,
Http2Flags flags, int streamId) {
out.writeMedium(payloadLength);
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/PriorityStreamByteDistributor.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/PriorityStreamByteDistributor.java
index 04ab3e73dd9..365cff7f73e 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/PriorityStreamByteDistributor.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/PriorityStreamByteDistributor.java
@@ -17,6 +17,7 @@
import java.util.Arrays;
+import static io.netty.handler.codec.http2.Http2CodecUtil.streamableBytes;
import static io.netty.handler.codec.http2.Http2Error.INTERNAL_ERROR;
import static io.netty.handler.codec.http2.Http2Exception.connectionError;
import static io.netty.util.internal.ObjectUtil.checkNotNull;
@@ -78,7 +79,7 @@ public void onPriorityTreeParentChanging(Http2Stream stream, Http2Stream newPare
@Override
public void updateStreamableBytes(StreamState streamState) {
- state(streamState.stream()).updateStreamableBytes(streamState.streamableBytes(),
+ state(streamState.stream()).updateStreamableBytes(streamableBytes(streamState),
streamState.hasFrame());
}
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/StreamByteDistributor.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/StreamByteDistributor.java
index 844e941f2df..9659975d830 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/StreamByteDistributor.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/StreamByteDistributor.java
@@ -32,13 +32,12 @@ interface StreamState {
Http2Stream stream();
/**
- * Returns the number of pending bytes for this node that will fit within the stream flow
- * control window. This is used for the priority algorithm to determine the aggregate number
- * of bytes that can be written at each node. Each node only takes into account its stream
- * window so that when a change occurs to the connection window, these values need not
- * change (i.e. no tree traversal is required).
+ * Get the amount of bytes this stream has pending to send. The actual amount written must not exceed
+ * {@link #windowSize()}!
+ * @return The amount of bytes this stream has pending to send.
+ * @see {@link #io.netty.handler.codec.http2.Http2CodecUtil.streamableBytes(StreamState)}
*/
- int streamableBytes();
+ int pendingBytes();
/**
* Indicates whether or not there are frames pending for this stream.
@@ -46,11 +45,15 @@ interface StreamState {
boolean hasFrame();
/**
- * Determine if a write operation is allowed for this stream. This will typically take into account the
- * stream's flow controller being non-negative.
- * @return {@code true} if a write is allowed on this stream. {@code false} otherwise.
+ * The size (in bytes) of the stream's flow control window. The amount written must not exceed this amount!
+ * <p>A {@link StreamByteDistributor} needs to know the stream's window size in order to avoid allocating bytes
+ * if the window size is negative. The window size being {@code 0} may also be significant to determine when if
+ * an stream has been given a chance to write an empty frame, and also enables optimizations like not writing
+ * empty frames in some situations (don't write headers until data can also be written).
+ * @return the size of the stream's flow control window.
+ * @see {@link #io.netty.handler.codec.http2.Http2CodecUtil.streamableBytes(StreamState)}
*/
- boolean isWriteAllowed();
+ int windowSize();
}
/**
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/UniformStreamByteDistributor.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/UniformStreamByteDistributor.java
index 5ffd8171149..9b3cd2daded 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/UniformStreamByteDistributor.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/UniformStreamByteDistributor.java
@@ -14,15 +14,16 @@
*/
package io.netty.handler.codec.http2;
+import java.util.ArrayDeque;
+import java.util.Deque;
+
+import static io.netty.handler.codec.http2.Http2CodecUtil.streamableBytes;
import static io.netty.handler.codec.http2.Http2Error.INTERNAL_ERROR;
import static io.netty.handler.codec.http2.Http2Exception.connectionError;
import static io.netty.util.internal.ObjectUtil.checkNotNull;
import static java.lang.Math.max;
import static java.lang.Math.min;
-import java.util.ArrayDeque;
-import java.util.Deque;
-
/**
* A {@link StreamByteDistributor} that ignores stream priority and uniformly allocates bytes to all
* streams. This class uses a minimum chunk size that will be allocated to each stream. While
@@ -77,8 +78,9 @@ public void minAllocationChunk(int minAllocationChunk) {
@Override
public void updateStreamableBytes(StreamState streamState) {
- State state = state(streamState.stream());
- state.updateStreamableBytes(streamState.streamableBytes(), streamState.hasFrame());
+ state(streamState.stream()).updateStreamableBytes(streamableBytes(streamState),
+ streamState.hasFrame(),
+ streamState.windowSize());
}
@Override
@@ -119,6 +121,13 @@ private State state(Http2Stream stream) {
return checkNotNull(stream, "stream").getProperty(stateKey);
}
+ /**
+ * For testing only!
+ */
+ int streamableBytes0(Http2Stream stream) {
+ return state(stream).streamableBytes;
+ }
+
/**
* The remote flow control state for a single stream.
*/
@@ -126,12 +135,13 @@ private final class State {
final Http2Stream stream;
int streamableBytes;
boolean enqueued;
+ boolean writing;
State(Http2Stream stream) {
this.stream = stream;
}
- void updateStreamableBytes(int newStreamableBytes, boolean hasFrame) {
+ void updateStreamableBytes(int newStreamableBytes, boolean hasFrame, int windowSize) {
assert hasFrame || newStreamableBytes == 0;
int delta = newStreamableBytes - streamableBytes;
@@ -139,7 +149,11 @@ void updateStreamableBytes(int newStreamableBytes, boolean hasFrame) {
streamableBytes = newStreamableBytes;
totalStreamableBytes += delta;
}
- if (hasFrame) {
+ // We should queue this state if there is a frame. We don't want to queue this frame if the window
+ // size is <= 0 and we are writing this state. The rational being we already gave this state the chance to
+ // write, and if there were empty frames the expectation is they would have been sent. At this point there
+ // must be a call to updateStreamableBytes for this state to be able to write again.
+ if (hasFrame && (!writing || windowSize > 0)) {
// It's not in the queue but has data to send, add it.
addToQueue();
}
@@ -150,15 +164,14 @@ void updateStreamableBytes(int newStreamableBytes, boolean hasFrame) {
* assuming all of the bytes will be written.
*/
void write(int numBytes, Writer writer) throws Http2Exception {
- // Update the streamable bytes, assuming that all the bytes will be written.
- int newStreamableBytes = streamableBytes - numBytes;
- updateStreamableBytes(newStreamableBytes, newStreamableBytes > 0);
-
+ writing = true;
try {
// Write the allocated bytes.
writer.write(stream, numBytes);
} catch (Throwable t) {
throw connectionError(INTERNAL_ERROR, t, "byte distribution write error");
+ } finally {
+ writing = false;
}
}
@@ -181,7 +194,7 @@ void close() {
removeFromQueue();
// Clear the streamable bytes.
- updateStreamableBytes(0, false);
+ updateStreamableBytes(0, false, 0);
}
}
}
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/WeightedFairQueueByteDistributor.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/WeightedFairQueueByteDistributor.java
index 06267334f31..ee20b1793b9 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/WeightedFairQueueByteDistributor.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/WeightedFairQueueByteDistributor.java
@@ -21,6 +21,7 @@
import java.util.Queue;
import static io.netty.handler.codec.http2.Http2CodecUtil.CONNECTION_STREAM_ID;
+import static io.netty.handler.codec.http2.Http2CodecUtil.streamableBytes;
import static io.netty.handler.codec.http2.Http2Error.INTERNAL_ERROR;
import static io.netty.handler.codec.http2.Http2Exception.connectionError;
import static io.netty.util.internal.ObjectUtil.checkNotNull;
@@ -104,8 +105,8 @@ public void onPriorityTreeParentChanging(Http2Stream stream, Http2Stream newPare
@Override
public void updateStreamableBytes(StreamState state) {
- state(state.stream()).updateStreamableBytes(state.streamableBytes(),
- state.hasFrame() && state.isWriteAllowed());
+ state(state.stream()).updateStreamableBytes(streamableBytes(state),
+ state.hasFrame() && state.windowSize() >= 0);
}
@Override
@@ -204,7 +205,7 @@ private State state(Http2Stream stream) {
/**
* For testing only!
*/
- int streamableBytes(Http2Stream stream) {
+ int streamableBytes0(Http2Stream stream) {
return state(stream).streamableBytes;
}
| diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowControllerTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowControllerTest.java
index 2d93c63ccb0..c98dfd0b7d1 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowControllerTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowControllerTest.java
@@ -58,7 +58,7 @@
/**
* Tests for {@link DefaultHttp2RemoteFlowController}.
*/
-public class DefaultHttp2RemoteFlowControllerTest {
+public abstract class DefaultHttp2RemoteFlowControllerTest {
private static final int STREAM_A = 1;
private static final int STREAM_B = 3;
private static final int STREAM_C = 5;
@@ -114,9 +114,11 @@ public void setup() throws Http2Exception {
reset(listener);
}
+ protected abstract StreamByteDistributor newDistributor(Http2Connection connection);
+
private void initConnectionAndController() throws Http2Exception {
connection = new DefaultHttp2Connection(false);
- controller = new DefaultHttp2RemoteFlowController(connection, listener);
+ controller = new DefaultHttp2RemoteFlowController(connection, newDistributor(connection), listener);
connection.remote().flowController(controller);
connection.local().createStream(STREAM_A, false);
@@ -926,7 +928,6 @@ private static Http2RemoteFlowController.FlowControlled mockedFlowControlledThat
mock(Http2RemoteFlowController.FlowControlled.class);
when(flowControlled.size()).thenReturn(100);
doAnswer(new Answer<Void>() {
- private int invocationCount;
@Override
public Void answer(InvocationOnMock in) throws Throwable {
// Write most of the bytes and then fail
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/PriorityStreamByteDistributorTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/PriorityStreamByteDistributorTest.java
index 2a804cf09bb..5178686de95 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/PriorityStreamByteDistributorTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/PriorityStreamByteDistributorTest.java
@@ -639,7 +639,7 @@ private Http2Stream stream(int streamId) {
return connection.stream(streamId);
}
- private void updateStream(final int streamId, final int streamableBytes, final boolean hasFrame) {
+ private void updateStream(final int streamId, final int pendingBytes, final boolean hasFrame) {
final Http2Stream stream = stream(streamId);
distributor.updateStreamableBytes(new StreamByteDistributor.StreamState() {
@Override
@@ -648,8 +648,8 @@ public Http2Stream stream() {
}
@Override
- public int streamableBytes() {
- return streamableBytes;
+ public int pendingBytes() {
+ return pendingBytes;
}
@Override
@@ -658,8 +658,8 @@ public boolean hasFrame() {
}
@Override
- public boolean isWriteAllowed() {
- return hasFrame;
+ public int windowSize() {
+ return pendingBytes;
}
});
}
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/UniformStreamByteDistributorFlowControllerTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/UniformStreamByteDistributorFlowControllerTest.java
new file mode 100644
index 00000000000..59ff2bb9b27
--- /dev/null
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/UniformStreamByteDistributorFlowControllerTest.java
@@ -0,0 +1,22 @@
+/*
+ * Copyright 2015 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License, version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package io.netty.handler.codec.http2;
+
+public class UniformStreamByteDistributorFlowControllerTest extends DefaultHttp2RemoteFlowControllerTest {
+ @Override
+ protected StreamByteDistributor newDistributor(Http2Connection connection) {
+ return new UniformStreamByteDistributor(connection);
+ }
+}
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/UniformStreamByteDistributorTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/UniformStreamByteDistributorTest.java
index c68a09f8fef..98ae689627d 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/UniformStreamByteDistributorTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/UniformStreamByteDistributorTest.java
@@ -21,9 +21,12 @@
import static org.junit.Assert.assertSame;
import static org.junit.Assert.assertTrue;
import static org.junit.Assert.fail;
+import static org.mockito.Matchers.any;
+import static org.mockito.Matchers.anyInt;
import static org.mockito.Matchers.eq;
import static org.mockito.Matchers.same;
import static org.mockito.Mockito.atMost;
+import static org.mockito.Mockito.doAnswer;
import static org.mockito.Mockito.doNothing;
import static org.mockito.Mockito.doThrow;
import static org.mockito.Mockito.reset;
@@ -35,6 +38,8 @@
import org.mockito.ArgumentCaptor;
import org.mockito.Mock;
import org.mockito.MockitoAnnotations;
+import org.mockito.invocation.InvocationOnMock;
+import org.mockito.stubbing.Answer;
import org.mockito.verification.VerificationMode;
/**
@@ -61,6 +66,9 @@ public void setup() throws Http2Exception {
connection = new DefaultHttp2Connection(false);
distributor = new UniformStreamByteDistributor(connection);
+ // Assume we always write all the allocated bytes.
+ resetWriter();
+
connection.local().createStream(STREAM_A, false);
connection.local().createStream(STREAM_B, false);
Http2Stream streamC = connection.local().createStream(STREAM_C, false);
@@ -69,6 +77,24 @@ public void setup() throws Http2Exception {
streamD.setPriority(STREAM_A, DEFAULT_PRIORITY_WEIGHT, false);
}
+ private Answer<Void> writeAnswer() {
+ return new Answer<Void>() {
+ @Override
+ public Void answer(InvocationOnMock in) throws Throwable {
+ Http2Stream stream = in.getArgumentAt(0, Http2Stream.class);
+ int numBytes = in.getArgumentAt(1, Integer.class);
+ int streamableBytes = distributor.streamableBytes0(stream) - numBytes;
+ updateStream(stream.id(), streamableBytes, streamableBytes > 0);
+ return null;
+ }
+ };
+ }
+
+ private void resetWriter() {
+ reset(writer);
+ doAnswer(writeAnswer()).when(writer).write(any(Http2Stream.class), anyInt());
+ }
+
@Test
public void bytesUnassignedAfterProcessing() throws Http2Exception {
updateStream(STREAM_A, 1, true);
@@ -145,7 +171,7 @@ public void minChunkShouldBeAllocatedPerStream() throws Http2Exception {
assertEquals(CHUNK_SIZE, captureWrite(STREAM_C));
verifyNoMoreInteractions(writer);
- reset(writer);
+ resetWriter();
// Now write again and verify that the last stream is written to.
assertFalse(write(CHUNK_SIZE));
@@ -163,7 +189,7 @@ public void streamWithMoreDataShouldBeEnqueuedAfterWrite() throws Http2Exception
assertEquals(CHUNK_SIZE, captureWrite(STREAM_A));
verifyNoMoreInteractions(writer);
- reset(writer);
+ resetWriter();
// Now write the rest of the data.
assertFalse(write(CHUNK_SIZE));
@@ -193,7 +219,7 @@ private void updateStream(final int streamId, final int streamableBytes, final b
updateStream(streamId, streamableBytes, hasFrame, hasFrame);
}
- private void updateStream(final int streamId, final int streamableBytes, final boolean hasFrame,
+ private void updateStream(final int streamId, final int pendingBytes, final boolean hasFrame,
final boolean isWriteAllowed) {
final Http2Stream stream = stream(streamId);
distributor.updateStreamableBytes(new StreamByteDistributor.StreamState() {
@@ -203,8 +229,8 @@ public Http2Stream stream() {
}
@Override
- public int streamableBytes() {
- return streamableBytes;
+ public int pendingBytes() {
+ return pendingBytes;
}
@Override
@@ -213,8 +239,8 @@ public boolean hasFrame() {
}
@Override
- public boolean isWriteAllowed() {
- return isWriteAllowed;
+ public int windowSize() {
+ return isWriteAllowed ? pendingBytes : -1;
}
});
}
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/WeightedFairQueueByteDistributorTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/WeightedFairQueueByteDistributorTest.java
index b876c99540c..9c4055e0c2a 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/WeightedFairQueueByteDistributorTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/WeightedFairQueueByteDistributorTest.java
@@ -82,7 +82,7 @@ private Answer<Void> writeAnswer() {
public Void answer(InvocationOnMock in) throws Throwable {
Http2Stream stream = in.getArgumentAt(0, Http2Stream.class);
int numBytes = in.getArgumentAt(1, Integer.class);
- int streamableBytes = distributor.streamableBytes(stream) - numBytes;
+ int streamableBytes = distributor.streamableBytes0(stream) - numBytes;
updateStream(stream.id(), streamableBytes, streamableBytes > 0);
return null;
}
@@ -913,7 +913,7 @@ private void updateStream(final int streamId, final int streamableBytes, final b
updateStream(streamId, streamableBytes, hasFrame, hasFrame);
}
- private void updateStream(final int streamId, final int streamableBytes, final boolean hasFrame,
+ private void updateStream(final int streamId, final int pendingBytes, final boolean hasFrame,
final boolean isWriteAllowed) {
final Http2Stream stream = stream(streamId);
distributor.updateStreamableBytes(new StreamByteDistributor.StreamState() {
@@ -923,8 +923,8 @@ public Http2Stream stream() {
}
@Override
- public int streamableBytes() {
- return streamableBytes;
+ public int pendingBytes() {
+ return pendingBytes;
}
@Override
@@ -933,8 +933,8 @@ public boolean hasFrame() {
}
@Override
- public boolean isWriteAllowed() {
- return isWriteAllowed;
+ public int windowSize() {
+ return isWriteAllowed ? pendingBytes : -1;
}
});
}
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/WeightedFairQueueRemoteFlowControllerTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/WeightedFairQueueRemoteFlowControllerTest.java
new file mode 100644
index 00000000000..69a89dc7635
--- /dev/null
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/WeightedFairQueueRemoteFlowControllerTest.java
@@ -0,0 +1,22 @@
+/*
+ * Copyright 2015 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License, version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package io.netty.handler.codec.http2;
+
+public class WeightedFairQueueRemoteFlowControllerTest extends DefaultHttp2RemoteFlowControllerTest {
+ @Override
+ protected StreamByteDistributor newDistributor(Http2Connection connection) {
+ return new WeightedFairQueueByteDistributor(connection);
+ }
+}
| train | train | 2015-12-18T22:51:52 | 2015-12-17T21:40:52Z | Scottmitch | val |
netty/netty/4545_4601 | netty/netty | netty/netty/4545 | netty/netty/4601 | [
"timestamp(timedelta=15.0, similarity=0.9236626301501217)"
] | 7b2f55ec2fa83873c4eeacb7fbccaecfedece63e | df63625877189674ac0135ef9710df245eecfd94 | [
"@nmittler - FYI\n",
"@Scottmitch good catch! ... do you want to throw together a PR with the updated test?\n",
"@nmittler - Yes. I can handle this. However the new interface required to fix this is introduced and used in PR #4538. Would you mind if I fixed after this issue after PR #4538 is resolved?\n",
"@S... | [
"Maybe instead of using another variable `writeAllowed`, it might be more clear if we just use `windowSize` directly:\n\n``` java\nif (state.windowSize <= 0) {\n continue;\n}\n```\n",
"@nmittler - We could do this, but boolean should be less memory, and this would be the only reason why we need `windowSize`. ... | 2015-12-18T19:15:53Z | [
"defect"
] | HTTP/2 UniformStreamByteDistributor stream window ignored | UniformStreamByteDistributor will write to a stream even if the stream's flow control window is negative. This is not allowed by the RFC.
https://tools.ietf.org/html/rfc7540#section-6.9.2
> A change to SETTINGS_INITIAL_WINDOW_SIZE can cause the available
> space in a flow-control window to become negative. A sender MUST
> track the negative flow-control window and MUST NOT send new flow-
> controlled frames until it receives WINDOW_UPDATE frames that cause
> the flow-control window to become positive.
I guess the spec also means that you should not send new flow control frames until the flow-control window becomes non-negative? If the flow-control window is 0 and we have empty frames I don't think it should be a problem to send these.
While working on PR https://github.com/netty/netty/pull/4538 I developed the following test case to demonstrate the issue:
``` diff
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/UniformStreamByteDistributorTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/UniformStreamByteDistributorTest.java
index 44010e4..c52b678 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/UniformStreamByteDistributorTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/UniformStreamByteDistributorTest.java
@@ -185,11 +185,29 @@ public class UniformStreamByteDistributorTest {
verifyNoMoreInteractions(writer);
}
+ @Test
+ public void streamWindowExhaustedDoesNotWrite() throws Http2Exception {
+ updateStream(STREAM_A, 0, true, false);
+ updateStream(STREAM_B, 0, true);
+ updateStream(STREAM_C, 0, true);
+ updateStream(STREAM_D, 0, true, false);
+
+ assertFalse(write(10));
+ verifyWrite(STREAM_B, 0);
+ verifyWrite(STREAM_C, 0);
+ verifyNoMoreInteractions(writer);
+ }
+
private Http2Stream stream(int streamId) {
return connection.stream(streamId);
}
private void updateStream(final int streamId, final int streamableBytes, final boolean hasFrame) {
+ updateStream(streamId, streamableBytes, hasFrame, hasFrame);
+ }
+
+ private void updateStream(final int streamId, final int streamableBytes, final boolean hasFrame,
+ final boolean isWriteAllowed) {
final Http2Stream stream = stream(streamId);
distributor.updateStreamableBytes(new StreamByteDistributor.StreamState() {
@Override
@@ -206,6 +224,11 @@ public class UniformStreamByteDistributorTest {
public boolean hasFrame() {
return hasFrame;
}
+
+ @Override
+ public boolean isWriteAllowed() {
+ return isWriteAllowed;
+ }
});
}
```
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/UniformStreamByteDistributor.java"
] | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/UniformStreamByteDistributor.java"
] | [
"codec-http2/src/test/java/io/netty/handler/codec/http2/UniformStreamByteDistributorTest.java"
] | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/UniformStreamByteDistributor.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/UniformStreamByteDistributor.java
index 9b3cd2daded..565ddfaa260 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/UniformStreamByteDistributor.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/UniformStreamByteDistributor.java
@@ -97,10 +97,13 @@ public boolean distribute(int maxBytes, Writer writer) throws Http2Exception {
State state = queue.pollFirst();
do {
state.enqueued = false;
- if (state.streamableBytes > 0 && maxBytes == 0) {
- // Stop at the first state that can't send. Add this state back to the head of
- // the queue. Note that empty frames at the head of the queue will always be
- // written.
+ if (state.windowNegative) {
+ continue;
+ }
+ if (maxBytes == 0 && state.streamableBytes > 0) {
+ // Stop at the first state that can't send. Add this state back to the head of the queue. Note
+ // that empty frames at the head of the queue will always be written, assuming the stream window
+ // is not negative.
queue.addFirst(state);
state.enqueued = true;
break;
@@ -134,6 +137,7 @@ int streamableBytes0(Http2Stream stream) {
private final class State {
final Http2Stream stream;
int streamableBytes;
+ boolean windowNegative;
boolean enqueued;
boolean writing;
@@ -149,12 +153,15 @@ void updateStreamableBytes(int newStreamableBytes, boolean hasFrame, int windowS
streamableBytes = newStreamableBytes;
totalStreamableBytes += delta;
}
- // We should queue this state if there is a frame. We don't want to queue this frame if the window
- // size is <= 0 and we are writing this state. The rational being we already gave this state the chance to
- // write, and if there were empty frames the expectation is they would have been sent. At this point there
- // must be a call to updateStreamableBytes for this state to be able to write again.
- if (hasFrame && (!writing || windowSize > 0)) {
- // It's not in the queue but has data to send, add it.
+ // In addition to only enqueuing state when they have frames we enforce the following restrictions:
+ // 1. If the window has gone negative. We never want to queue a state. However we also don't want to
+ // Immediately remove the item if it is already queued because removal from dequeue is O(n). So
+ // we allow it to stay queued and rely on the distribution loop to remove this state.
+ // 2. If the window is zero we only want to queue if we are not writing. If we are writing that means
+ // we gave the state a chance to write zero length frames. We wait until updateStreamableBytes is
+ // called again before this state is allowed to write.
+ windowNegative = windowSize < 0;
+ if (hasFrame && (windowSize > 0 || (windowSize == 0 && !writing))) {
addToQueue();
}
}
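The patch above deliberately leaves a negative-window state enqueued and filters it out during distribution rather than removing it eagerly, because removal from an `ArrayDeque` is O(n). That lazy-removal pattern can be sketched in isolation — the `State` class below is a hypothetical stand-in, not Netty's:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of the "lazy removal" pattern: instead of removing a state from the
// deque when its window goes negative (O(n) on ArrayDeque), mark it and let
// the distribution loop discard it in O(1) per element as it is polled.
public class LazyRemovalQueue {
    static final class State {
        final String name;
        final boolean windowNegative;
        State(String name, boolean windowNegative) {
            this.name = name;
            this.windowNegative = windowNegative;
        }
    }

    /** Polls states, silently dropping any whose window went negative. */
    static State pollWritable(Deque<State> queue) {
        State s;
        while ((s = queue.pollFirst()) != null) {
            if (!s.windowNegative) {
                return s; // first state that is still allowed to write
            }
            // negative-window states were left enqueued on purpose; drop now
        }
        return null;
    }

    public static void main(String[] args) {
        Deque<State> q = new ArrayDeque<>();
        q.add(new State("A", true));
        q.add(new State("B", false));
        System.out.println(pollWritable(q).name); // B
    }
}
```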
| diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/UniformStreamByteDistributorTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/UniformStreamByteDistributorTest.java
index 98ae689627d..fc358468b09 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/UniformStreamByteDistributorTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/UniformStreamByteDistributorTest.java
@@ -211,6 +211,19 @@ public void emptyFrameAtHeadIsWritten() throws Http2Exception {
verifyNoMoreInteractions(writer);
}
+ @Test
+ public void streamWindowExhaustedDoesNotWrite() throws Http2Exception {
+ updateStream(STREAM_A, 0, true, false);
+ updateStream(STREAM_B, 0, true);
+ updateStream(STREAM_C, 0, true);
+ updateStream(STREAM_D, 0, true, false);
+
+ assertFalse(write(10));
+ verifyWrite(STREAM_B, 0);
+ verifyWrite(STREAM_C, 0);
+ verifyNoMoreInteractions(writer);
+ }
+
private Http2Stream stream(int streamId) {
return connection.stream(streamId);
}
| train | train | 2015-12-23T01:59:40 | 2015-12-09T02:25:07Z | Scottmitch | val |
netty/netty/4600_4618 | netty/netty | netty/netty/4600 | netty/netty/4618 | [
"timestamp(timedelta=4.0, similarity=0.9201898721503884)"
] | f22ad97cf30c63c0d93d7c40541da10a4de66f5f | cd5093db358a07bd8178928fe23b4eac1b1df474 | [
"@louiscryan @nmittler @ejona86 - I checked in gRPC and it looks like this is still not being used? Is the plan still to use this or eventually consume the stream writability changes?\n",
"So far it has not proven critical to performance and there are other perf improvements I would prioritize in GRPC above the u... | [] | 2015-12-22T19:25:14Z | [
"cleanup"
] | HTTP/2 RemoteFlowController$Listener.streamWritten | We should evaluate whether [Http2RemoteFlowController$Listener.streamWritten](https://github.com/netty/netty/blob/4.1/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2RemoteFlowController.java#L151) is necessary. If not we should remove it.
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2RemoteFlowController.java"
] | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2RemoteFlowController.java"
] | [
"codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowControllerTest.java"
] | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java
index 2db3e708652..c99f86d6278 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java
@@ -205,7 +205,7 @@ private boolean isChannelWritable0() {
@Override
public void listener(Listener listener) {
- monitor = listener == null ? new DefaultWritabilityMonitor() : new ListenerWritabilityMonitor(listener);
+ monitor = listener == null ? new WritabilityMonitor() : new ListenerWritabilityMonitor(listener);
}
@Override
@@ -627,13 +627,14 @@ final void markWritability(boolean isWritable) {
/**
* Abstract class which provides common functionality for writability monitor implementations.
*/
- private abstract class WritabilityMonitor {
+ private class WritabilityMonitor {
private long totalPendingBytes;
- private final Writer writer;
-
- protected WritabilityMonitor(Writer writer) {
- this.writer = writer;
- }
+ private final Writer writer = new StreamByteDistributor.Writer() {
+ @Override
+ public void write(Http2Stream stream, int numBytes) {
+ state(stream).writeAllocatedBytes(numBytes);
+ }
+ };
/**
* Called when the writability of the underlying channel changes.
@@ -728,20 +729,6 @@ protected final boolean isWritableConnection() {
}
}
- /**
- * Provides no notification or tracking of writablity changes.
- */
- private final class DefaultWritabilityMonitor extends WritabilityMonitor {
- DefaultWritabilityMonitor() {
- super(new StreamByteDistributor.Writer() {
- @Override
- public void write(Http2Stream stream, int numBytes) {
- state(stream).writeAllocatedBytes(numBytes);
- }
- });
- }
- }
-
/**
* Writability of a {@code stream} is calculated using the following:
* <pre>
@@ -763,17 +750,7 @@ public boolean visit(Http2Stream stream) throws Http2Exception {
}
};
- ListenerWritabilityMonitor(final Listener listener) {
- super(new StreamByteDistributor.Writer() {
- @Override
- public void write(Http2Stream stream, int numBytes) {
- AbstractState state = state(stream);
- int written = state.writeAllocatedBytes(numBytes);
- if (written != -1) {
- listener.streamWritten(state.stream(), written);
- }
- }
- });
+ ListenerWritabilityMonitor(Listener listener) {
this.listener = listener;
}
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2RemoteFlowController.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2RemoteFlowController.java
index 279129c3632..83fe96da842 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2RemoteFlowController.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2RemoteFlowController.java
@@ -138,18 +138,6 @@ interface FlowControlled {
* Listener to the number of flow-controlled bytes written per stream.
*/
interface Listener {
-
- /**
- * Report the number of {@code writtenBytes} for a {@code stream}. Called after the
- * flow-controller has flushed bytes for the given stream.
- * <p>
- * This method should not throw. Any thrown exceptions are considered a programming error and are ignored.
- * @param stream that had bytes written.
- * @param writtenBytes the number of bytes written for a stream, can be 0 in the case of an
- * empty DATA frame.
- */
- void streamWritten(Http2Stream stream, int writtenBytes);
-
/**
* Notification that {@link Http2RemoteFlowController#isWritable(Http2Stream)} has changed for {@code stream}.
* <p>
| diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowControllerTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowControllerTest.java
index c98dfd0b7d1..58742311dd2 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowControllerTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowControllerTest.java
@@ -170,7 +170,6 @@ public void payloadSmallerThanWindowShouldBeWrittenImmediately() throws Http2Exc
verifyZeroInteractions(listener);
controller.writePendingBytes();
data.assertFullyWritten();
- verify(listener, times(1)).streamWritten(stream(STREAM_A), 5);
verifyZeroInteractions(listener);
}
@@ -181,7 +180,6 @@ public void emptyPayloadShouldBeWrittenImmediately() throws Http2Exception {
data.assertNotWritten();
controller.writePendingBytes();
data.assertFullyWritten();
- verify(listener, times(1)).streamWritten(stream(STREAM_A), 0);
verifyZeroInteractions(listener);
}
@@ -210,7 +208,6 @@ public void payloadsShouldMerge() throws Http2Exception {
controller.writePendingBytes();
data1.assertFullyWritten();
data2.assertNotWritten();
- verify(listener, times(1)).streamWritten(stream(STREAM_A), 15);
verify(listener, times(1)).writabilityChanged(stream(STREAM_A));
assertFalse(controller.isWritable(stream(STREAM_A)));
}
@@ -267,7 +264,6 @@ public void payloadLargerThanWindowShouldWritePartial() throws Http2Exception {
controller.writePendingBytes();
// Verify that a partial frame of 5 remains to be sent
data.assertPartiallyWritten(5);
- verify(listener, times(1)).streamWritten(stream(STREAM_A), 5);
verify(listener, times(1)).writabilityChanged(stream(STREAM_A));
assertFalse(controller.isWritable(stream(STREAM_A)));
verifyNoMoreInteractions(listener);
@@ -286,7 +282,6 @@ public void windowUpdateAndFlushShouldTriggerWrite() throws Http2Exception {
controller.writePendingBytes();
data.assertPartiallyWritten(10);
moreData.assertNotWritten();
- verify(listener, times(1)).streamWritten(stream(STREAM_A), 10);
verify(listener, times(1)).writabilityChanged(stream(STREAM_A));
assertFalse(controller.isWritable(stream(STREAM_A)));
reset(listener);
@@ -302,7 +297,6 @@ public void windowUpdateAndFlushShouldTriggerWrite() throws Http2Exception {
data.assertFullyWritten();
moreData.assertPartiallyWritten(5);
- verify(listener, times(1)).streamWritten(stream(STREAM_A), 15);
verify(listener, never()).writabilityChanged(stream(STREAM_A));
assertFalse(controller.isWritable(stream(STREAM_A)));
@@ -331,7 +325,6 @@ public void initialWindowUpdateShouldSendPayload() throws Http2Exception {
// Verify that the entire frame was sent.
controller.initialWindowSize(10);
data.assertFullyWritten();
- verify(listener, times(1)).streamWritten(stream(STREAM_A), 10);
assertWritabilityChanged(0, false);
}
@@ -356,7 +349,6 @@ public void successiveSendsShouldNotInteract() throws Http2Exception {
dataA.assertPartiallyWritten(8);
assertEquals(65527, window(STREAM_A));
assertEquals(0, window(CONNECTION_STREAM_ID));
- verify(listener, times(1)).streamWritten(stream(STREAM_A), 8);
assertWritabilityChanged(0, false);
reset(listener);
@@ -376,11 +368,9 @@ public void successiveSendsShouldNotInteract() throws Http2Exception {
// Verify the rest of A is written.
dataA.assertFullyWritten();
assertEquals(65525, window(STREAM_A));
- verify(listener, times(1)).streamWritten(stream(STREAM_A), 2);
dataB.assertFullyWritten();
assertEquals(65525, window(STREAM_B));
- verify(listener, times(1)).streamWritten(stream(STREAM_B), 10);
verifyNoMoreInteractions(listener);
}
@@ -399,7 +389,6 @@ public void negativeWindowShouldNotThrowException() throws Http2Exception {
sendData(STREAM_A, data1);
controller.writePendingBytes();
data1.assertFullyWritten();
- verify(listener, times(1)).streamWritten(stream(STREAM_A), 20);
assertTrue(window(CONNECTION_STREAM_ID) > 0);
verify(listener, times(1)).writabilityChanged(stream(STREAM_A));
verify(listener, never()).writabilityChanged(stream(STREAM_B));
@@ -513,8 +502,6 @@ public void initialWindowUpdateShouldSendEmptyFrame() throws Http2Exception {
data.assertFullyWritten();
data2.assertFullyWritten();
-
- verify(listener, times(1)).streamWritten(stream(STREAM_A), 10);
}
@Test
@@ -540,7 +527,6 @@ public void initialWindowUpdateShouldSendPartialFrame() throws Http2Exception {
assertTrue(controller.isWritable(stream(STREAM_D)));
data.assertPartiallyWritten(5);
- verify(listener, times(1)).streamWritten(stream(STREAM_A), 5);
}
@Test
@@ -565,7 +551,6 @@ public void connectionWindowUpdateShouldSendFrame() throws Http2Exception {
controller.writePendingBytes();
data.assertFullyWritten();
- verify(listener, times(1)).streamWritten(stream(STREAM_A), 10);
assertWritabilityChanged(0, false);
assertEquals(0, window(CONNECTION_STREAM_ID));
assertEquals(DEFAULT_WINDOW_SIZE - 10, window(STREAM_A));
@@ -594,7 +579,6 @@ public void connectionWindowUpdateShouldSendPartialFrame() throws Http2Exception
controller.writePendingBytes();
data.assertPartiallyWritten(5);
- verify(listener, times(1)).streamWritten(stream(STREAM_A), 5);
assertWritabilityChanged(0, false);
assertEquals(0, window(CONNECTION_STREAM_ID));
assertEquals(DEFAULT_WINDOW_SIZE - 5, window(STREAM_A));
@@ -637,7 +621,6 @@ public void streamWindowUpdateShouldSendFrame() throws Http2Exception {
data.assertNotWritten();
controller.writePendingBytes();
data.assertFullyWritten();
- verify(listener, times(1)).streamWritten(stream(STREAM_A), 10);
verify(listener, never()).writabilityChanged(stream(STREAM_A));
verify(listener, never()).writabilityChanged(stream(STREAM_B));
verify(listener, never()).writabilityChanged(stream(STREAM_C));
@@ -687,7 +670,6 @@ public void streamWindowUpdateShouldSendPartialFrame() throws Http2Exception {
data.assertNotWritten();
controller.writePendingBytes();
data.assertPartiallyWritten(5);
- verify(listener, times(1)).streamWritten(stream(STREAM_A), 5);
assertEquals(DEFAULT_WINDOW_SIZE - 5, window(CONNECTION_STREAM_ID));
assertEquals(0, window(STREAM_A));
assertEquals(DEFAULT_WINDOW_SIZE, window(STREAM_B));
@@ -717,7 +699,6 @@ public Void answer(InvocationOnMock invocationOnMock) {
verify(flowControlled, never()).writeComplete();
assertEquals(90, windowBefore - window(STREAM_A));
- verify(listener, times(1)).streamWritten(stream(STREAM_A), 90);
assertWritabilityChanged(0, true);
}
@@ -794,7 +775,6 @@ public Void answer(InvocationOnMock invocationOnMock) {
verify(flowControlled).writeComplete();
assertEquals(150, windowBefore - window(STREAM_A));
- verify(listener, times(1)).streamWritten(stream(STREAM_A), 150);
assertWritabilityChanged(0, true);
}
@@ -820,7 +800,6 @@ public Void answer(InvocationOnMock invocationOnMock) {
verify(flowControlled).write(any(ChannelHandlerContext.class), anyInt());
verify(flowControlled).error(any(ChannelHandlerContext.class), any(Throwable.class));
verify(flowControlled, never()).writeComplete();
- verify(listener, times(1)).streamWritten(stream(STREAM_A), 0);
verify(listener, times(1)).writabilityChanged(stream(STREAM_A));
verify(listener, never()).writabilityChanged(stream(STREAM_B));
verify(listener, never()).writabilityChanged(stream(STREAM_C));
| train | train | 2015-12-22T09:20:32 | 2015-12-18T19:07:27Z | Scottmitch | val |
netty/netty/4604_4619 | netty/netty | netty/netty/4604 | netty/netty/4619 | [
"timestamp(timedelta=27.0, similarity=0.8533197798933183)"
] | 7b2f55ec2fa83873c4eeacb7fbccaecfedece63e | c56b712767c97b4c9847eea21cd0c89e9008113b | [
"Sounds like an issue with loading the jni stuff. Can you show me how you specify the dependencies ?\n",
"```\n <dependency>\n <groupId>io.netty</groupId>\n <artifactId>netty-all</artifactId>\n <version>5.0.0.Alpha2</version>\n <scope>test</scope>\n </dependency>\n```\n\nit works if I co... | [
"consider making this a member variable and using a `@Before` to initialize and `@After` to tear it down.\n"
] | 2015-12-22T19:32:58Z | [
"defect"
] | isKeepAlive is not supported by Epoll | Calling isKeepAlive throws an exception
```
java.lang.UnsatisfiedLinkError: io.netty.channel.epoll.Native.isKeepAlive(I)I
at io.netty.channel.epoll.Native.isKeepAlive(Native Method)
at io.netty.channel.epoll.EpollSocketChannelConfig.isKeepAlive(EpollSocketChannelConfig.java:154)
```
| [
"transport-native-epoll/src/main/c/io_netty_channel_unix_Socket.c",
"transport-native-epoll/src/main/c/io_netty_channel_unix_Socket.h"
] | [
"transport-native-epoll/src/main/c/io_netty_channel_unix_Socket.c",
"transport-native-epoll/src/main/c/io_netty_channel_unix_Socket.h"
] | [
"transport-native-epoll/src/test/java/io/netty/channel/unix/SocketTest.java"
] | diff --git a/transport-native-epoll/src/main/c/io_netty_channel_unix_Socket.c b/transport-native-epoll/src/main/c/io_netty_channel_unix_Socket.c
index 60d7c4bc752..2800a69075e 100644
--- a/transport-native-epoll/src/main/c/io_netty_channel_unix_Socket.c
+++ b/transport-native-epoll/src/main/c/io_netty_channel_unix_Socket.c
@@ -662,6 +662,14 @@ JNIEXPORT void JNICALL Java_io_netty_channel_unix_Socket_setSoLinger(JNIEnv* env
netty_unix_socket_setOption(env, fd, SOL_SOCKET, SO_LINGER, &solinger, sizeof(solinger));
}
+JNIEXPORT jint JNICALL Java_io_netty_channel_unix_Socket_isKeepAlive(JNIEnv* env, jclass clazz, jint fd) {
+ int optval;
+ if (netty_unix_socket_getOption(env, fd, SOL_SOCKET, SO_KEEPALIVE, &optval, sizeof(optval)) == -1) {
+ return -1;
+ }
+ return optval;
+}
+
JNIEXPORT jint JNICALL Java_io_netty_channel_unix_Socket_isTcpNoDelay(JNIEnv* env, jclass clazz, jint fd) {
int optval;
if (netty_unix_socket_getOption(env, fd, IPPROTO_TCP, TCP_NODELAY, &optval, sizeof(optval)) == -1) {
diff --git a/transport-native-epoll/src/main/c/io_netty_channel_unix_Socket.h b/transport-native-epoll/src/main/c/io_netty_channel_unix_Socket.h
index af126bf6903..c82138252bc 100644
--- a/transport-native-epoll/src/main/c/io_netty_channel_unix_Socket.h
+++ b/transport-native-epoll/src/main/c/io_netty_channel_unix_Socket.h
@@ -51,6 +51,7 @@ void Java_io_netty_channel_unix_Socket_setSoLinger(JNIEnv* env, jclass clazz, ji
jint Java_io_netty_channel_unix_Socket_isTcpNoDelay(JNIEnv* env, jclass clazz, jint fd);
jint Java_io_netty_channel_unix_Socket_getReceiveBufferSize(JNIEnv* env, jclass clazz, jint fd);
jint Java_io_netty_channel_unix_Socket_getSendBufferSize(JNIEnv* env, jclass clazz, jint fd);
+jint Java_io_netty_channel_unix_Socket_isKeepAlive(JNIEnv* env, jclass clazz, jint fd);
jint Java_io_netty_channel_unix_Socket_isTcpCork(JNIEnv* env, jclass clazz, jint fd);
jint Java_io_netty_channel_unix_Socket_getSoLinger(JNIEnv* env, jclass clazz, jint fd);
jint Java_io_netty_channel_unix_Socket_getSoError(JNIEnv* env, jclass clazz, jint fd);
| diff --git a/transport-native-epoll/src/test/java/io/netty/channel/unix/SocketTest.java b/transport-native-epoll/src/test/java/io/netty/channel/unix/SocketTest.java
new file mode 100644
index 00000000000..0fe9df0f385
--- /dev/null
+++ b/transport-native-epoll/src/test/java/io/netty/channel/unix/SocketTest.java
@@ -0,0 +1,123 @@
+/*
+ * Copyright 2015 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License,
+ * version 2.0 (the "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+package io.netty.channel.unix;
+
+import io.netty.channel.epoll.Epoll;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.IOException;
+
+import static org.junit.Assert.*;
+
+public class SocketTest {
+
+ static {
+ Epoll.ensureAvailability();
+ }
+
+ private Socket socket;
+
+ @Before
+ public void setup() {
+ socket = Socket.newSocketStream();
+ }
+
+ @After
+ public void tearDown() throws IOException {
+ socket.close();
+ }
+
+ @Test
+ public void testKeepAlive() throws Exception {
+ Socket socket = Socket.newSocketStream();
+ try {
+ assertFalse(socket.isKeepAlive());
+ socket.setKeepAlive(true);
+ assertTrue(socket.isKeepAlive());
+ } finally {
+ socket.close();
+ }
+ }
+
+ @Test
+ public void testTcpCork() throws Exception {
+ Socket socket = Socket.newSocketStream();
+ try {
+ assertFalse(socket.isTcpCork());
+ socket.setTcpCork(true);
+ assertTrue(socket.isTcpCork());
+ } finally {
+ socket.close();
+ }
+ }
+
+ @Test
+ public void testTcpNoDelay() throws Exception {
+ Socket socket = Socket.newSocketStream();
+ try {
+ assertFalse(socket.isTcpNoDelay());
+ socket.setTcpNoDelay(true);
+ assertTrue(socket.isTcpNoDelay());
+ } finally {
+ socket.close();
+ }
+ }
+
+ @Test
+ public void testReceivedBufferSize() throws Exception {
+ Socket socket = Socket.newSocketStream();
+ try {
+ int size = socket.getReceiveBufferSize();
+ int newSize = 65535;
+ assertTrue(size > 0);
+ socket.setReceiveBufferSize(newSize);
+ // Linux usually set it to double what is specified
+ assertTrue(newSize <= socket.getReceiveBufferSize());
+ } finally {
+ socket.close();
+ }
+ }
+
+ @Test
+ public void testSendBufferSize() throws Exception {
+ Socket socket = Socket.newSocketStream();
+ try {
+ int size = socket.getSendBufferSize();
+ int newSize = 65535;
+ assertTrue(size > 0);
+ socket.setSendBufferSize(newSize);
+ // Linux usually set it to double what is specified
+ assertTrue(newSize <= socket.getSendBufferSize());
+ } finally {
+ socket.close();
+ }
+ }
+
+ @Test
+ public void testSoLinger() throws Exception {
+ Socket socket = Socket.newSocketStream();
+ try {
+ assertEquals(-1, socket.getSoLinger());
+ socket.setSoLinger(10);
+ assertEquals(10, socket.getSoLinger());
+ } finally {
+ socket.close();
+ }
+ }
+}
+
| train | train | 2015-12-23T01:59:40 | 2015-12-18T22:02:30Z | vrozov | val |
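The gold patch above wires `SO_KEEPALIVE` through a `getsockopt`/`setsockopt` round-trip in JNI, and the test exercises the same default-off, sticky-once-set semantics. The same socket option is reachable from plain Java, which gives a quick way to sanity-check those semantics without the native transport. The class name `KeepAliveSketch` is ours; this is an illustrative sketch, not netty code:

```java
import java.net.Socket;

public class KeepAliveSketch {

    // SO_KEEPALIVE defaults to off on a fresh socket (POSIX behavior).
    public static boolean keepAliveDefault() {
        try (Socket socket = new Socket()) {      // unconnected socket is enough
            return socket.getKeepAlive();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Mirrors the setsockopt/getsockopt round-trip the native patch performs.
    public static boolean keepAliveAfterToggle() {
        try (Socket socket = new Socket()) {
            socket.setKeepAlive(true);
            return socket.getKeepAlive();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println("default=" + keepAliveDefault()
                + " afterToggle=" + keepAliveAfterToggle());
    }
}
```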
netty/netty/2973_4631 | netty/netty | netty/netty/2973 | netty/netty/4631 | [
"timestamp(timedelta=32.0, similarity=0.8424088020104541)"
] | 6ee5341cdfa397a7b2ca123b0f8a5f3170133429 | cd7371f18b8d5483c1fc5b94d1061eb4611344c6 | [
"@Scottmitch is this still valid ?\n",
"@normanmaurer - I think so. I have not done anything to fix it.\n",
"Fixed\n"
] | [
"can you explain why we need an array?\n",
"check `if (!timeout[0].isDone()) {..}` before adding the object?\n",
"I wonder if we need to keep a collection of timeouts. Because the time is fixed, each new event will alway fire later (in time) than the previous event. Can we just keep a reference to the \"last\" ... | 2015-12-28T08:45:04Z | [
"defect"
] | WriteTimeoutHandler missed cleanup | It seems like the `WriteTimeoutHandler` does not always cancel the timeout task it schedules in all the same cases that the `IdleStateHandler` does. For example, the `handlerRemoved` method is not overridden in `WriteTimeoutHandler`. There may be other cases where the timeout is not canceled when it should be, and further investigation is needed.
| [
"handler/src/main/java/io/netty/handler/timeout/WriteTimeoutHandler.java"
] | [
"handler/src/main/java/io/netty/handler/timeout/WriteTimeoutHandler.java"
] | [] | diff --git a/handler/src/main/java/io/netty/handler/timeout/WriteTimeoutHandler.java b/handler/src/main/java/io/netty/handler/timeout/WriteTimeoutHandler.java
index 4e87c7f17c0..d47760a857e 100644
--- a/handler/src/main/java/io/netty/handler/timeout/WriteTimeoutHandler.java
+++ b/handler/src/main/java/io/netty/handler/timeout/WriteTimeoutHandler.java
@@ -30,12 +30,10 @@
import java.util.concurrent.TimeUnit;
/**
- * Raises a {@link WriteTimeoutException} when no data was written within a
- * certain period of time.
+ * Raises a {@link WriteTimeoutException} when a write operation cannot finish in a certain period of time.
*
* <pre>
- * // The connection is closed when there is no outbound traffic
- * // for 30 seconds.
+ * // The connection is closed when a write operation cannot finish in 30 seconds.
*
* public class MyChannelInitializer extends {@link ChannelInitializer}<{@link Channel}> {
* public void initChannel({@link Channel} channel) {
@@ -70,6 +68,11 @@ public class WriteTimeoutHandler extends ChannelOutboundHandlerAdapter {
private final long timeoutNanos;
+ /**
+ * A doubly-linked list to track all WriteTimeoutTasks
+ */
+ private WriteTimeoutTask lastTask;
+
private boolean closed;
/**
@@ -111,31 +114,62 @@ public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise)
ctx.write(msg, promise);
}
- private void scheduleTimeout(final ChannelHandlerContext ctx, final ChannelPromise future) {
+ @Override
+ public void handlerRemoved(ChannelHandlerContext ctx) throws Exception {
+ WriteTimeoutTask task = lastTask;
+ lastTask = null;
+ while (task != null) {
+ task.scheduledFuture.cancel(false);
+ WriteTimeoutTask prev = task.prev;
+ task.prev = null;
+ task.next = null;
+ task = prev;
+ }
+ }
+
+ private void scheduleTimeout(final ChannelHandlerContext ctx, final ChannelPromise promise) {
// Schedule a timeout.
- final ScheduledFuture<?> sf = ctx.executor().schedule(new OneTimeTask() {
- @Override
- public void run() {
- // Was not written yet so issue a write timeout
- // The future itself will be failed with a ClosedChannelException once the close() was issued
- // See https://github.com/netty/netty/issues/2159
- if (!future.isDone()) {
- try {
- writeTimedOut(ctx);
- } catch (Throwable t) {
- ctx.fireExceptionCaught(t);
- }
- }
- }
- }, timeoutNanos, TimeUnit.NANOSECONDS);
+ final WriteTimeoutTask task = new WriteTimeoutTask(ctx, promise);
+ task.scheduledFuture = ctx.executor().schedule(task, timeoutNanos, TimeUnit.NANOSECONDS);
+
+ if (!task.scheduledFuture.isDone()) {
+ addWriteTimeoutTask(task);
+
+ // Cancel the scheduled timeout if the flush promise is complete.
+ promise.addListener(task);
+ }
+ }
+
+ private void addWriteTimeoutTask(WriteTimeoutTask task) {
+ if (lastTask == null) {
+ lastTask = task;
+ } else {
+ lastTask.next = task;
+ task.prev = lastTask;
+ lastTask = task;
+ }
+ }
- // Cancel the scheduled timeout if the flush future is complete.
- future.addListener(new ChannelFutureListener() {
- @Override
- public void operationComplete(ChannelFuture future) throws Exception {
- sf.cancel(false);
+ private void removeWriteTimeoutTask(WriteTimeoutTask task) {
+ if (task == lastTask) {
+ // task is the tail of list
+ assert task.next == null;
+ lastTask = lastTask.prev;
+ if (lastTask != null) {
+ lastTask.next = null;
}
- });
+ } else if (task.prev == null && task.next == null) {
+ // Since task is not lastTask, then it has been removed or not been added.
+ return;
+ } else if (task.prev == null) {
+ // task is the head of list and the list has at least 2 nodes
+ task.next.prev = null;
+ } else {
+ task.prev.next = task.next;
+ task.next.prev = task.prev;
+ }
+ task.prev = null;
+ task.next = null;
}
/**
@@ -148,4 +182,43 @@ protected void writeTimedOut(ChannelHandlerContext ctx) throws Exception {
closed = true;
}
}
+
+ private final class WriteTimeoutTask extends OneTimeTask implements ChannelFutureListener {
+
+ private final ChannelHandlerContext ctx;
+ private final ChannelPromise promise;
+
+ // WriteTimeoutTask is also a node of a doubly-linked list
+ WriteTimeoutTask prev;
+ WriteTimeoutTask next;
+
+ ScheduledFuture<?> scheduledFuture;
+
+ WriteTimeoutTask(ChannelHandlerContext ctx, ChannelPromise promise) {
+ this.ctx = ctx;
+ this.promise = promise;
+ }
+
+ @Override
+ public void run() {
+ // Was not written yet so issue a write timeout
+ // The promise itself will be failed with a ClosedChannelException once the close() was issued
+ // See https://github.com/netty/netty/issues/2159
+ if (!promise.isDone()) {
+ try {
+ writeTimedOut(ctx);
+ } catch (Throwable t) {
+ ctx.fireExceptionCaught(t);
+ }
+ }
+ removeWriteTimeoutTask(this);
+ }
+
+ @Override
+ public void operationComplete(ChannelFuture future) throws Exception {
+ // scheduledFuture has already be set when reaching here
+ scheduledFuture.cancel(false);
+ removeWriteTimeoutTask(this);
+ }
+ }
}
| null | train | train | 2015-12-29T18:56:29 | 2014-10-07T01:31:46Z | Scottmitch | val |
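The fix tracks every pending `WriteTimeoutTask` in an intrusive doubly-linked list with a single `lastTask` tail pointer, so `handlerRemoved` can walk backwards and cancel them all. The removal logic has several cases (tail, already detached, head, interior) that are easy to get wrong. Below is a standalone sketch of the same add/remove bookkeeping, mirroring `addWriteTimeoutTask`/`removeWriteTimeoutTask` above; the names `TaskListSketch` and `Task` are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

public class TaskListSketch {

    // Intrusive node: the list pointers live on the task itself,
    // like the patched WriteTimeoutTask's prev/next fields.
    static final class Task {
        final String name;
        Task prev, next;
        Task(String name) { this.name = name; }
    }

    private Task lastTask;

    void add(Task task) {
        if (lastTask == null) {
            lastTask = task;
        } else {
            lastTask.next = task;
            task.prev = lastTask;
            lastTask = task;
        }
    }

    void remove(Task task) {
        if (task == lastTask) {                      // tail of the list
            lastTask = lastTask.prev;
            if (lastTask != null) {
                lastTask.next = null;
            }
        } else if (task.prev == null && task.next == null) {
            return;                                  // already detached (or never added)
        } else if (task.prev == null) {              // head of a list with >= 2 nodes
            task.next.prev = null;
        } else {                                     // interior node
            task.prev.next = task.next;
            task.next.prev = task.prev;
        }
        task.prev = null;
        task.next = null;
    }

    // Walks backwards from the tail, as handlerRemoved() does when cancelling.
    List<String> namesFromTail() {
        List<String> names = new ArrayList<>();
        for (Task t = lastTask; t != null; t = t.prev) {
            names.add(t.name);
        }
        return names;
    }

    static List<String> demo() {
        TaskListSketch list = new TaskListSketch();
        Task a = new Task("a"), b = new Task("b"), c = new Task("c");
        list.add(a);
        list.add(b);
        list.add(c);
        list.remove(b);              // interior removal: a <-> c remain
        return list.namesFromTail(); // [c, a]
    }

    public static void main(String[] args) {
        System.out.println(demo());  // [c, a]
    }
}
```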
netty/netty/4315_4639 | netty/netty | netty/netty/4315 | netty/netty/4639 | [
"timestamp(timedelta=48.0, similarity=0.8704097078767499)"
] | f90032933df8b27e8542fdeb40fb6381f2062563 | 72eb6fb85151373f9187be5d91ee6b542b587fc6 | [
"If you can make it with breakage use 4.0\n\n> Am 05.10.2015 um 17:36 schrieb Stephane Landelle notifications@github.com:\n> \n> Netty version: 4.0.32.Final\n> \n> Context:\n> In AHC, we have our own builder for requests, where we internally store headers into a HttpHeaders instance.\n> \n> Sadly, there's no way to... | [
"ObjectUtils.checkNonNull(...)\n",
"Replace with ObjectUtils.checkNonNull(...)\n",
"the getStatus -> the status\n",
"getMethod -> method\n"
] | 2015-12-30T21:49:55Z | [
"improvement"
] | Add DefaultHttpMessage constructor that takes a HttpHeaders instance | Netty version: 4.0.32.Final
Context:
In AHC, we have our own builder for requests, where we internally store headers into a `HttpHeaders` instance.
Sadly, there's no way to directly pass a `HttpHeaders` instance to the `DefaultHttpMessage` hierarchy, and we have to set our instance into the `DefaultHttpMessage`'s internal one that's been eagerly created.
This results in an excessive allocation that could be avoided.
Please let me know if you're interested in a PR, and which branch to target.
Regards
| [
"codec-http/src/main/java/io/netty/handler/codec/http/DefaultFullHttpRequest.java",
"codec-http/src/main/java/io/netty/handler/codec/http/DefaultFullHttpResponse.java",
"codec-http/src/main/java/io/netty/handler/codec/http/DefaultHttpMessage.java",
"codec-http/src/main/java/io/netty/handler/codec/http/Default... | [
"codec-http/src/main/java/io/netty/handler/codec/http/DefaultFullHttpRequest.java",
"codec-http/src/main/java/io/netty/handler/codec/http/DefaultFullHttpResponse.java",
"codec-http/src/main/java/io/netty/handler/codec/http/DefaultHttpMessage.java",
"codec-http/src/main/java/io/netty/handler/codec/http/Default... | [] | diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/DefaultFullHttpRequest.java b/codec-http/src/main/java/io/netty/handler/codec/http/DefaultFullHttpRequest.java
index 2ca9329e7c6..285763e5448 100644
--- a/codec-http/src/main/java/io/netty/handler/codec/http/DefaultFullHttpRequest.java
+++ b/codec-http/src/main/java/io/netty/handler/codec/http/DefaultFullHttpRequest.java
@@ -18,6 +18,7 @@
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.util.IllegalReferenceCountException;
+import static io.netty.util.internal.ObjectUtil.checkNotNull;
/**
* Default implementation of {@link FullHttpRequest}.
@@ -25,7 +26,7 @@
public class DefaultFullHttpRequest extends DefaultHttpRequest implements FullHttpRequest {
private final ByteBuf content;
private final HttpHeaders trailingHeader;
- private final boolean validateHeaders;
+
/**
* Used to cache the value of the hash code and avoid {@link IllegalRefCountException}.
*/
@@ -46,12 +47,15 @@ public DefaultFullHttpRequest(HttpVersion httpVersion, HttpMethod method, String
public DefaultFullHttpRequest(HttpVersion httpVersion, HttpMethod method, String uri,
ByteBuf content, boolean validateHeaders) {
super(httpVersion, method, uri, validateHeaders);
- if (content == null) {
- throw new NullPointerException("content");
- }
- this.content = content;
+ this.content = checkNotNull(content, "content");
trailingHeader = new DefaultHttpHeaders(validateHeaders);
- this.validateHeaders = validateHeaders;
+ }
+
+ public DefaultFullHttpRequest(HttpVersion httpVersion, HttpMethod method, String uri,
+ ByteBuf content, HttpHeaders headers, HttpHeaders trailingHeader) {
+ super(httpVersion, method, uri, headers);
+ this.content = checkNotNull(content, "content");
+ this.trailingHeader = checkNotNull(trailingHeader, "trailingHeader");
}
@Override
@@ -137,13 +141,12 @@ public FullHttpRequest setUri(String uri) {
* @return A copy of this object
*/
private FullHttpRequest copy(boolean copyContent, ByteBuf newContent) {
- DefaultFullHttpRequest copy = new DefaultFullHttpRequest(
+ return new DefaultFullHttpRequest(
protocolVersion(), method(), uri(),
copyContent ? content().copy() :
- newContent == null ? Unpooled.buffer(0) : newContent);
- copy.headers().set(headers());
- copy.trailingHeaders().set(trailingHeaders());
- return copy;
+ newContent == null ? Unpooled.buffer(0) : newContent,
+ headers(),
+ trailingHeaders());
}
@Override
@@ -158,11 +161,8 @@ public FullHttpRequest copy() {
@Override
public FullHttpRequest duplicate() {
- DefaultFullHttpRequest duplicate = new DefaultFullHttpRequest(
- protocolVersion(), method(), uri(), content().duplicate(), validateHeaders);
- duplicate.headers().set(headers());
- duplicate.trailingHeaders().set(trailingHeaders());
- return duplicate;
+ return new DefaultFullHttpRequest(
+ protocolVersion(), method(), uri(), content().duplicate(), headers(), trailingHeaders());
}
@Override
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/DefaultFullHttpResponse.java b/codec-http/src/main/java/io/netty/handler/codec/http/DefaultFullHttpResponse.java
index 5ad6058daf9..a192ba99585 100644
--- a/codec-http/src/main/java/io/netty/handler/codec/http/DefaultFullHttpResponse.java
+++ b/codec-http/src/main/java/io/netty/handler/codec/http/DefaultFullHttpResponse.java
@@ -28,7 +28,7 @@ public class DefaultFullHttpResponse extends DefaultHttpResponse implements Full
private final ByteBuf content;
private final HttpHeaders trailingHeaders;
- private final boolean validateHeaders;
+
/**
* Used to cache the value of the hash code and avoid {@link IllegalRefCountException}.
*/
@@ -62,7 +62,13 @@ public DefaultFullHttpResponse(HttpVersion version, HttpResponseStatus status,
this.content = checkNotNull(content, "content");
this.trailingHeaders = singleFieldHeaders ? new CombinedHttpHeaders(validateHeaders)
: new DefaultHttpHeaders(validateHeaders);
- this.validateHeaders = validateHeaders;
+ }
+
+ public DefaultFullHttpResponse(HttpVersion version, HttpResponseStatus status,
+ ByteBuf content, HttpHeaders headers, HttpHeaders trailingHeaders) {
+ super(version, status, headers);
+ this.content = checkNotNull(content, "content");
+ this.trailingHeaders = checkNotNull(trailingHeaders, "trailingHeaders");
}
@Override
@@ -142,13 +148,12 @@ public FullHttpResponse setStatus(HttpResponseStatus status) {
* @return A copy of this object
*/
private FullHttpResponse copy(boolean copyContent, ByteBuf newContent) {
- DefaultFullHttpResponse copy = new DefaultFullHttpResponse(
+ return new DefaultFullHttpResponse(
protocolVersion(), status(),
copyContent ? content().copy() :
- newContent == null ? Unpooled.buffer(0) : newContent);
- copy.headers().set(headers());
- copy.trailingHeaders().set(trailingHeaders());
- return copy;
+ newContent == null ? Unpooled.buffer(0) : newContent,
+ headers(),
+ trailingHeaders());
}
@Override
@@ -163,11 +168,8 @@ public FullHttpResponse copy() {
@Override
public FullHttpResponse duplicate() {
- DefaultFullHttpResponse duplicate = new DefaultFullHttpResponse(protocolVersion(), status(),
- content().duplicate(), validateHeaders);
- duplicate.headers().set(headers());
- duplicate.trailingHeaders().set(trailingHeaders());
- return duplicate;
+ return new DefaultFullHttpResponse(protocolVersion(), status(),
+ content().duplicate(), headers(), trailingHeaders());
}
@Override
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/DefaultHttpMessage.java b/codec-http/src/main/java/io/netty/handler/codec/http/DefaultHttpMessage.java
index 82fd0cb4fbb..5a6a45a376d 100644
--- a/codec-http/src/main/java/io/netty/handler/codec/http/DefaultHttpMessage.java
+++ b/codec-http/src/main/java/io/netty/handler/codec/http/DefaultHttpMessage.java
@@ -36,9 +36,17 @@ protected DefaultHttpMessage(final HttpVersion version) {
* Creates a new instance.
*/
protected DefaultHttpMessage(final HttpVersion version, boolean validateHeaders, boolean singleFieldHeaders) {
+ this(version,
+ singleFieldHeaders ? new CombinedHttpHeaders(validateHeaders)
+ : new DefaultHttpHeaders(validateHeaders));
+ }
+
+ /**
+ * Creates a new instance.
+ */
+ protected DefaultHttpMessage(final HttpVersion version, HttpHeaders headers) {
this.version = checkNotNull(version, "version");
- headers = singleFieldHeaders ? new CombinedHttpHeaders(validateHeaders)
- : new DefaultHttpHeaders(validateHeaders);
+ this.headers = checkNotNull(headers, "headers");
}
@Override
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/DefaultHttpRequest.java b/codec-http/src/main/java/io/netty/handler/codec/http/DefaultHttpRequest.java
index 7844688d35f..84be3bb72c9 100644
--- a/codec-http/src/main/java/io/netty/handler/codec/http/DefaultHttpRequest.java
+++ b/codec-http/src/main/java/io/netty/handler/codec/http/DefaultHttpRequest.java
@@ -29,7 +29,7 @@ public class DefaultHttpRequest extends DefaultHttpMessage implements HttpReques
* Creates a new instance.
*
* @param httpVersion the HTTP version of the request
- * @param method the HTTP getMethod of the request
+ * @param method the HTTP method of the request
* @param uri the URI or path of the request
*/
public DefaultHttpRequest(HttpVersion httpVersion, HttpMethod method, String uri) {
@@ -40,7 +40,7 @@ public DefaultHttpRequest(HttpVersion httpVersion, HttpMethod method, String uri
* Creates a new instance.
*
* @param httpVersion the HTTP version of the request
- * @param method the HTTP getMethod of the request
+ * @param method the HTTP method of the request
* @param uri the URI or path of the request
* @param validateHeaders validate the header names and values when adding them to the {@link HttpHeaders}
*/
@@ -50,6 +50,20 @@ public DefaultHttpRequest(HttpVersion httpVersion, HttpMethod method, String uri
this.uri = checkNotNull(uri, "uri");
}
+ /**
+ * Creates a new instance.
+ *
+ * @param httpVersion the HTTP version of the request
+ * @param method the HTTP method of the request
+ * @param uri the URI or path of the request
+ * @param headers the Headers for this Request
+ */
+ public DefaultHttpRequest(HttpVersion httpVersion, HttpMethod method, String uri, HttpHeaders headers) {
+ super(httpVersion, headers);
+ this.method = checkNotNull(method, "method");
+ this.uri = checkNotNull(uri, "uri");
+ }
+
@Override
@Deprecated
public HttpMethod getMethod() {
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/DefaultHttpResponse.java b/codec-http/src/main/java/io/netty/handler/codec/http/DefaultHttpResponse.java
index 4ee54a3f3f1..86858108a27 100644
--- a/codec-http/src/main/java/io/netty/handler/codec/http/DefaultHttpResponse.java
+++ b/codec-http/src/main/java/io/netty/handler/codec/http/DefaultHttpResponse.java
@@ -15,6 +15,7 @@
*/
package io.netty.handler.codec.http;
+import static io.netty.util.internal.ObjectUtil.checkNotNull;
/**
* The default {@link HttpResponse} implementation.
@@ -27,7 +28,7 @@ public class DefaultHttpResponse extends DefaultHttpMessage implements HttpRespo
* Creates a new instance.
*
* @param version the HTTP version of this response
- * @param status the getStatus of this response
+ * @param status the status of this response
*/
public DefaultHttpResponse(HttpVersion version, HttpResponseStatus status) {
this(version, status, true, false);
@@ -37,7 +38,7 @@ public DefaultHttpResponse(HttpVersion version, HttpResponseStatus status) {
* Creates a new instance.
*
* @param version the HTTP version of this response
- * @param status the getStatus of this response
+ * @param status the status of this response
* @param validateHeaders validate the header names and values when adding them to the {@link HttpHeaders}
*/
public DefaultHttpResponse(HttpVersion version, HttpResponseStatus status, boolean validateHeaders) {
@@ -48,7 +49,7 @@ public DefaultHttpResponse(HttpVersion version, HttpResponseStatus status, boole
* Creates a new instance.
*
* @param version the HTTP version of this response
- * @param status the getStatus of this response
+ * @param status the status of this response
* @param validateHeaders validate the header names and values when adding them to the {@link HttpHeaders}
* @param singleFieldHeaders {@code true} to check and enforce that headers with the same name are appended
* to the same entry and comma separated.
@@ -59,10 +60,19 @@ public DefaultHttpResponse(HttpVersion version, HttpResponseStatus status, boole
public DefaultHttpResponse(HttpVersion version, HttpResponseStatus status, boolean validateHeaders,
boolean singleFieldHeaders) {
super(version, validateHeaders, singleFieldHeaders);
- if (status == null) {
- throw new NullPointerException("status");
- }
- this.status = status;
+ this.status = checkNotNull(status, "status");
+ }
+
+ /**
+ * Creates a new instance.
+ *
+ * @param version the HTTP version of this response
+ * @param status the status of this response
+ * @param headers the headers for this HTTP Response
+ */
+ public DefaultHttpResponse(HttpVersion version, HttpResponseStatus status, HttpHeaders headers) {
+ super(version, headers);
+ this.status = checkNotNull(status, "status");
}
@Override
| null | test | train | 2015-12-30T18:31:55 | 2015-10-05T15:36:23Z | slandelle | val |
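The new constructors let a caller hand a pre-built `HttpHeaders` straight to the message instead of copying it into an eagerly allocated internal one. Stripped of the HTTP specifics, the pattern is a pair of telescoping constructors where the convenience one allocates and the new one adopts a caller-supplied store by reference. The sketch below (hypothetical `MessageSketch`, with a plain `Map` standing in for `HttpHeaders`) shows the delegation and the no-copy guarantee:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MessageSketch {
    private final Map<String, String> headers;

    // Convenience constructor: allocates a fresh header store.
    public MessageSketch() {
        this(new LinkedHashMap<String, String>());
    }

    // New-style constructor: adopts the caller-supplied store directly,
    // so a builder that already holds headers avoids an allocation + copy.
    public MessageSketch(Map<String, String> headers) {
        if (headers == null) {
            throw new NullPointerException("headers");
        }
        this.headers = headers;
    }

    public Map<String, String> headers() {
        return headers;
    }

    public static void main(String[] args) {
        Map<String, String> prebuilt = new LinkedHashMap<>();
        prebuilt.put("content-type", "text/plain");
        MessageSketch msg = new MessageSketch(prebuilt);
        // The supplied instance is used as-is: identity, not a copy.
        System.out.println(msg.headers() == prebuilt);
    }
}
```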
netty/netty/4679_4681 | netty/netty | netty/netty/4679 | netty/netty/4681 | [
"timestamp(timedelta=38.0, similarity=0.9066869632289366)"
] | 5c05629da1bc9022a053a7dce697a1b33b81bfd4 | 9c795965435da0441f26d990db7694ebd8868d7c | [
"will have a look\n",
"Fixed in #4681\n",
"@valodzka could you test with latest 4.0 branch: \n\nThis test works for me:\n\n```\n\n @Test\n public void testToStringComposite() {\n CompositeByteBuf cBuff = compositeBuffer();\n cBuff.addComponent(buffer(8).writeLong(0));\n cBuff.addCompo... | [
"Can you use a more descriptive test name? something like `testToStringDoesNotThrowIndexOutOfBounds`\n"
] | 2016-01-09T00:03:34Z | [
"defect"
] | IndexOutOfBoundsException for CompositeByteBuf | This code worked up to netty 4.0.32, but stopped working in 4.0.33.Final:
```
ByteBufAllocator alloc = ByteBufAllocator.DEFAULT;
CompositeByteBuf cBuff = alloc.compositeDirectBuffer();
cBuff.addComponent(alloc.buffer(8).writeLong(0));
cBuff.addComponent(alloc.buffer(8).writeLong(0));
// work as expected
System.err.println("str="+ cBuff.toString(0, 16, StandardCharsets.UTF_8));
// fails
System.err.println("str="+ cBuff.toString(3, 13, StandardCharsets.UTF_8));
```
Exception:
```
Exception in thread "main" java.lang.IndexOutOfBoundsException: index: 3, length: 13 (expected: range(0, 13))
at io.netty.buffer.AbstractByteBuf.checkIndex0(AbstractByteBuf.java:1142)
at io.netty.buffer.AbstractByteBuf.checkIndex(AbstractByteBuf.java:1137)
at io.netty.buffer.UnpooledHeapByteBuf.internalNioBuffer(UnpooledHeapByteBuf.java:280)
at io.netty.buffer.ByteBufUtil.decodeString(ByteBufUtil.java:558)
at io.netty.buffer.AbstractByteBuf.toString(AbstractByteBuf.java:978)
at Tester.main(Tester.java:24)
```
| [
"buffer/src/main/java/io/netty/buffer/ByteBufUtil.java"
] | [
"buffer/src/main/java/io/netty/buffer/ByteBufUtil.java"
] | [
"buffer/src/test/java/io/netty/buffer/ByteBufUtilTest.java"
] | diff --git a/buffer/src/main/java/io/netty/buffer/ByteBufUtil.java b/buffer/src/main/java/io/netty/buffer/ByteBufUtil.java
index 1644ab3165f..44fc52eaa6e 100644
--- a/buffer/src/main/java/io/netty/buffer/ByteBufUtil.java
+++ b/buffer/src/main/java/io/netty/buffer/ByteBufUtil.java
@@ -548,7 +548,7 @@ static String decodeString(ByteBuf src, int readerIndex, int len, Charset charse
try {
buffer.writeBytes(src, readerIndex, len);
// Use internalNioBuffer(...) to reduce object creation.
- decodeString(decoder, buffer.internalNioBuffer(readerIndex, len), dst);
+ decodeString(decoder, buffer.internalNioBuffer(0, len), dst);
} finally {
// Release the temporary buffer again.
buffer.release();
| diff --git a/buffer/src/test/java/io/netty/buffer/ByteBufUtilTest.java b/buffer/src/test/java/io/netty/buffer/ByteBufUtilTest.java
index 0309db57ec3..f3bcebef184 100644
--- a/buffer/src/test/java/io/netty/buffer/ByteBufUtilTest.java
+++ b/buffer/src/test/java/io/netty/buffer/ByteBufUtilTest.java
@@ -187,4 +187,17 @@ private static void testDecodeString(String text, Charset charset) {
Assert.assertEquals(text, ByteBufUtil.decodeString(buffer, 0, buffer.readableBytes(), charset));
buffer.release();
}
+
+ @Test
+ public void testToStringDoesNotThrowIndexOutOfBounds() {
+ CompositeByteBuf buffer = Unpooled.compositeBuffer();
+ try {
+ byte[] bytes = "1234".getBytes(CharsetUtil.UTF_8);
+ buffer.addComponent(Unpooled.buffer(bytes.length).writeBytes(bytes));
+ buffer.addComponent(Unpooled.buffer(bytes.length).writeBytes(bytes));
+ Assert.assertEquals("1234", buffer.toString(bytes.length, bytes.length, CharsetUtil.UTF_8));
+ } finally {
+ buffer.release();
+ }
+ }
}
| train | train | 2016-01-09T03:12:31 | 2016-01-08T21:31:03Z | valodzka | val |
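The one-line fix changes `internalNioBuffer(readerIndex, len)` to `internalNioBuffer(0, len)`: after `len` bytes are copied from `src` (starting at `readerIndex`) into a fresh temporary buffer, the copied data begins at index 0, so slicing the copy at `readerIndex` again can run past its end, which is exactly the `IndexOutOfBoundsException` in the report. The same off-by-offset issue can be sketched with plain arrays (hypothetical `DecodeSketch`, not the netty implementation):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class DecodeSketch {

    // Copies len bytes starting at readerIndex into a fresh buffer, then
    // decodes the copy. The fixed version reads the copy from offset 0;
    // reading it from readerIndex again (the bug) would overrun the copy.
    public static String decode(byte[] src, int readerIndex, int len) {
        byte[] tmp = Arrays.copyOfRange(src, readerIndex, readerIndex + len);
        return new String(tmp, 0, len, StandardCharsets.UTF_8); // offset 0, not readerIndex
    }

    public static void main(String[] args) {
        byte[] src = "12341234".getBytes(StandardCharsets.UTF_8);
        // Mid-buffer slice, like cBuff.toString(3, 13, UTF_8) in the report.
        System.out.println(decode(src, 3, 5)); // 41234
    }
}
```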
netty/netty/4677_4683 | netty/netty | netty/netty/4677 | netty/netty/4683 | [
"timestamp(timedelta=96943.0, similarity=0.8577363022834855)"
] | 6fe0db4001ebb9801705f6f3ca8911ff5fd5fe1e | cc873fa6c998ec76a00faa3d021b4f44aa71fd1b | [
"> the DefaultFullHttpResponse implements LastHttpContent, after encode any of DefaultFullHttpResponse msg, HttpObjectEncoder#encodeChunkedContent will reset state to ST_INIT, then rest msg can not be encoded.\n\nI think this is the expected behaviour since it's a completed http message.\n",
    "in `HttpStaticFileSe... | [] | 2016-01-09T07:57:22Z | [] | HttpObjectEncoder can not encode chunked response | I'm trying HttpStaticFileServer with netty `5.0.0.Alpha2`, but it does not work, and I found that it has two issues.
1. `DefaultFullHttpResponse` implements `LastHttpContent`, so after encoding any `DefaultFullHttpResponse` msg, `HttpObjectEncoder#encodeChunkedContent` will reset `state` to `ST_INIT`, and then the remaining msgs can not be encoded.
``` java
private void encodeChunkedContent(ChannelHandlerContext ctx, Object msg, long contentLength, List<Object> out) {
...
if (msg instanceof LastHttpContent) {
...
state = ST_INIT; // Line 171
} else {
...
}
...
}
```
2. the sample code does not set the `transfer-encoding` header to `chunked`. `HttpObjectEncoder` will process the response as `chunked` only when the response msg has the `chunked` header.
``` java
protected void encode(ChannelHandlerContext ctx, Object msg, List<Object> out) throws Exception {
if (msg instanceof HttpMessage) {
...
state = HttpHeaderUtil.isTransferEncodingChunked(m) ? ST_CONTENT_CHUNK : ST_CONTENT_NON_CHUNK; // Line 75
...
}
}
```
| [
"example/src/main/java/io/netty/example/http/file/HttpStaticFileServerHandler.java"
] | [
"example/src/main/java/io/netty/example/http/file/HttpStaticFileServerHandler.java"
] | [] | diff --git a/example/src/main/java/io/netty/example/http/file/HttpStaticFileServerHandler.java b/example/src/main/java/io/netty/example/http/file/HttpStaticFileServerHandler.java
index b2a4f33b3b0..f40962f720b 100644
--- a/example/src/main/java/io/netty/example/http/file/HttpStaticFileServerHandler.java
+++ b/example/src/main/java/io/netty/example/http/file/HttpStaticFileServerHandler.java
@@ -182,18 +182,22 @@ public void channelRead0(ChannelHandlerContext ctx, FullHttpRequest request) thr
response.headers().set(HttpHeaderNames.CONNECTION, HttpHeaderValues.KEEP_ALIVE);
}
- // Write the initial line and the header.
- ctx.write(response);
-
// Write the content.
ChannelFuture sendFileFuture;
ChannelFuture lastContentFuture;
if (ctx.pipeline().get(SslHandler.class) == null) {
+ // Write the initial line and the header.
+ ctx.write(response);
+ // Write the content.
sendFileFuture =
ctx.write(new DefaultFileRegion(raf.getChannel(), 0, fileLength), ctx.newProgressivePromise());
// Write the end marker.
lastContentFuture = ctx.writeAndFlush(LastHttpContent.EMPTY_LAST_CONTENT);
} else {
+ HttpUtil.setTransferEncodingChunked(response, true);
+ // Write the initial line and the header.
+ ctx.write(response);
+ // Write the content.
sendFileFuture =
ctx.writeAndFlush(new HttpChunkedInput(new ChunkedFile(raf, 0, fileLength, 8192)),
ctx.newProgressivePromise());
| null | train | train | 2016-01-09T04:11:57 | 2016-01-08T15:04:37Z | cpf624 | val |
netty/netty/4017_4713 | netty/netty | netty/netty/4017 | netty/netty/4713 | [
"timestamp(timedelta=38.0, similarity=0.8717868222015914)"
] | 4d854cc1496ab3397cad543e0a0557efe1106f0c | 84d3ad5b6f22654ca431b7a8a87ce278df7f7875 | [
"@normanmaurer @trustin - Any thoughts?\n",
"+1 thanks for creating @Scottmitch \n",
"@Scottmitch +1\n",
"@Scottmitch +1\n",
"@nmittler - Looks like we have our answer :) I'll take the assignment for now so it doesn't get dropped...feel free to take it though.\n",
"@Scottmitch FYI this isn't a super high... | [
"Can the level change? If not, would it be make sense to save off some sort of allocator for the leak aware buffers?\n",
"Yes it can change\n",
"Since you're sharing this now, would it make sense to move this to some sort of common util class? It seems to be the only thing that uses `ACQUIRE_AND_RELEASE_ONLY`, ... | 2016-01-14T19:10:34Z | [
"improvement",
"feature"
] | CompositeByteBuf leak detector design pattern different from other ByteBuf implementations | The `CompositeByteBuf` implementation has a `ResourceLeak` member variable, while other `ByteBuf` implementations rely on the `ByteBufAllocator` (particularly `AbstractByteBufAllocator`) to wrap the object and provide simple/advanced levels of leak detection. Is there a reason why `CompositeByteBuf` needs to be different? On the surface it seems like the preferred approach is to have `CompositeByteBuf` remain consistent with the other `ByteBuf` implementations. This provides separation of concerns and also avoids maintaining an extra member in `CompositeByteBuf` which may not be used.
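The allocator-side wrapping this issue advocates can be sketched as a plain decorator, independent of Netty. All names below are illustrative assumptions, not Netty's API; the point is only that the allocator decides whether to wrap, so the buffer class itself carries no leak-tracking field:

```java
// Hypothetical sketch of allocator-driven leak detection: wrapping
// happens in one place (the allocator) for every buffer type, instead
// of each buffer holding its own leak-record member.
interface Buf {
    void release();
}

class PlainBuf implements Buf {
    public void release() { }
}

class LeakAwareBuf implements Buf {
    private final Buf wrapped;
    private final Runnable closeLeakRecord;

    LeakAwareBuf(Buf wrapped, Runnable closeLeakRecord) {
        this.wrapped = wrapped;
        this.closeLeakRecord = closeLeakRecord;
    }

    public void release() {
        wrapped.release();
        closeLeakRecord.run(); // mark the leak record closed on release
    }
}

class Allocator {
    private final boolean detectionEnabled;

    Allocator(boolean detectionEnabled) {
        this.detectionEnabled = detectionEnabled;
    }

    Buf newBuf() {
        Buf buf = new PlainBuf();
        // The single decision point: wrap only when detection is on.
        return detectionEnabled ? new LeakAwareBuf(buf, () -> { }) : buf;
    }
}
```

With this shape, a `CompositeByteBuf`-like type needs no extra member when detection is disabled, mirroring what the patch below does with `toLeakAwareBuffer(CompositeByteBuf)`.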
| [
"buffer/src/main/java/io/netty/buffer/AbstractByteBufAllocator.java",
"buffer/src/main/java/io/netty/buffer/AbstractReferenceCountedByteBuf.java",
"buffer/src/main/java/io/netty/buffer/AdvancedLeakAwareByteBuf.java",
"buffer/src/main/java/io/netty/buffer/CompositeByteBuf.java",
"buffer/src/main/java/io/nett... | [
"buffer/src/main/java/io/netty/buffer/AbstractByteBufAllocator.java",
"buffer/src/main/java/io/netty/buffer/AbstractReferenceCountedByteBuf.java",
"buffer/src/main/java/io/netty/buffer/AdvancedLeakAwareByteBuf.java",
"buffer/src/main/java/io/netty/buffer/AdvancedLeakAwareCompositeByteBuf.java",
"buffer/src/... | [] | diff --git a/buffer/src/main/java/io/netty/buffer/AbstractByteBufAllocator.java b/buffer/src/main/java/io/netty/buffer/AbstractByteBufAllocator.java
index 7cb54c1365a..50d82e2512d 100644
--- a/buffer/src/main/java/io/netty/buffer/AbstractByteBufAllocator.java
+++ b/buffer/src/main/java/io/netty/buffer/AbstractByteBufAllocator.java
@@ -26,7 +26,7 @@
*/
public abstract class AbstractByteBufAllocator implements ByteBufAllocator {
private static final int DEFAULT_INITIAL_CAPACITY = 256;
- private static final int DEFAULT_MAX_COMPONENTS = 16;
+ static final int DEFAULT_MAX_COMPONENTS = 16;
protected static ByteBuf toLeakAwareBuffer(ByteBuf buf) {
ResourceLeak leak;
@@ -50,6 +50,28 @@ protected static ByteBuf toLeakAwareBuffer(ByteBuf buf) {
return buf;
}
+ protected static CompositeByteBuf toLeakAwareBuffer(CompositeByteBuf buf) {
+ ResourceLeak leak;
+ switch (ResourceLeakDetector.getLevel()) {
+ case SIMPLE:
+ leak = AbstractByteBuf.leakDetector.open(buf);
+ if (leak != null) {
+ buf = new SimpleLeakAwareCompositeByteBuf(buf, leak);
+ }
+ break;
+ case ADVANCED:
+ case PARANOID:
+ leak = AbstractByteBuf.leakDetector.open(buf);
+ if (leak != null) {
+ buf = new AdvancedLeakAwareCompositeByteBuf(buf, leak);
+ }
+ break;
+ default:
+ break;
+ }
+ return buf;
+ }
+
private final boolean directByDefault;
private final ByteBuf emptyBuf;
@@ -180,7 +202,7 @@ public CompositeByteBuf compositeHeapBuffer() {
@Override
public CompositeByteBuf compositeHeapBuffer(int maxNumComponents) {
- return new CompositeByteBuf(this, false, maxNumComponents);
+ return toLeakAwareBuffer(new CompositeByteBuf(this, false, maxNumComponents));
}
@Override
@@ -190,7 +212,7 @@ public CompositeByteBuf compositeDirectBuffer() {
@Override
public CompositeByteBuf compositeDirectBuffer(int maxNumComponents) {
- return new CompositeByteBuf(this, true, maxNumComponents);
+ return toLeakAwareBuffer(new CompositeByteBuf(this, true, maxNumComponents));
}
private static void validate(int initialCapacity, int maxCapacity) {
diff --git a/buffer/src/main/java/io/netty/buffer/AbstractReferenceCountedByteBuf.java b/buffer/src/main/java/io/netty/buffer/AbstractReferenceCountedByteBuf.java
index c3e1f5a08b2..ff9ef25a4fb 100644
--- a/buffer/src/main/java/io/netty/buffer/AbstractReferenceCountedByteBuf.java
+++ b/buffer/src/main/java/io/netty/buffer/AbstractReferenceCountedByteBuf.java
@@ -44,7 +44,7 @@ protected AbstractReferenceCountedByteBuf(int maxCapacity) {
}
@Override
- public final int refCnt() {
+ public int refCnt() {
return refCnt;
}
@@ -104,7 +104,7 @@ public ByteBuf touch(Object hint) {
}
@Override
- public final boolean release() {
+ public boolean release() {
for (;;) {
int refCnt = this.refCnt;
if (refCnt == 0) {
@@ -122,7 +122,7 @@ public final boolean release() {
}
@Override
- public final boolean release(int decrement) {
+ public boolean release(int decrement) {
if (decrement <= 0) {
throw new IllegalArgumentException("decrement: " + decrement + " (expected: > 0)");
}
diff --git a/buffer/src/main/java/io/netty/buffer/AdvancedLeakAwareByteBuf.java b/buffer/src/main/java/io/netty/buffer/AdvancedLeakAwareByteBuf.java
index ed5b5c089c2..5e696f65d03 100644
--- a/buffer/src/main/java/io/netty/buffer/AdvancedLeakAwareByteBuf.java
+++ b/buffer/src/main/java/io/netty/buffer/AdvancedLeakAwareByteBuf.java
@@ -53,7 +53,7 @@ final class AdvancedLeakAwareByteBuf extends WrappedByteBuf {
this.leak = leak;
}
- private void recordLeakNonRefCountingOperation() {
+ static void recordLeakNonRefCountingOperation(ResourceLeak leak) {
if (!ACQUIRE_AND_RELEASE_ONLY) {
leak.record();
}
@@ -61,7 +61,7 @@ private void recordLeakNonRefCountingOperation() {
@Override
public ByteBuf order(ByteOrder endianness) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
if (order() == endianness) {
return this;
} else {
@@ -71,775 +71,775 @@ public ByteBuf order(ByteOrder endianness) {
@Override
public ByteBuf slice() {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return new AdvancedLeakAwareByteBuf(super.slice(), leak);
}
@Override
public ByteBuf slice(int index, int length) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return new AdvancedLeakAwareByteBuf(super.slice(index, length), leak);
}
@Override
public ByteBuf duplicate() {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return new AdvancedLeakAwareByteBuf(super.duplicate(), leak);
}
@Override
public ByteBuf readSlice(int length) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return new AdvancedLeakAwareByteBuf(super.readSlice(length), leak);
}
@Override
public ByteBuf discardReadBytes() {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.discardReadBytes();
}
@Override
public ByteBuf discardSomeReadBytes() {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.discardSomeReadBytes();
}
@Override
public ByteBuf ensureWritable(int minWritableBytes) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.ensureWritable(minWritableBytes);
}
@Override
public int ensureWritable(int minWritableBytes, boolean force) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.ensureWritable(minWritableBytes, force);
}
@Override
public boolean getBoolean(int index) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.getBoolean(index);
}
@Override
public byte getByte(int index) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.getByte(index);
}
@Override
public short getUnsignedByte(int index) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.getUnsignedByte(index);
}
@Override
public short getShort(int index) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.getShort(index);
}
@Override
public int getUnsignedShort(int index) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.getUnsignedShort(index);
}
@Override
public int getMedium(int index) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.getMedium(index);
}
@Override
public int getUnsignedMedium(int index) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.getUnsignedMedium(index);
}
@Override
public int getInt(int index) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.getInt(index);
}
@Override
public long getUnsignedInt(int index) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.getUnsignedInt(index);
}
@Override
public long getLong(int index) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.getLong(index);
}
@Override
public char getChar(int index) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.getChar(index);
}
@Override
public float getFloat(int index) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.getFloat(index);
}
@Override
public double getDouble(int index) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.getDouble(index);
}
@Override
public ByteBuf getBytes(int index, ByteBuf dst) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.getBytes(index, dst);
}
@Override
public ByteBuf getBytes(int index, ByteBuf dst, int length) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.getBytes(index, dst, length);
}
@Override
public ByteBuf getBytes(int index, ByteBuf dst, int dstIndex, int length) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.getBytes(index, dst, dstIndex, length);
}
@Override
public ByteBuf getBytes(int index, byte[] dst) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.getBytes(index, dst);
}
@Override
public ByteBuf getBytes(int index, byte[] dst, int dstIndex, int length) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.getBytes(index, dst, dstIndex, length);
}
@Override
public ByteBuf getBytes(int index, ByteBuffer dst) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.getBytes(index, dst);
}
@Override
public ByteBuf getBytes(int index, OutputStream out, int length) throws IOException {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.getBytes(index, out, length);
}
@Override
public int getBytes(int index, GatheringByteChannel out, int length) throws IOException {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.getBytes(index, out, length);
}
@Override
public ByteBuf setBoolean(int index, boolean value) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.setBoolean(index, value);
}
@Override
public ByteBuf setByte(int index, int value) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.setByte(index, value);
}
@Override
public ByteBuf setShort(int index, int value) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.setShort(index, value);
}
@Override
public ByteBuf setMedium(int index, int value) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.setMedium(index, value);
}
@Override
public ByteBuf setInt(int index, int value) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.setInt(index, value);
}
@Override
public ByteBuf setLong(int index, long value) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.setLong(index, value);
}
@Override
public ByteBuf setChar(int index, int value) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.setChar(index, value);
}
@Override
public ByteBuf setFloat(int index, float value) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.setFloat(index, value);
}
@Override
public ByteBuf setDouble(int index, double value) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.setDouble(index, value);
}
@Override
public ByteBuf setBytes(int index, ByteBuf src) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.setBytes(index, src);
}
@Override
public ByteBuf setBytes(int index, ByteBuf src, int length) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.setBytes(index, src, length);
}
@Override
public ByteBuf setBytes(int index, ByteBuf src, int srcIndex, int length) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.setBytes(index, src, srcIndex, length);
}
@Override
public ByteBuf setBytes(int index, byte[] src) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.setBytes(index, src);
}
@Override
public ByteBuf setBytes(int index, byte[] src, int srcIndex, int length) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.setBytes(index, src, srcIndex, length);
}
@Override
public ByteBuf setBytes(int index, ByteBuffer src) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.setBytes(index, src);
}
@Override
public int setBytes(int index, InputStream in, int length) throws IOException {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.setBytes(index, in, length);
}
@Override
public int setBytes(int index, ScatteringByteChannel in, int length) throws IOException {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.setBytes(index, in, length);
}
@Override
public ByteBuf setZero(int index, int length) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.setZero(index, length);
}
@Override
public boolean readBoolean() {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.readBoolean();
}
@Override
public byte readByte() {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.readByte();
}
@Override
public short readUnsignedByte() {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.readUnsignedByte();
}
@Override
public short readShort() {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.readShort();
}
@Override
public int readUnsignedShort() {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.readUnsignedShort();
}
@Override
public int readMedium() {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.readMedium();
}
@Override
public int readUnsignedMedium() {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.readUnsignedMedium();
}
@Override
public int readInt() {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.readInt();
}
@Override
public long readUnsignedInt() {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.readUnsignedInt();
}
@Override
public long readLong() {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.readLong();
}
@Override
public char readChar() {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.readChar();
}
@Override
public float readFloat() {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.readFloat();
}
@Override
public double readDouble() {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.readDouble();
}
@Override
public ByteBuf readBytes(int length) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.readBytes(length);
}
@Override
public ByteBuf readBytes(ByteBuf dst) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.readBytes(dst);
}
@Override
public ByteBuf readBytes(ByteBuf dst, int length) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.readBytes(dst, length);
}
@Override
public ByteBuf readBytes(ByteBuf dst, int dstIndex, int length) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.readBytes(dst, dstIndex, length);
}
@Override
public ByteBuf readBytes(byte[] dst) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.readBytes(dst);
}
@Override
public ByteBuf readBytes(byte[] dst, int dstIndex, int length) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.readBytes(dst, dstIndex, length);
}
@Override
public ByteBuf readBytes(ByteBuffer dst) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.readBytes(dst);
}
@Override
public ByteBuf readBytes(OutputStream out, int length) throws IOException {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.readBytes(out, length);
}
@Override
public int readBytes(GatheringByteChannel out, int length) throws IOException {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.readBytes(out, length);
}
@Override
public ByteBuf skipBytes(int length) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.skipBytes(length);
}
@Override
public ByteBuf writeBoolean(boolean value) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.writeBoolean(value);
}
@Override
public ByteBuf writeByte(int value) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.writeByte(value);
}
@Override
public ByteBuf writeShort(int value) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.writeShort(value);
}
@Override
public ByteBuf writeMedium(int value) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.writeMedium(value);
}
@Override
public ByteBuf writeInt(int value) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.writeInt(value);
}
@Override
public ByteBuf writeLong(long value) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.writeLong(value);
}
@Override
public ByteBuf writeChar(int value) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.writeChar(value);
}
@Override
public ByteBuf writeFloat(float value) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.writeFloat(value);
}
@Override
public ByteBuf writeDouble(double value) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.writeDouble(value);
}
@Override
public ByteBuf writeBytes(ByteBuf src) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.writeBytes(src);
}
@Override
public ByteBuf writeBytes(ByteBuf src, int length) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.writeBytes(src, length);
}
@Override
public ByteBuf writeBytes(ByteBuf src, int srcIndex, int length) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.writeBytes(src, srcIndex, length);
}
@Override
public ByteBuf writeBytes(byte[] src) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.writeBytes(src);
}
@Override
public ByteBuf writeBytes(byte[] src, int srcIndex, int length) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.writeBytes(src, srcIndex, length);
}
@Override
public ByteBuf writeBytes(ByteBuffer src) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.writeBytes(src);
}
@Override
public int writeBytes(InputStream in, int length) throws IOException {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.writeBytes(in, length);
}
@Override
public int writeBytes(ScatteringByteChannel in, int length) throws IOException {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.writeBytes(in, length);
}
@Override
public ByteBuf writeZero(int length) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.writeZero(length);
}
@Override
public int indexOf(int fromIndex, int toIndex, byte value) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.indexOf(fromIndex, toIndex, value);
}
@Override
public int bytesBefore(byte value) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.bytesBefore(value);
}
@Override
public int bytesBefore(int length, byte value) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.bytesBefore(length, value);
}
@Override
public int bytesBefore(int index, int length, byte value) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.bytesBefore(index, length, value);
}
@Override
public int forEachByte(ByteProcessor processor) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.forEachByte(processor);
}
@Override
public int forEachByte(int index, int length, ByteProcessor processor) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.forEachByte(index, length, processor);
}
@Override
public int forEachByteDesc(ByteProcessor processor) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.forEachByteDesc(processor);
}
@Override
public int forEachByteDesc(int index, int length, ByteProcessor processor) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.forEachByteDesc(index, length, processor);
}
@Override
public ByteBuf copy() {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.copy();
}
@Override
public ByteBuf copy(int index, int length) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.copy(index, length);
}
@Override
public int nioBufferCount() {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.nioBufferCount();
}
@Override
public ByteBuffer nioBuffer() {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.nioBuffer();
}
@Override
public ByteBuffer nioBuffer(int index, int length) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.nioBuffer(index, length);
}
@Override
public ByteBuffer[] nioBuffers() {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.nioBuffers();
}
@Override
public ByteBuffer[] nioBuffers(int index, int length) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.nioBuffers(index, length);
}
@Override
public ByteBuffer internalNioBuffer(int index, int length) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.internalNioBuffer(index, length);
}
@Override
public String toString(Charset charset) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.toString(charset);
}
@Override
public String toString(int index, int length, Charset charset) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.toString(index, length, charset);
}
@Override
public ByteBuf capacity(int newCapacity) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.capacity(newCapacity);
}
@Override
public short getShortLE(int index) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.getShortLE(index);
}
@Override
public int getUnsignedShortLE(int index) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.getUnsignedShortLE(index);
}
@Override
public int getMediumLE(int index) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.getMediumLE(index);
}
@Override
public int getUnsignedMediumLE(int index) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.getUnsignedMediumLE(index);
}
@Override
public int getIntLE(int index) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.getIntLE(index);
}
@Override
public long getUnsignedIntLE(int index) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.getUnsignedIntLE(index);
}
@Override
public long getLongLE(int index) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.getLongLE(index);
}
@Override
public ByteBuf setShortLE(int index, int value) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.setShortLE(index, value);
}
@Override
public ByteBuf setIntLE(int index, int value) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.setIntLE(index, value);
}
@Override
public ByteBuf setMediumLE(int index, int value) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.setMediumLE(index, value);
}
@Override
public ByteBuf setLongLE(int index, long value) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.setLongLE(index, value);
}
@Override
public short readShortLE() {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.readShortLE();
}
@Override
public int readUnsignedShortLE() {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.readUnsignedShortLE();
}
@Override
public int readMediumLE() {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.readMediumLE();
}
@Override
public int readUnsignedMediumLE() {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.readUnsignedMediumLE();
}
@Override
public int readIntLE() {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.readIntLE();
}
@Override
public long readUnsignedIntLE() {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.readUnsignedIntLE();
}
@Override
public long readLongLE() {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.readLongLE();
}
@Override
public ByteBuf writeShortLE(int value) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.writeShortLE(value);
}
@Override
public ByteBuf writeMediumLE(int value) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.writeMediumLE(value);
}
@Override
public ByteBuf writeIntLE(int value) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.writeIntLE(value);
}
@Override
public ByteBuf writeLongLE(long value) {
- recordLeakNonRefCountingOperation();
+ recordLeakNonRefCountingOperation(leak);
return super.writeLongLE(value);
}
diff --git a/buffer/src/main/java/io/netty/buffer/AdvancedLeakAwareCompositeByteBuf.java b/buffer/src/main/java/io/netty/buffer/AdvancedLeakAwareCompositeByteBuf.java
new file mode 100644
index 00000000000..284bb881d11
--- /dev/null
+++ b/buffer/src/main/java/io/netty/buffer/AdvancedLeakAwareCompositeByteBuf.java
@@ -0,0 +1,951 @@
+/*
+ * Copyright 2016 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License,
+ * version 2.0 (the "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+package io.netty.buffer;
+
+
+import io.netty.util.ByteProcessor;
+import io.netty.util.ResourceLeak;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.nio.ByteBuffer;
+import java.nio.ByteOrder;
+import java.nio.channels.GatheringByteChannel;
+import java.nio.channels.ScatteringByteChannel;
+import java.nio.charset.Charset;
+import java.util.Iterator;
+import java.util.List;
+
+import static io.netty.buffer.AdvancedLeakAwareByteBuf.recordLeakNonRefCountingOperation;
+
+final class AdvancedLeakAwareCompositeByteBuf extends WrappedCompositeByteBuf {
+
+ private final ResourceLeak leak;
+
+ AdvancedLeakAwareCompositeByteBuf(CompositeByteBuf wrapped, ResourceLeak leak) {
+ super(wrapped);
+ this.leak = leak;
+ }
+
+ @Override
+ public ByteBuf order(ByteOrder endianness) {
+ recordLeakNonRefCountingOperation(leak);
+ if (order() == endianness) {
+ return this;
+ } else {
+ return new AdvancedLeakAwareByteBuf(super.order(endianness), leak);
+ }
+ }
+
+ @Override
+ public ByteBuf slice() {
+ recordLeakNonRefCountingOperation(leak);
+ return new AdvancedLeakAwareByteBuf(super.slice(), leak);
+ }
+
+ @Override
+ public ByteBuf slice(int index, int length) {
+ recordLeakNonRefCountingOperation(leak);
+ return new AdvancedLeakAwareByteBuf(super.slice(index, length), leak);
+ }
+
+ @Override
+ public ByteBuf duplicate() {
+ recordLeakNonRefCountingOperation(leak);
+ return new AdvancedLeakAwareByteBuf(super.duplicate(), leak);
+ }
+
+ @Override
+ public ByteBuf readSlice(int length) {
+ recordLeakNonRefCountingOperation(leak);
+ return new AdvancedLeakAwareByteBuf(super.readSlice(length), leak);
+ }
+
+ @Override
+ public CompositeByteBuf discardReadBytes() {
+ recordLeakNonRefCountingOperation(leak);
+ return super.discardReadBytes();
+ }
+
+ @Override
+ public CompositeByteBuf discardSomeReadBytes() {
+ recordLeakNonRefCountingOperation(leak);
+ return super.discardSomeReadBytes();
+ }
+
+ @Override
+ public CompositeByteBuf ensureWritable(int minWritableBytes) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.ensureWritable(minWritableBytes);
+ }
+
+ @Override
+ public int ensureWritable(int minWritableBytes, boolean force) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.ensureWritable(minWritableBytes, force);
+ }
+
+ @Override
+ public boolean getBoolean(int index) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.getBoolean(index);
+ }
+
+ @Override
+ public byte getByte(int index) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.getByte(index);
+ }
+
+ @Override
+ public short getUnsignedByte(int index) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.getUnsignedByte(index);
+ }
+
+ @Override
+ public short getShort(int index) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.getShort(index);
+ }
+
+ @Override
+ public int getUnsignedShort(int index) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.getUnsignedShort(index);
+ }
+
+ @Override
+ public int getMedium(int index) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.getMedium(index);
+ }
+
+ @Override
+ public int getUnsignedMedium(int index) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.getUnsignedMedium(index);
+ }
+
+ @Override
+ public int getInt(int index) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.getInt(index);
+ }
+
+ @Override
+ public long getUnsignedInt(int index) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.getUnsignedInt(index);
+ }
+
+ @Override
+ public long getLong(int index) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.getLong(index);
+ }
+
+ @Override
+ public char getChar(int index) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.getChar(index);
+ }
+
+ @Override
+ public float getFloat(int index) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.getFloat(index);
+ }
+
+ @Override
+ public double getDouble(int index) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.getDouble(index);
+ }
+
+ @Override
+ public CompositeByteBuf getBytes(int index, ByteBuf dst) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.getBytes(index, dst);
+ }
+
+ @Override
+ public CompositeByteBuf getBytes(int index, ByteBuf dst, int length) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.getBytes(index, dst, length);
+ }
+
+ @Override
+ public CompositeByteBuf getBytes(int index, ByteBuf dst, int dstIndex, int length) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.getBytes(index, dst, dstIndex, length);
+ }
+
+ @Override
+ public CompositeByteBuf getBytes(int index, byte[] dst) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.getBytes(index, dst);
+ }
+
+ @Override
+ public CompositeByteBuf getBytes(int index, byte[] dst, int dstIndex, int length) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.getBytes(index, dst, dstIndex, length);
+ }
+
+ @Override
+ public CompositeByteBuf getBytes(int index, ByteBuffer dst) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.getBytes(index, dst);
+ }
+
+ @Override
+ public CompositeByteBuf getBytes(int index, OutputStream out, int length) throws IOException {
+ recordLeakNonRefCountingOperation(leak);
+ return super.getBytes(index, out, length);
+ }
+
+ @Override
+ public int getBytes(int index, GatheringByteChannel out, int length) throws IOException {
+ recordLeakNonRefCountingOperation(leak);
+ return super.getBytes(index, out, length);
+ }
+
+ @Override
+ public CompositeByteBuf setBoolean(int index, boolean value) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.setBoolean(index, value);
+ }
+
+ @Override
+ public CompositeByteBuf setByte(int index, int value) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.setByte(index, value);
+ }
+
+ @Override
+ public CompositeByteBuf setShort(int index, int value) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.setShort(index, value);
+ }
+
+ @Override
+ public CompositeByteBuf setMedium(int index, int value) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.setMedium(index, value);
+ }
+
+ @Override
+ public CompositeByteBuf setInt(int index, int value) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.setInt(index, value);
+ }
+
+ @Override
+ public CompositeByteBuf setLong(int index, long value) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.setLong(index, value);
+ }
+
+ @Override
+ public CompositeByteBuf setChar(int index, int value) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.setChar(index, value);
+ }
+
+ @Override
+ public CompositeByteBuf setFloat(int index, float value) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.setFloat(index, value);
+ }
+
+ @Override
+ public CompositeByteBuf setDouble(int index, double value) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.setDouble(index, value);
+ }
+
+ @Override
+ public CompositeByteBuf setBytes(int index, ByteBuf src) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.setBytes(index, src);
+ }
+
+ @Override
+ public CompositeByteBuf setBytes(int index, ByteBuf src, int length) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.setBytes(index, src, length);
+ }
+
+ @Override
+ public CompositeByteBuf setBytes(int index, ByteBuf src, int srcIndex, int length) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.setBytes(index, src, srcIndex, length);
+ }
+
+ @Override
+ public CompositeByteBuf setBytes(int index, byte[] src) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.setBytes(index, src);
+ }
+
+ @Override
+ public CompositeByteBuf setBytes(int index, byte[] src, int srcIndex, int length) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.setBytes(index, src, srcIndex, length);
+ }
+
+ @Override
+ public CompositeByteBuf setBytes(int index, ByteBuffer src) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.setBytes(index, src);
+ }
+
+ @Override
+ public int setBytes(int index, InputStream in, int length) throws IOException {
+ recordLeakNonRefCountingOperation(leak);
+ return super.setBytes(index, in, length);
+ }
+
+ @Override
+ public int setBytes(int index, ScatteringByteChannel in, int length) throws IOException {
+ recordLeakNonRefCountingOperation(leak);
+ return super.setBytes(index, in, length);
+ }
+
+ @Override
+ public CompositeByteBuf setZero(int index, int length) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.setZero(index, length);
+ }
+
+ @Override
+ public boolean readBoolean() {
+ recordLeakNonRefCountingOperation(leak);
+ return super.readBoolean();
+ }
+
+ @Override
+ public byte readByte() {
+ recordLeakNonRefCountingOperation(leak);
+ return super.readByte();
+ }
+
+ @Override
+ public short readUnsignedByte() {
+ recordLeakNonRefCountingOperation(leak);
+ return super.readUnsignedByte();
+ }
+
+ @Override
+ public short readShort() {
+ recordLeakNonRefCountingOperation(leak);
+ return super.readShort();
+ }
+
+ @Override
+ public int readUnsignedShort() {
+ recordLeakNonRefCountingOperation(leak);
+ return super.readUnsignedShort();
+ }
+
+ @Override
+ public int readMedium() {
+ recordLeakNonRefCountingOperation(leak);
+ return super.readMedium();
+ }
+
+ @Override
+ public int readUnsignedMedium() {
+ recordLeakNonRefCountingOperation(leak);
+ return super.readUnsignedMedium();
+ }
+
+ @Override
+ public int readInt() {
+ recordLeakNonRefCountingOperation(leak);
+ return super.readInt();
+ }
+
+ @Override
+ public long readUnsignedInt() {
+ recordLeakNonRefCountingOperation(leak);
+ return super.readUnsignedInt();
+ }
+
+ @Override
+ public long readLong() {
+ recordLeakNonRefCountingOperation(leak);
+ return super.readLong();
+ }
+
+ @Override
+ public char readChar() {
+ recordLeakNonRefCountingOperation(leak);
+ return super.readChar();
+ }
+
+ @Override
+ public float readFloat() {
+ recordLeakNonRefCountingOperation(leak);
+ return super.readFloat();
+ }
+
+ @Override
+ public double readDouble() {
+ recordLeakNonRefCountingOperation(leak);
+ return super.readDouble();
+ }
+
+ @Override
+ public ByteBuf readBytes(int length) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.readBytes(length);
+ }
+
+ @Override
+ public CompositeByteBuf readBytes(ByteBuf dst) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.readBytes(dst);
+ }
+
+ @Override
+ public CompositeByteBuf readBytes(ByteBuf dst, int length) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.readBytes(dst, length);
+ }
+
+ @Override
+ public CompositeByteBuf readBytes(ByteBuf dst, int dstIndex, int length) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.readBytes(dst, dstIndex, length);
+ }
+
+ @Override
+ public CompositeByteBuf readBytes(byte[] dst) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.readBytes(dst);
+ }
+
+ @Override
+ public CompositeByteBuf readBytes(byte[] dst, int dstIndex, int length) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.readBytes(dst, dstIndex, length);
+ }
+
+ @Override
+ public CompositeByteBuf readBytes(ByteBuffer dst) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.readBytes(dst);
+ }
+
+ @Override
+ public CompositeByteBuf readBytes(OutputStream out, int length) throws IOException {
+ recordLeakNonRefCountingOperation(leak);
+ return super.readBytes(out, length);
+ }
+
+ @Override
+ public int readBytes(GatheringByteChannel out, int length) throws IOException {
+ recordLeakNonRefCountingOperation(leak);
+ return super.readBytes(out, length);
+ }
+
+ @Override
+ public CompositeByteBuf skipBytes(int length) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.skipBytes(length);
+ }
+
+ @Override
+ public CompositeByteBuf writeBoolean(boolean value) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.writeBoolean(value);
+ }
+
+ @Override
+ public CompositeByteBuf writeByte(int value) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.writeByte(value);
+ }
+
+ @Override
+ public CompositeByteBuf writeShort(int value) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.writeShort(value);
+ }
+
+ @Override
+ public CompositeByteBuf writeMedium(int value) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.writeMedium(value);
+ }
+
+ @Override
+ public CompositeByteBuf writeInt(int value) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.writeInt(value);
+ }
+
+ @Override
+ public CompositeByteBuf writeLong(long value) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.writeLong(value);
+ }
+
+ @Override
+ public CompositeByteBuf writeChar(int value) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.writeChar(value);
+ }
+
+ @Override
+ public CompositeByteBuf writeFloat(float value) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.writeFloat(value);
+ }
+
+ @Override
+ public CompositeByteBuf writeDouble(double value) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.writeDouble(value);
+ }
+
+ @Override
+ public CompositeByteBuf writeBytes(ByteBuf src) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.writeBytes(src);
+ }
+
+ @Override
+ public CompositeByteBuf writeBytes(ByteBuf src, int length) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.writeBytes(src, length);
+ }
+
+ @Override
+ public CompositeByteBuf writeBytes(ByteBuf src, int srcIndex, int length) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.writeBytes(src, srcIndex, length);
+ }
+
+ @Override
+ public CompositeByteBuf writeBytes(byte[] src) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.writeBytes(src);
+ }
+
+ @Override
+ public CompositeByteBuf writeBytes(byte[] src, int srcIndex, int length) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.writeBytes(src, srcIndex, length);
+ }
+
+ @Override
+ public CompositeByteBuf writeBytes(ByteBuffer src) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.writeBytes(src);
+ }
+
+ @Override
+ public int writeBytes(InputStream in, int length) throws IOException {
+ recordLeakNonRefCountingOperation(leak);
+ return super.writeBytes(in, length);
+ }
+
+ @Override
+ public int writeBytes(ScatteringByteChannel in, int length) throws IOException {
+ recordLeakNonRefCountingOperation(leak);
+ return super.writeBytes(in, length);
+ }
+
+ @Override
+ public CompositeByteBuf writeZero(int length) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.writeZero(length);
+ }
+
+ @Override
+ public int indexOf(int fromIndex, int toIndex, byte value) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.indexOf(fromIndex, toIndex, value);
+ }
+
+ @Override
+ public int bytesBefore(byte value) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.bytesBefore(value);
+ }
+
+ @Override
+ public int bytesBefore(int length, byte value) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.bytesBefore(length, value);
+ }
+
+ @Override
+ public int bytesBefore(int index, int length, byte value) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.bytesBefore(index, length, value);
+ }
+
+ @Override
+ public int forEachByte(ByteProcessor processor) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.forEachByte(processor);
+ }
+
+ @Override
+ public int forEachByte(int index, int length, ByteProcessor processor) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.forEachByte(index, length, processor);
+ }
+
+ @Override
+ public int forEachByteDesc(ByteProcessor processor) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.forEachByteDesc(processor);
+ }
+
+ @Override
+ public int forEachByteDesc(int index, int length, ByteProcessor processor) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.forEachByteDesc(index, length, processor);
+ }
+
+ @Override
+ public ByteBuf copy() {
+ recordLeakNonRefCountingOperation(leak);
+ return super.copy();
+ }
+
+ @Override
+ public ByteBuf copy(int index, int length) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.copy(index, length);
+ }
+
+ @Override
+ public int nioBufferCount() {
+ recordLeakNonRefCountingOperation(leak);
+ return super.nioBufferCount();
+ }
+
+ @Override
+ public ByteBuffer nioBuffer() {
+ recordLeakNonRefCountingOperation(leak);
+ return super.nioBuffer();
+ }
+
+ @Override
+ public ByteBuffer nioBuffer(int index, int length) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.nioBuffer(index, length);
+ }
+
+ @Override
+ public ByteBuffer[] nioBuffers() {
+ recordLeakNonRefCountingOperation(leak);
+ return super.nioBuffers();
+ }
+
+ @Override
+ public ByteBuffer[] nioBuffers(int index, int length) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.nioBuffers(index, length);
+ }
+
+ @Override
+ public ByteBuffer internalNioBuffer(int index, int length) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.internalNioBuffer(index, length);
+ }
+
+ @Override
+ public String toString(Charset charset) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.toString(charset);
+ }
+
+ @Override
+ public String toString(int index, int length, Charset charset) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.toString(index, length, charset);
+ }
+
+ @Override
+ public CompositeByteBuf capacity(int newCapacity) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.capacity(newCapacity);
+ }
+
+ @Override
+ public short getShortLE(int index) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.getShortLE(index);
+ }
+
+ @Override
+ public int getUnsignedShortLE(int index) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.getUnsignedShortLE(index);
+ }
+
+ @Override
+ public int getUnsignedMediumLE(int index) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.getUnsignedMediumLE(index);
+ }
+
+ @Override
+ public int getMediumLE(int index) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.getMediumLE(index);
+ }
+
+ @Override
+ public int getIntLE(int index) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.getIntLE(index);
+ }
+
+ @Override
+ public long getUnsignedIntLE(int index) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.getUnsignedIntLE(index);
+ }
+
+ @Override
+ public long getLongLE(int index) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.getLongLE(index);
+ }
+
+ @Override
+ public ByteBuf setShortLE(int index, int value) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.setShortLE(index, value);
+ }
+
+ @Override
+ public ByteBuf setMediumLE(int index, int value) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.setMediumLE(index, value);
+ }
+
+ @Override
+ public ByteBuf setIntLE(int index, int value) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.setIntLE(index, value);
+ }
+
+ @Override
+ public ByteBuf setLongLE(int index, long value) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.setLongLE(index, value);
+ }
+
+ @Override
+ public short readShortLE() {
+ recordLeakNonRefCountingOperation(leak);
+ return super.readShortLE();
+ }
+
+ @Override
+ public int readUnsignedShortLE() {
+ recordLeakNonRefCountingOperation(leak);
+ return super.readUnsignedShortLE();
+ }
+
+ @Override
+ public int readMediumLE() {
+ recordLeakNonRefCountingOperation(leak);
+ return super.readMediumLE();
+ }
+
+ @Override
+ public int readUnsignedMediumLE() {
+ recordLeakNonRefCountingOperation(leak);
+ return super.readUnsignedMediumLE();
+ }
+
+ @Override
+ public int readIntLE() {
+ recordLeakNonRefCountingOperation(leak);
+ return super.readIntLE();
+ }
+
+ @Override
+ public long readUnsignedIntLE() {
+ recordLeakNonRefCountingOperation(leak);
+ return super.readUnsignedIntLE();
+ }
+
+ @Override
+ public long readLongLE() {
+ recordLeakNonRefCountingOperation(leak);
+ return super.readLongLE();
+ }
+
+ @Override
+ public ByteBuf writeShortLE(int value) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.writeShortLE(value);
+ }
+
+ @Override
+ public ByteBuf writeMediumLE(int value) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.writeMediumLE(value);
+ }
+
+ @Override
+ public ByteBuf writeIntLE(int value) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.writeIntLE(value);
+ }
+
+ @Override
+ public ByteBuf writeLongLE(long value) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.writeLongLE(value);
+ }
+
+ @Override
+ public CompositeByteBuf addComponent(ByteBuf buffer) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.addComponent(buffer);
+ }
+
+ @Override
+ public CompositeByteBuf addComponents(ByteBuf... buffers) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.addComponents(buffers);
+ }
+
+ @Override
+ public CompositeByteBuf addComponents(Iterable<ByteBuf> buffers) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.addComponents(buffers);
+ }
+
+ @Override
+ public CompositeByteBuf addComponent(int cIndex, ByteBuf buffer) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.addComponent(cIndex, buffer);
+ }
+
+ @Override
+ public CompositeByteBuf addComponents(int cIndex, ByteBuf... buffers) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.addComponents(cIndex, buffers);
+ }
+
+ @Override
+ public CompositeByteBuf removeComponent(int cIndex) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.removeComponent(cIndex);
+ }
+
+ @Override
+ public CompositeByteBuf addComponents(int cIndex, Iterable<ByteBuf> buffers) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.addComponents(cIndex, buffers);
+ }
+
+ @Override
+ public CompositeByteBuf removeComponents(int cIndex, int numComponents) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.removeComponents(cIndex, numComponents);
+ }
+
+ @Override
+ public Iterator<ByteBuf> iterator() {
+ recordLeakNonRefCountingOperation(leak);
+ return super.iterator();
+ }
+
+ @Override
+ public List<ByteBuf> decompose(int offset, int length) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.decompose(offset, length);
+ }
+
+ @Override
+ public CompositeByteBuf consolidate() {
+ recordLeakNonRefCountingOperation(leak);
+ return super.consolidate();
+ }
+
+ @Override
+ public CompositeByteBuf discardReadComponents() {
+ recordLeakNonRefCountingOperation(leak);
+ return super.discardReadComponents();
+ }
+
+ @Override
+ public CompositeByteBuf consolidate(int cIndex, int numComponents) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.consolidate(cIndex, numComponents);
+ }
+
+ @Override
+ public CompositeByteBuf retain() {
+ leak.record();
+ return super.retain();
+ }
+
+ @Override
+ public CompositeByteBuf retain(int increment) {
+ leak.record();
+ return super.retain(increment);
+ }
+
+ @Override
+ public CompositeByteBuf touch() {
+ leak.record();
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf touch(Object hint) {
+ leak.record(hint);
+ return this;
+ }
+
+ @Override
+ public boolean release() {
+ boolean deallocated = super.release();
+ if (deallocated) {
+ leak.close();
+ } else {
+ leak.record();
+ }
+ return deallocated;
+ }
+
+ @Override
+ public boolean release(int decrement) {
+ boolean deallocated = super.release(decrement);
+ if (deallocated) {
+ leak.close();
+ } else {
+ leak.record();
+ }
+ return deallocated;
+ }
+}
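The file above is a decorator: every non-ref-counting operation first records an access on the `ResourceLeak` tracker, then delegates to the wrapped buffer. A minimal self-contained sketch of that pattern, using hypothetical stand-in types rather than Netty's real `ByteBuf`/`ResourceLeak` classes:

```java
import java.util.ArrayList;
import java.util.List;

public class LeakDecoratorSketch {
    // Hypothetical stand-in for io.netty.util.ResourceLeak: just logs accesses.
    static final class LeakRecord {
        final List<String> accesses = new ArrayList<String>();
        void record(String op) { accesses.add(op); }
    }

    interface Buf { int readInt(); }

    static final class PlainBuf implements Buf {
        public int readInt() { return 42; }
    }

    // Decorator: records the access before delegating, mirroring the
    // recordLeakNonRefCountingOperation(leak) calls in the patch above.
    static final class LeakAwareBuf implements Buf {
        private final Buf wrapped;
        private final LeakRecord leak;
        LeakAwareBuf(Buf wrapped, LeakRecord leak) {
            this.wrapped = wrapped;
            this.leak = leak;
        }
        public int readInt() {
            leak.record("readInt");
            return wrapped.readInt();
        }
    }

    public static void main(String[] args) {
        LeakRecord leak = new LeakRecord();
        Buf buf = new LeakAwareBuf(new PlainBuf(), leak);
        System.out.println(buf.readInt());   // 42
        System.out.println(leak.accesses);   // [readInt]
    }
}
```

The advanced variant pays this bookkeeping cost on every call, which is why it is only used at the paranoid leak-detection level.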
diff --git a/buffer/src/main/java/io/netty/buffer/CompositeByteBuf.java b/buffer/src/main/java/io/netty/buffer/CompositeByteBuf.java
index d48a4b475e3..262b229c327 100644
--- a/buffer/src/main/java/io/netty/buffer/CompositeByteBuf.java
+++ b/buffer/src/main/java/io/netty/buffer/CompositeByteBuf.java
@@ -15,7 +15,6 @@
*/
package io.netty.buffer;
-import io.netty.util.ResourceLeak;
import io.netty.util.internal.EmptyArrays;
import java.io.IOException;
@@ -44,10 +43,9 @@ public class CompositeByteBuf extends AbstractReferenceCountedByteBuf implements
private static final ByteBuffer EMPTY_NIO_BUFFER = Unpooled.EMPTY_BUFFER.nioBuffer();
private static final Iterator<ByteBuf> EMPTY_ITERATOR = Collections.<ByteBuf>emptyList().iterator();
- private final ResourceLeak leak;
private final ByteBufAllocator alloc;
private final boolean direct;
- private final List<Component> components = new ArrayList<Component>();
+ private final List<Component> components;
private final int maxNumComponents;
private boolean freed;
@@ -60,7 +58,7 @@ public CompositeByteBuf(ByteBufAllocator alloc, boolean direct, int maxNumCompon
this.alloc = alloc;
this.direct = direct;
this.maxNumComponents = maxNumComponents;
- leak = leakDetector.open(this);
+ components = newList(maxNumComponents);
}
public CompositeByteBuf(ByteBufAllocator alloc, boolean direct, int maxNumComponents, ByteBuf... buffers) {
@@ -76,11 +74,11 @@ public CompositeByteBuf(ByteBufAllocator alloc, boolean direct, int maxNumCompon
this.alloc = alloc;
this.direct = direct;
this.maxNumComponents = maxNumComponents;
+ components = newList(maxNumComponents);
addComponents0(0, buffers);
consolidateIfNeeded();
setIndex(0, capacity());
- leak = leakDetector.open(this);
}
public CompositeByteBuf(
@@ -97,10 +95,24 @@ public CompositeByteBuf(
this.alloc = alloc;
this.direct = direct;
this.maxNumComponents = maxNumComponents;
+ components = newList(maxNumComponents);
+
addComponents0(0, buffers);
consolidateIfNeeded();
setIndex(0, capacity());
- leak = leakDetector.open(this);
+ }
+
+ private static List<Component> newList(int maxNumComponents) {
+ return new ArrayList<Component>(Math.min(AbstractByteBufAllocator.DEFAULT_MAX_COMPONENTS, maxNumComponents));
+ }
+
+ // Special constructor used by WrappedCompositeByteBuf
+ CompositeByteBuf(ByteBufAllocator alloc) {
+ super(Integer.MAX_VALUE);
+ this.alloc = alloc;
+ direct = false;
+ maxNumComponents = 0;
+ components = Collections.emptyList();
}
/**
@@ -1707,17 +1719,11 @@ public CompositeByteBuf retain() {
@Override
public CompositeByteBuf touch() {
- if (leak != null) {
- leak.record();
- }
return this;
}
@Override
public CompositeByteBuf touch(Object hint) {
- if (leak != null) {
- leak.record(hint);
- }
return this;
}
@@ -1744,10 +1750,6 @@ protected void deallocate() {
for (int i = 0; i < size; i++) {
components.get(i).freeIfNecessary();
}
-
- if (leak != null) {
- leak.close();
- }
}
@Override
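The `newList` helper introduced above caps the backing `ArrayList`'s initial capacity at `min(DEFAULT_MAX_COMPONENTS, maxNumComponents)`, so a caller passing a very large `maxNumComponents` does not pre-allocate a correspondingly large array. A rough sketch of that sizing logic (the constant value 16 is an assumption here for illustration; the real value lives in `AbstractByteBufAllocator.DEFAULT_MAX_COMPONENTS`):

```java
import java.util.ArrayList;
import java.util.List;

public class NewListSketch {
    // Assumed value; the actual constant is
    // AbstractByteBufAllocator.DEFAULT_MAX_COMPONENTS in Netty.
    static final int DEFAULT_MAX_COMPONENTS = 16;

    // Mirrors CompositeByteBuf.newList(): cap the initial capacity so that
    // e.g. maxNumComponents == Integer.MAX_VALUE allocates a small array.
    static <T> List<T> newList(int maxNumComponents) {
        return new ArrayList<T>(Math.min(DEFAULT_MAX_COMPONENTS, maxNumComponents));
    }

    public static void main(String[] args) {
        List<String> small = newList(4);                  // initial capacity 4
        List<String> capped = newList(Integer.MAX_VALUE); // capped at 16
        small.add("a");
        capped.add("b");
        System.out.println(small.size() + " " + capped.size());
    }
}
```

The list still grows on demand past the initial capacity; only the up-front allocation is bounded.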
diff --git a/buffer/src/main/java/io/netty/buffer/SimpleLeakAwareCompositeByteBuf.java b/buffer/src/main/java/io/netty/buffer/SimpleLeakAwareCompositeByteBuf.java
new file mode 100644
index 00000000000..bede44fee5d
--- /dev/null
+++ b/buffer/src/main/java/io/netty/buffer/SimpleLeakAwareCompositeByteBuf.java
@@ -0,0 +1,79 @@
+/*
+ * Copyright 2016 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License,
+ * version 2.0 (the "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+package io.netty.buffer;
+
+
+import io.netty.util.ResourceLeak;
+
+import java.nio.ByteOrder;
+
+final class SimpleLeakAwareCompositeByteBuf extends WrappedCompositeByteBuf {
+
+ private final ResourceLeak leak;
+
+ SimpleLeakAwareCompositeByteBuf(CompositeByteBuf wrapped, ResourceLeak leak) {
+ super(wrapped);
+ this.leak = leak;
+ }
+
+ @Override
+ public boolean release() {
+ boolean deallocated = super.release();
+ if (deallocated) {
+ leak.close();
+ }
+ return deallocated;
+ }
+
+ @Override
+ public boolean release(int decrement) {
+ boolean deallocated = super.release(decrement);
+ if (deallocated) {
+ leak.close();
+ }
+ return deallocated;
+ }
+
+ @Override
+ public ByteBuf order(ByteOrder endianness) {
+ leak.record();
+ if (order() == endianness) {
+ return this;
+ } else {
+ return new SimpleLeakAwareByteBuf(super.order(endianness), leak);
+ }
+ }
+
+ @Override
+ public ByteBuf slice() {
+ return new SimpleLeakAwareByteBuf(super.slice(), leak);
+ }
+
+ @Override
+ public ByteBuf slice(int index, int length) {
+ return new SimpleLeakAwareByteBuf(super.slice(index, length), leak);
+ }
+
+ @Override
+ public ByteBuf duplicate() {
+ return new SimpleLeakAwareByteBuf(super.duplicate(), leak);
+ }
+
+ @Override
+ public ByteBuf readSlice(int length) {
+ return new SimpleLeakAwareByteBuf(super.readSlice(length), leak);
+ }
+}
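Unlike the advanced variant, the simple wrapper above only touches the leak tracker on `release()`/`order()`: the leak record is closed exactly when the final reference is dropped. A minimal sketch of that close-on-deallocation behavior, again with hypothetical stand-in types rather than Netty's real classes:

```java
public class SimpleLeakSketch {
    // Hypothetical stand-in for io.netty.util.ResourceLeak.
    static final class LeakRecord {
        boolean closed;
        void close() { closed = true; }
    }

    // Mirrors SimpleLeakAwareCompositeByteBuf.release(): close the leak
    // record only when the buffer is actually deallocated.
    static final class RefCounted {
        private int refCnt = 1;
        private final LeakRecord leak;
        RefCounted(LeakRecord leak) { this.leak = leak; }
        RefCounted retain() { refCnt++; return this; }
        boolean release() {
            boolean deallocated = --refCnt == 0;
            if (deallocated) {
                leak.close();
            }
            return deallocated;
        }
    }

    public static void main(String[] args) {
        LeakRecord leak = new LeakRecord();
        RefCounted buf = new RefCounted(leak).retain();
        System.out.println(buf.release()); // false: one reference remains
        System.out.println(buf.release()); // true: deallocated, leak closed
        System.out.println(leak.closed);   // true
    }
}
```

This is why the simple level is cheap enough to enable by default: the per-operation recording is skipped, and only derived buffers (`slice`, `duplicate`, `readSlice`) get re-wrapped so they share the same leak record.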
diff --git a/buffer/src/main/java/io/netty/buffer/Unpooled.java b/buffer/src/main/java/io/netty/buffer/Unpooled.java
index 10baa6be790..0532c0c6a11 100644
--- a/buffer/src/main/java/io/netty/buffer/Unpooled.java
+++ b/buffer/src/main/java/io/netty/buffer/Unpooled.java
@@ -230,7 +230,7 @@ public static ByteBuf wrappedBuffer(ByteBuf buffer) {
* content will be visible to the returned buffer.
*/
public static ByteBuf wrappedBuffer(byte[]... arrays) {
- return wrappedBuffer(16, arrays);
+ return wrappedBuffer(AbstractByteBufAllocator.DEFAULT_MAX_COMPONENTS, arrays);
}
/**
@@ -241,7 +241,7 @@ public static ByteBuf wrappedBuffer(byte[]... arrays) {
* @return The readable portion of the {@code buffers}. The caller is responsible for releasing this buffer.
*/
public static ByteBuf wrappedBuffer(ByteBuf... buffers) {
- return wrappedBuffer(16, buffers);
+ return wrappedBuffer(AbstractByteBufAllocator.DEFAULT_MAX_COMPONENTS, buffers);
}
/**
@@ -250,7 +250,7 @@ public static ByteBuf wrappedBuffer(ByteBuf... buffers) {
* specified buffers will be visible to the returned buffer.
*/
public static ByteBuf wrappedBuffer(ByteBuffer... buffers) {
- return wrappedBuffer(16, buffers);
+ return wrappedBuffer(AbstractByteBufAllocator.DEFAULT_MAX_COMPONENTS, buffers);
}
/**
@@ -358,7 +358,7 @@ public static ByteBuf wrappedBuffer(int maxNumComponents, ByteBuffer... buffers)
* Returns a new big-endian composite buffer with no components.
*/
public static CompositeByteBuf compositeBuffer() {
- return compositeBuffer(16);
+ return compositeBuffer(AbstractByteBufAllocator.DEFAULT_MAX_COMPONENTS);
}
/**
diff --git a/buffer/src/main/java/io/netty/buffer/WrappedCompositeByteBuf.java b/buffer/src/main/java/io/netty/buffer/WrappedCompositeByteBuf.java
new file mode 100644
index 00000000000..029d9c1372a
--- /dev/null
+++ b/buffer/src/main/java/io/netty/buffer/WrappedCompositeByteBuf.java
@@ -0,0 +1,1158 @@
+/*
+ * Copyright 2016 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License,
+ * version 2.0 (the "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+package io.netty.buffer;
+
+import io.netty.util.ByteProcessor;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.nio.ByteBuffer;
+import java.nio.ByteOrder;
+import java.nio.channels.GatheringByteChannel;
+import java.nio.channels.ScatteringByteChannel;
+import java.nio.charset.Charset;
+import java.util.Iterator;
+import java.util.List;
+
+class WrappedCompositeByteBuf extends CompositeByteBuf {
+
+ private final CompositeByteBuf wrapped;
+
+ WrappedCompositeByteBuf(CompositeByteBuf wrapped) {
+ super(wrapped.alloc());
+ this.wrapped = wrapped;
+ }
+
+ @Override
+ public boolean release() {
+ return wrapped.release();
+ }
+
+ @Override
+ public boolean release(int decrement) {
+ return wrapped.release(decrement);
+ }
+
+ @Override
+ public final int maxCapacity() {
+ return wrapped.maxCapacity();
+ }
+
+ @Override
+ public final int readerIndex() {
+ return wrapped.readerIndex();
+ }
+
+ @Override
+ public final int writerIndex() {
+ return wrapped.writerIndex();
+ }
+
+ @Override
+ public final boolean isReadable() {
+ return wrapped.isReadable();
+ }
+
+ @Override
+ public final boolean isReadable(int numBytes) {
+ return wrapped.isReadable(numBytes);
+ }
+
+ @Override
+ public final boolean isWritable() {
+ return wrapped.isWritable();
+ }
+
+ @Override
+ public final boolean isWritable(int numBytes) {
+ return wrapped.isWritable(numBytes);
+ }
+
+ @Override
+ public final int readableBytes() {
+ return wrapped.readableBytes();
+ }
+
+ @Override
+ public final int writableBytes() {
+ return wrapped.writableBytes();
+ }
+
+ @Override
+ public final int maxWritableBytes() {
+ return wrapped.maxWritableBytes();
+ }
+
+ @Override
+ public int ensureWritable(int minWritableBytes, boolean force) {
+ return wrapped.ensureWritable(minWritableBytes, force);
+ }
+
+ @Override
+ public ByteBuf order(ByteOrder endianness) {
+ return wrapped.order(endianness);
+ }
+
+ @Override
+ public boolean getBoolean(int index) {
+ return wrapped.getBoolean(index);
+ }
+
+ @Override
+ public short getUnsignedByte(int index) {
+ return wrapped.getUnsignedByte(index);
+ }
+
+ @Override
+ public short getShort(int index) {
+ return wrapped.getShort(index);
+ }
+
+ @Override
+ public short getShortLE(int index) {
+ return wrapped.getShortLE(index);
+ }
+
+ @Override
+ public int getUnsignedShort(int index) {
+ return wrapped.getUnsignedShort(index);
+ }
+
+ @Override
+ public int getUnsignedShortLE(int index) {
+ return wrapped.getUnsignedShortLE(index);
+ }
+
+ @Override
+ public int getUnsignedMedium(int index) {
+ return wrapped.getUnsignedMedium(index);
+ }
+
+ @Override
+ public int getUnsignedMediumLE(int index) {
+ return wrapped.getUnsignedMediumLE(index);
+ }
+
+ @Override
+ public int getMedium(int index) {
+ return wrapped.getMedium(index);
+ }
+
+ @Override
+ public int getMediumLE(int index) {
+ return wrapped.getMediumLE(index);
+ }
+
+ @Override
+ public int getInt(int index) {
+ return wrapped.getInt(index);
+ }
+
+ @Override
+ public int getIntLE(int index) {
+ return wrapped.getIntLE(index);
+ }
+
+ @Override
+ public long getUnsignedInt(int index) {
+ return wrapped.getUnsignedInt(index);
+ }
+
+ @Override
+ public long getUnsignedIntLE(int index) {
+ return wrapped.getUnsignedIntLE(index);
+ }
+
+ @Override
+ public long getLong(int index) {
+ return wrapped.getLong(index);
+ }
+
+ @Override
+ public long getLongLE(int index) {
+ return wrapped.getLongLE(index);
+ }
+
+ @Override
+ public char getChar(int index) {
+ return wrapped.getChar(index);
+ }
+
+ @Override
+ public float getFloat(int index) {
+ return wrapped.getFloat(index);
+ }
+
+ @Override
+ public double getDouble(int index) {
+ return wrapped.getDouble(index);
+ }
+
+ @Override
+ public ByteBuf setShortLE(int index, int value) {
+ return wrapped.setShortLE(index, value);
+ }
+
+ @Override
+ public ByteBuf setMediumLE(int index, int value) {
+ return wrapped.setMediumLE(index, value);
+ }
+
+ @Override
+ public ByteBuf setIntLE(int index, int value) {
+ return wrapped.setIntLE(index, value);
+ }
+
+ @Override
+ public ByteBuf setLongLE(int index, long value) {
+ return wrapped.setLongLE(index, value);
+ }
+
+ @Override
+ public byte readByte() {
+ return wrapped.readByte();
+ }
+
+ @Override
+ public boolean readBoolean() {
+ return wrapped.readBoolean();
+ }
+
+ @Override
+ public short readUnsignedByte() {
+ return wrapped.readUnsignedByte();
+ }
+
+ @Override
+ public short readShort() {
+ return wrapped.readShort();
+ }
+
+ @Override
+ public short readShortLE() {
+ return wrapped.readShortLE();
+ }
+
+ @Override
+ public int readUnsignedShort() {
+ return wrapped.readUnsignedShort();
+ }
+
+ @Override
+ public int readUnsignedShortLE() {
+ return wrapped.readUnsignedShortLE();
+ }
+
+ @Override
+ public int readMedium() {
+ return wrapped.readMedium();
+ }
+
+ @Override
+ public int readMediumLE() {
+ return wrapped.readMediumLE();
+ }
+
+ @Override
+ public int readUnsignedMedium() {
+ return wrapped.readUnsignedMedium();
+ }
+
+ @Override
+ public int readUnsignedMediumLE() {
+ return wrapped.readUnsignedMediumLE();
+ }
+
+ @Override
+ public int readInt() {
+ return wrapped.readInt();
+ }
+
+ @Override
+ public int readIntLE() {
+ return wrapped.readIntLE();
+ }
+
+ @Override
+ public long readUnsignedInt() {
+ return wrapped.readUnsignedInt();
+ }
+
+ @Override
+ public long readUnsignedIntLE() {
+ return wrapped.readUnsignedIntLE();
+ }
+
+ @Override
+ public long readLong() {
+ return wrapped.readLong();
+ }
+
+ @Override
+ public long readLongLE() {
+ return wrapped.readLongLE();
+ }
+
+ @Override
+ public char readChar() {
+ return wrapped.readChar();
+ }
+
+ @Override
+ public float readFloat() {
+ return wrapped.readFloat();
+ }
+
+ @Override
+ public double readDouble() {
+ return wrapped.readDouble();
+ }
+
+ @Override
+ public ByteBuf readBytes(int length) {
+ return wrapped.readBytes(length);
+ }
+
+ @Override
+ public ByteBuf slice() {
+ return wrapped.slice();
+ }
+
+ @Override
+ public ByteBuf slice(int index, int length) {
+ return wrapped.slice(index, length);
+ }
+
+ @Override
+ public ByteBuffer nioBuffer() {
+ return wrapped.nioBuffer();
+ }
+
+ @Override
+ public String toString(Charset charset) {
+ return wrapped.toString(charset);
+ }
+
+ @Override
+ public String toString(int index, int length, Charset charset) {
+ return wrapped.toString(index, length, charset);
+ }
+
+ @Override
+ public int indexOf(int fromIndex, int toIndex, byte value) {
+ return wrapped.indexOf(fromIndex, toIndex, value);
+ }
+
+ @Override
+ public int bytesBefore(byte value) {
+ return wrapped.bytesBefore(value);
+ }
+
+ @Override
+ public int bytesBefore(int length, byte value) {
+ return wrapped.bytesBefore(length, value);
+ }
+
+ @Override
+ public int bytesBefore(int index, int length, byte value) {
+ return wrapped.bytesBefore(index, length, value);
+ }
+
+ @Override
+ public int forEachByte(ByteProcessor processor) {
+ return wrapped.forEachByte(processor);
+ }
+
+ @Override
+ public int forEachByte(int index, int length, ByteProcessor processor) {
+ return wrapped.forEachByte(index, length, processor);
+ }
+
+ @Override
+ public int forEachByteDesc(ByteProcessor processor) {
+ return wrapped.forEachByteDesc(processor);
+ }
+
+ @Override
+ public int forEachByteDesc(int index, int length, ByteProcessor processor) {
+ return wrapped.forEachByteDesc(index, length, processor);
+ }
+
+ @Override
+ public final int hashCode() {
+ return wrapped.hashCode();
+ }
+
+ @Override
+ public final boolean equals(Object o) {
+ return wrapped.equals(o);
+ }
+
+ @Override
+ public final int compareTo(ByteBuf that) {
+ return wrapped.compareTo(that);
+ }
+
+ @Override
+ public final int refCnt() {
+ return wrapped.refCnt();
+ }
+
+ @Override
+ public ByteBuf duplicate() {
+ return wrapped.duplicate();
+ }
+
+ @Override
+ public ByteBuf readSlice(int length) {
+ return wrapped.readSlice(length);
+ }
+
+ @Override
+ public int readBytes(GatheringByteChannel out, int length) throws IOException {
+ return wrapped.readBytes(out, length);
+ }
+
+ @Override
+ public ByteBuf writeShortLE(int value) {
+ return wrapped.writeShortLE(value);
+ }
+
+ @Override
+ public ByteBuf writeMediumLE(int value) {
+ return wrapped.writeMediumLE(value);
+ }
+
+ @Override
+ public ByteBuf writeIntLE(int value) {
+ return wrapped.writeIntLE(value);
+ }
+
+ @Override
+ public ByteBuf writeLongLE(long value) {
+ return wrapped.writeLongLE(value);
+ }
+
+ @Override
+ public int writeBytes(InputStream in, int length) throws IOException {
+ return wrapped.writeBytes(in, length);
+ }
+
+ @Override
+ public int writeBytes(ScatteringByteChannel in, int length) throws IOException {
+ return wrapped.writeBytes(in, length);
+ }
+
+ @Override
+ public ByteBuf copy() {
+ return wrapped.copy();
+ }
+
+ @Override
+ public CompositeByteBuf addComponent(ByteBuf buffer) {
+ wrapped.addComponent(buffer);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf addComponents(ByteBuf... buffers) {
+ wrapped.addComponents(buffers);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf addComponents(Iterable<ByteBuf> buffers) {
+ wrapped.addComponents(buffers);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf addComponent(int cIndex, ByteBuf buffer) {
+ wrapped.addComponent(cIndex, buffer);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf addComponents(int cIndex, ByteBuf... buffers) {
+ wrapped.addComponents(cIndex, buffers);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf addComponents(int cIndex, Iterable<ByteBuf> buffers) {
+ wrapped.addComponents(cIndex, buffers);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf removeComponent(int cIndex) {
+ wrapped.removeComponent(cIndex);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf removeComponents(int cIndex, int numComponents) {
+ wrapped.removeComponents(cIndex, numComponents);
+ return this;
+ }
+
+ @Override
+ public Iterator<ByteBuf> iterator() {
+ return wrapped.iterator();
+ }
+
+ @Override
+ public List<ByteBuf> decompose(int offset, int length) {
+ return wrapped.decompose(offset, length);
+ }
+
+ @Override
+ public final boolean isDirect() {
+ return wrapped.isDirect();
+ }
+
+ @Override
+ public final boolean hasArray() {
+ return wrapped.hasArray();
+ }
+
+ @Override
+ public final byte[] array() {
+ return wrapped.array();
+ }
+
+ @Override
+ public final int arrayOffset() {
+ return wrapped.arrayOffset();
+ }
+
+ @Override
+ public final boolean hasMemoryAddress() {
+ return wrapped.hasMemoryAddress();
+ }
+
+ @Override
+ public final long memoryAddress() {
+ return wrapped.memoryAddress();
+ }
+
+ @Override
+ public final int capacity() {
+ return wrapped.capacity();
+ }
+
+ @Override
+ public CompositeByteBuf capacity(int newCapacity) {
+ wrapped.capacity(newCapacity);
+ return this;
+ }
+
+ @Override
+ public final ByteBufAllocator alloc() {
+ return wrapped.alloc();
+ }
+
+ @Override
+ public final ByteOrder order() {
+ return wrapped.order();
+ }
+
+ @Override
+ public final int numComponents() {
+ return wrapped.numComponents();
+ }
+
+ @Override
+ public final int maxNumComponents() {
+ return wrapped.maxNumComponents();
+ }
+
+ @Override
+ public final int toComponentIndex(int offset) {
+ return wrapped.toComponentIndex(offset);
+ }
+
+ @Override
+ public final int toByteIndex(int cIndex) {
+ return wrapped.toByteIndex(cIndex);
+ }
+
+ @Override
+ public byte getByte(int index) {
+ return wrapped.getByte(index);
+ }
+
+ @Override
+ protected final byte _getByte(int index) {
+ return wrapped._getByte(index);
+ }
+
+ @Override
+ protected final short _getShort(int index) {
+ return wrapped._getShort(index);
+ }
+
+ @Override
+ protected final short _getShortLE(int index) {
+ return wrapped._getShortLE(index);
+ }
+
+ @Override
+ protected final int _getUnsignedMedium(int index) {
+ return wrapped._getUnsignedMedium(index);
+ }
+
+ @Override
+ protected final int _getUnsignedMediumLE(int index) {
+ return wrapped._getUnsignedMediumLE(index);
+ }
+
+ @Override
+ protected final int _getInt(int index) {
+ return wrapped._getInt(index);
+ }
+
+ @Override
+ protected final int _getIntLE(int index) {
+ return wrapped._getIntLE(index);
+ }
+
+ @Override
+ protected final long _getLong(int index) {
+ return wrapped._getLong(index);
+ }
+
+ @Override
+ protected final long _getLongLE(int index) {
+ return wrapped._getLongLE(index);
+ }
+
+ @Override
+ public CompositeByteBuf getBytes(int index, byte[] dst, int dstIndex, int length) {
+ wrapped.getBytes(index, dst, dstIndex, length);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf getBytes(int index, ByteBuffer dst) {
+ wrapped.getBytes(index, dst);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf getBytes(int index, ByteBuf dst, int dstIndex, int length) {
+ wrapped.getBytes(index, dst, dstIndex, length);
+ return this;
+ }
+
+ @Override
+ public int getBytes(int index, GatheringByteChannel out, int length) throws IOException {
+ return wrapped.getBytes(index, out, length);
+ }
+
+ @Override
+ public CompositeByteBuf getBytes(int index, OutputStream out, int length) throws IOException {
+ wrapped.getBytes(index, out, length);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf setByte(int index, int value) {
+ wrapped.setByte(index, value);
+ return this;
+ }
+
+ @Override
+ protected final void _setByte(int index, int value) {
+ wrapped._setByte(index, value);
+ }
+
+ @Override
+ public CompositeByteBuf setShort(int index, int value) {
+ wrapped.setShort(index, value);
+ return this;
+ }
+
+ @Override
+ protected final void _setShort(int index, int value) {
+ wrapped._setShort(index, value);
+ }
+
+ @Override
+ protected final void _setShortLE(int index, int value) {
+ wrapped._setShortLE(index, value);
+ }
+
+ @Override
+ public CompositeByteBuf setMedium(int index, int value) {
+ wrapped.setMedium(index, value);
+ return this;
+ }
+
+ @Override
+ protected final void _setMedium(int index, int value) {
+ wrapped._setMedium(index, value);
+ }
+
+ @Override
+ protected final void _setMediumLE(int index, int value) {
+ wrapped._setMediumLE(index, value);
+ }
+
+ @Override
+ public CompositeByteBuf setInt(int index, int value) {
+ wrapped.setInt(index, value);
+ return this;
+ }
+
+ @Override
+ protected final void _setInt(int index, int value) {
+ wrapped._setInt(index, value);
+ }
+
+ @Override
+ protected final void _setIntLE(int index, int value) {
+ wrapped._setIntLE(index, value);
+ }
+
+ @Override
+ public CompositeByteBuf setLong(int index, long value) {
+ wrapped.setLong(index, value);
+ return this;
+ }
+
+ @Override
+ protected final void _setLong(int index, long value) {
+ wrapped._setLong(index, value);
+ }
+
+ @Override
+ protected final void _setLongLE(int index, long value) {
+ wrapped._setLongLE(index, value);
+ }
+
+ @Override
+ public CompositeByteBuf setBytes(int index, byte[] src, int srcIndex, int length) {
+ wrapped.setBytes(index, src, srcIndex, length);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf setBytes(int index, ByteBuffer src) {
+ wrapped.setBytes(index, src);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf setBytes(int index, ByteBuf src, int srcIndex, int length) {
+ wrapped.setBytes(index, src, srcIndex, length);
+ return this;
+ }
+
+ @Override
+ public int setBytes(int index, InputStream in, int length) throws IOException {
+ return wrapped.setBytes(index, in, length);
+ }
+
+ @Override
+ public int setBytes(int index, ScatteringByteChannel in, int length) throws IOException {
+ return wrapped.setBytes(index, in, length);
+ }
+
+ @Override
+ public ByteBuf copy(int index, int length) {
+ return wrapped.copy(index, length);
+ }
+
+ @Override
+ public final ByteBuf component(int cIndex) {
+ return wrapped.component(cIndex);
+ }
+
+ @Override
+ public final ByteBuf componentAtOffset(int offset) {
+ return wrapped.componentAtOffset(offset);
+ }
+
+ @Override
+ public final ByteBuf internalComponent(int cIndex) {
+ return wrapped.internalComponent(cIndex);
+ }
+
+ @Override
+ public final ByteBuf internalComponentAtOffset(int offset) {
+ return wrapped.internalComponentAtOffset(offset);
+ }
+
+ @Override
+ public int nioBufferCount() {
+ return wrapped.nioBufferCount();
+ }
+
+ @Override
+ public ByteBuffer internalNioBuffer(int index, int length) {
+ return wrapped.internalNioBuffer(index, length);
+ }
+
+ @Override
+ public ByteBuffer nioBuffer(int index, int length) {
+ return wrapped.nioBuffer(index, length);
+ }
+
+ @Override
+ public ByteBuffer[] nioBuffers(int index, int length) {
+ return wrapped.nioBuffers(index, length);
+ }
+
+ @Override
+ public CompositeByteBuf consolidate() {
+ wrapped.consolidate();
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf consolidate(int cIndex, int numComponents) {
+ wrapped.consolidate(cIndex, numComponents);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf discardReadComponents() {
+ wrapped.discardReadComponents();
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf discardReadBytes() {
+ wrapped.discardReadBytes();
+ return this;
+ }
+
+ @Override
+ public final String toString() {
+ return wrapped.toString();
+ }
+
+ @Override
+ public final CompositeByteBuf readerIndex(int readerIndex) {
+ wrapped.readerIndex(readerIndex);
+ return this;
+ }
+
+ @Override
+ public final CompositeByteBuf writerIndex(int writerIndex) {
+ wrapped.writerIndex(writerIndex);
+ return this;
+ }
+
+ @Override
+ public final CompositeByteBuf setIndex(int readerIndex, int writerIndex) {
+ wrapped.setIndex(readerIndex, writerIndex);
+ return this;
+ }
+
+ @Override
+ public final CompositeByteBuf clear() {
+ wrapped.clear();
+ return this;
+ }
+
+ @Override
+ public final CompositeByteBuf markReaderIndex() {
+ wrapped.markReaderIndex();
+ return this;
+ }
+
+ @Override
+ public final CompositeByteBuf resetReaderIndex() {
+ wrapped.resetReaderIndex();
+ return this;
+ }
+
+ @Override
+ public final CompositeByteBuf markWriterIndex() {
+ wrapped.markWriterIndex();
+ return this;
+ }
+
+ @Override
+ public final CompositeByteBuf resetWriterIndex() {
+ wrapped.resetWriterIndex();
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf ensureWritable(int minWritableBytes) {
+ wrapped.ensureWritable(minWritableBytes);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf getBytes(int index, ByteBuf dst) {
+ wrapped.getBytes(index, dst);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf getBytes(int index, ByteBuf dst, int length) {
+ wrapped.getBytes(index, dst, length);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf getBytes(int index, byte[] dst) {
+ wrapped.getBytes(index, dst);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf setBoolean(int index, boolean value) {
+ wrapped.setBoolean(index, value);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf setChar(int index, int value) {
+ wrapped.setChar(index, value);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf setFloat(int index, float value) {
+ wrapped.setFloat(index, value);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf setDouble(int index, double value) {
+ wrapped.setDouble(index, value);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf setBytes(int index, ByteBuf src) {
+ wrapped.setBytes(index, src);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf setBytes(int index, ByteBuf src, int length) {
+ wrapped.setBytes(index, src, length);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf setBytes(int index, byte[] src) {
+ wrapped.setBytes(index, src);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf setZero(int index, int length) {
+ wrapped.setZero(index, length);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf readBytes(ByteBuf dst) {
+ wrapped.readBytes(dst);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf readBytes(ByteBuf dst, int length) {
+ wrapped.readBytes(dst, length);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf readBytes(ByteBuf dst, int dstIndex, int length) {
+ wrapped.readBytes(dst, dstIndex, length);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf readBytes(byte[] dst) {
+ wrapped.readBytes(dst);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf readBytes(byte[] dst, int dstIndex, int length) {
+ wrapped.readBytes(dst, dstIndex, length);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf readBytes(ByteBuffer dst) {
+ wrapped.readBytes(dst);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf readBytes(OutputStream out, int length) throws IOException {
+ wrapped.readBytes(out, length);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf skipBytes(int length) {
+ wrapped.skipBytes(length);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf writeBoolean(boolean value) {
+ wrapped.writeBoolean(value);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf writeByte(int value) {
+ wrapped.writeByte(value);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf writeShort(int value) {
+ wrapped.writeShort(value);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf writeMedium(int value) {
+ wrapped.writeMedium(value);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf writeInt(int value) {
+ wrapped.writeInt(value);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf writeLong(long value) {
+ wrapped.writeLong(value);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf writeChar(int value) {
+ wrapped.writeChar(value);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf writeFloat(float value) {
+ wrapped.writeFloat(value);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf writeDouble(double value) {
+ wrapped.writeDouble(value);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf writeBytes(ByteBuf src) {
+ wrapped.writeBytes(src);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf writeBytes(ByteBuf src, int length) {
+ wrapped.writeBytes(src, length);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf writeBytes(ByteBuf src, int srcIndex, int length) {
+ wrapped.writeBytes(src, srcIndex, length);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf writeBytes(byte[] src) {
+ wrapped.writeBytes(src);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf writeBytes(byte[] src, int srcIndex, int length) {
+ wrapped.writeBytes(src, srcIndex, length);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf writeBytes(ByteBuffer src) {
+ wrapped.writeBytes(src);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf writeZero(int length) {
+ wrapped.writeZero(length);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf retain(int increment) {
+ wrapped.retain(increment);
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf retain() {
+ wrapped.retain();
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf touch() {
+ wrapped.touch();
+ return this;
+ }
+
+ @Override
+ public CompositeByteBuf touch(Object hint) {
+ wrapped.touch(hint);
+ return this;
+ }
+
+ @Override
+ public ByteBuffer[] nioBuffers() {
+ return wrapped.nioBuffers();
+ }
+
+ @Override
+ public CompositeByteBuf discardSomeReadBytes() {
+ wrapped.discardSomeReadBytes();
+ return this;
+ }
+
+ @Override
+ public final void deallocate() {
+ wrapped.deallocate();
+ }
+
+ @Override
+ public final ByteBuf unwrap() {
+ return wrapped;
+ }
+}
| null | train | train | 2016-01-14T07:19:23 | 2015-07-22T01:39:45Z | Scottmitch | val |
netty/netty/4699_4731 | netty/netty | netty/netty/4699 | netty/netty/4731 | [
"timestamp(timedelta=66518.0, similarity=0.8475097645824541)"
] | ae4e9ddc2d66f23c3cad6f8adeed72392aae67da | f8db04adf4104b0c81993903c3db51071d20c7ec | [
"@lw346 could you please comment ? \n",
    "I guess he meant that any number `&`-ed with `0x3F` (63) must be less than 64.\n",
"Fixed in #4731\n"
] | [
"Removed `private` to call it in test\n",
"Removed `private` to call it in test\n",
"`decoded` and `Unpooled.wrappedBuffer(...)` need to be released.\n",
"+1\n",
"Just call ByteBuf.release() ?\n",
"just call `in.release()`?\n",
"@normanmaurer - +1 :wink: \n"
] | 2016-01-19T23:23:21Z | [] | Snappy.java's decodeLiteral method implemention | ``` java
private static int decodeLiteral(byte tag, ByteBuf in, ByteBuf out) {
in.markReaderIndex();
int length;
switch(tag >> 2 & 0x3F) {
case 60:
if (!in.isReadable()) {
return NOT_ENOUGH_INPUT;
}
length = in.readUnsignedByte();
break;
case 61:
if (in.readableBytes() < 2) {
return NOT_ENOUGH_INPUT;
}
length = ByteBufUtil.swapShort(in.readShort());
break;
case 62:
if (in.readableBytes() < 3) {
return NOT_ENOUGH_INPUT;
}
length = ByteBufUtil.swapMedium(in.readUnsignedMedium());
break;
case 64:
if (in.readableBytes() < 4) {
return NOT_ENOUGH_INPUT;
}
length = ByteBufUtil.swapInt(in.readInt());
break;
default:
length = tag >> 2 & 0x3F;
}
length += 1;
if (in.readableBytes() < length) {
in.resetReaderIndex();
return NOT_ENOUGH_INPUT;
}
out.writeBytes(in, length);
return length;
}
```
I was wondering why there's a 'case 64' here. 'tag >> 2 & 0x3F' can never be equal to 64, can it?
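To make the arithmetic concrete, here is a small standalone sketch (not part of Netty — class and method names are illustrative) showing that `tag >> 2 & 0x3F` always lands in the range 0–63 for every possible byte, so a `case 64` branch is unreachable; the 4-byte-length form (tag byte `0xFC`) actually selects 63:

``` java
public class SnappyTagDemo {
    // Mirrors the switch selector used in Snappy's decodeLiteral(...)
    static int literalCase(byte tag) {
        return tag >> 2 & 0x3F;
    }

    public static void main(String[] args) {
        int max = Integer.MIN_VALUE;
        for (int b = 0; b < 256; b++) {
            int v = literalCase((byte) b);
            if (v > max) {
                max = v;
            }
        }
        // The selector is masked with 0x3F (63), so 64 can never appear.
        System.out.println("max selector value = " + max);                   // 63
        // 0xFC has all six upper bits set: the 4-byte length form.
        System.out.println("case for 0xFC = " + literalCase((byte) 0xFC));   // 63
    }
}
```

This matches the eventual fix in PR 4731, which changes the dead `case 64` to `case 63`.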
| [
"codec/src/main/java/io/netty/handler/codec/compression/Snappy.java"
] | [
"codec/src/main/java/io/netty/handler/codec/compression/Snappy.java"
] | [
"codec/src/test/java/io/netty/handler/codec/compression/SnappyTest.java"
] | diff --git a/codec/src/main/java/io/netty/handler/codec/compression/Snappy.java b/codec/src/main/java/io/netty/handler/codec/compression/Snappy.java
index 39ec873dc8d..df4e88a0821 100644
--- a/codec/src/main/java/io/netty/handler/codec/compression/Snappy.java
+++ b/codec/src/main/java/io/netty/handler/codec/compression/Snappy.java
@@ -228,7 +228,7 @@ private static int bitsToEncode(int value) {
* @param out The output buffer to copy to
* @param length The length of the literal to copy
*/
- private static void encodeLiteral(ByteBuf in, ByteBuf out, int length) {
+ static void encodeLiteral(ByteBuf in, ByteBuf out, int length) {
if (length < 61) {
out.writeByte(length - 1 << 2);
} else {
@@ -395,7 +395,7 @@ private static int readPreamble(ByteBuf in) {
* @param out The output buffer to write the literal to
* @return The number of bytes appended to the output buffer, or -1 to indicate "try again later"
*/
- private static int decodeLiteral(byte tag, ByteBuf in, ByteBuf out) {
+ static int decodeLiteral(byte tag, ByteBuf in, ByteBuf out) {
in.markReaderIndex();
int length;
switch(tag >> 2 & 0x3F) {
@@ -417,7 +417,7 @@ private static int decodeLiteral(byte tag, ByteBuf in, ByteBuf out) {
}
length = in.readUnsignedMediumLE();
break;
- case 64:
+ case 63:
if (in.readableBytes() < 4) {
return NOT_ENOUGH_INPUT;
}
| diff --git a/codec/src/test/java/io/netty/handler/codec/compression/SnappyTest.java b/codec/src/test/java/io/netty/handler/codec/compression/SnappyTest.java
index 0ac3d6574c9..84b5e068362 100644
--- a/codec/src/test/java/io/netty/handler/codec/compression/SnappyTest.java
+++ b/codec/src/test/java/io/netty/handler/codec/compression/SnappyTest.java
@@ -195,4 +195,32 @@ public void testValidateChecksumFails() {
validateChecksum(maskChecksum(0xd6cb8b55), input);
}
+
+ @Test
+ public void testEncodeLiteralAndDecodeLiteral() {
+ int[] lengths = new int[] {
+ 0x11, // default
+ 0x100, // case 60
+ 0x1000, // case 61
+ 0x100000, // case 62
+ 0x1000001 // case 63
+ };
+ for (int len : lengths) {
+ ByteBuf in = Unpooled.wrappedBuffer(new byte[len]);
+ ByteBuf encoded = Unpooled.buffer(10);
+ ByteBuf decoded = Unpooled.buffer(10);
+ ByteBuf expected = Unpooled.wrappedBuffer(new byte[len]);
+ try {
+ Snappy.encodeLiteral(in, encoded, len);
+ byte tag = encoded.readByte();
+ Snappy.decodeLiteral(tag, encoded, decoded);
+ assertEquals("Encoded or decoded literal was incorrect", expected, decoded);
+ } finally {
+ in.release();
+ encoded.release();
+ decoded.release();
+ expected.release();
+ }
+ }
+ }
}
| val | train | 2016-01-20T19:56:24 | 2016-01-13T08:16:33Z | johnsonma | val |
netty/netty/4738_4742 | netty/netty | netty/netty/4738 | netty/netty/4742 | [
"timestamp(timedelta=12.0, similarity=0.9466880115333267)"
] | d1ef33b8f419ac8ae591e864df2cdd24b36ef765 | 7e1387333eaa45ed15c0ba5e01ced0c4d6de18f1 | [
"@normanmaurer @trustin - Any objections?\n",
"@ejona86 works for me. Please open a pr\n"
] | [] | 2016-01-21T21:43:10Z | [
"feature"
] | Add ChannelHandlerContext.invoker() to 4.1 | The now-defunct 5.0 had a [ChannelHandlerContext.invoker()](https://github.com/netty/netty/blob/master_deprecated/transport/src/main/java/io/netty/channel/ChannelHandlerContext.java#L148) method. I found it to be useful, and the method is already implemented; it just needs to be exposed on the interface. It's my understanding of Netty API stability policy that new methods can be added to interfaces between minor versions (thus, 4.1, since it isn't final _yet_). If 4.1 is too close to release to add the API, I understand.
I will grant that needing to use the invoker is rare and quite advanced, but in some ways it isn't that strange. When writing Http2MultiplexCodec I used the invoker because: 1) I needed to call a particular handler 2) from possibly a different thread 3) while ideally not having to deal with exceptions. Parts 2 and 3 are well-served by ChannelHandlerContext, but 1 isn't because it will call the _next_ or _previous_ handler and using the previous or next context (respectively) to compensate is unstable as the pipeline is modified.
It is possible to write your own invoker on top of `ctx.executor()`, but that seems like needless boilerplate and cause for trouble if someone has actually specified their own ChannelHandlerInvoker for the channel.
Some additional context: https://github.com/netty/netty/pull/4503#discussion_r45705972
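As a rough illustration of the boilerplate being described — not Netty's actual API, all names below are hypothetical — here is a minimal standalone sketch of hand-rolling an "invoker" on top of a plain `Executor`, with the exception routing that a real `ChannelHandlerInvoker` would otherwise handle:

``` java
import java.util.concurrent.Executor;
import java.util.function.Consumer;

public class HandRolledInvoker {
    private final Executor executor;           // stands in for ctx.executor()
    private final Consumer<Throwable> onError; // stands in for firing exceptionCaught

    public HandRolledInvoker(Executor executor, Consumer<Throwable> onError) {
        this.executor = executor;
        this.onError = onError;
    }

    /** Run the event on the right thread, routing any failure to the error path. */
    public void invoke(Runnable event) {
        executor.execute(() -> {
            try {
                event.run();
            } catch (Throwable t) {
                onError.accept(t);
            }
        });
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        // A direct executor keeps this demo single-threaded and deterministic.
        HandRolledInvoker invoker = new HandRolledInvoker(Runnable::run,
                t -> log.append("caught:").append(t.getMessage()));
        invoker.invoke(() -> log.append("read;"));
        invoker.invoke(() -> { throw new IllegalStateException("boom"); });
        System.out.println(log); // read;caught:boom
    }
}
```

Exposing `ctx.invoker()` removes the need for a wrapper like this, and also honors any custom `ChannelHandlerInvoker` a user has configured for the channel.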
| [
"microbench/src/main/java/io/netty/microbench/channel/EmbeddedChannelWriteReleaseHandlerContext.java",
"transport/src/main/java/io/netty/channel/AbstractChannelHandlerContext.java",
"transport/src/main/java/io/netty/channel/ChannelHandlerContext.java",
"transport/src/main/java/io/netty/channel/CombinedChannel... | [
"microbench/src/main/java/io/netty/microbench/channel/EmbeddedChannelWriteReleaseHandlerContext.java",
"transport/src/main/java/io/netty/channel/AbstractChannelHandlerContext.java",
"transport/src/main/java/io/netty/channel/ChannelHandlerContext.java",
"transport/src/main/java/io/netty/channel/CombinedChannel... | [] | diff --git a/microbench/src/main/java/io/netty/microbench/channel/EmbeddedChannelWriteReleaseHandlerContext.java b/microbench/src/main/java/io/netty/microbench/channel/EmbeddedChannelWriteReleaseHandlerContext.java
index b8984ea488d..9a74f98d97d 100644
--- a/microbench/src/main/java/io/netty/microbench/channel/EmbeddedChannelWriteReleaseHandlerContext.java
+++ b/microbench/src/main/java/io/netty/microbench/channel/EmbeddedChannelWriteReleaseHandlerContext.java
@@ -20,9 +20,11 @@
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelHandler;
import io.netty.channel.ChannelHandlerContext;
+import io.netty.channel.ChannelHandlerInvoker;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.ChannelProgressivePromise;
import io.netty.channel.ChannelPromise;
+import io.netty.channel.EventLoop;
import io.netty.channel.embedded.EmbeddedChannel;
import io.netty.util.Attribute;
import io.netty.util.AttributeKey;
@@ -33,7 +35,7 @@
public abstract class EmbeddedChannelWriteReleaseHandlerContext implements ChannelHandlerContext {
private static final String HANDLER_NAME = "microbench-delegator-ctx";
- private final EventExecutor executor;
+ private final EventLoop eventLoop;
private final Channel channel;
private final ByteBufAllocator alloc;
private final ChannelHandler handler;
@@ -48,7 +50,7 @@ public EmbeddedChannelWriteReleaseHandlerContext(ByteBufAllocator alloc, Channel
this.alloc = checkNotNull(alloc, "alloc");
this.channel = checkNotNull(channel, "channel");
this.handler = checkNotNull(handler, "handler");
- this.executor = checkNotNull(channel.eventLoop(), "executor");
+ this.eventLoop = checkNotNull(channel.eventLoop(), "eventLoop");
}
protected abstract void handleException(Throwable t);
@@ -70,7 +72,12 @@ public Channel channel() {
@Override
public EventExecutor executor() {
- return executor;
+ return eventLoop;
+ }
+
+ @Override
+ public ChannelHandlerInvoker invoker() {
+ return eventLoop.asInvoker();
}
@Override
diff --git a/transport/src/main/java/io/netty/channel/AbstractChannelHandlerContext.java b/transport/src/main/java/io/netty/channel/AbstractChannelHandlerContext.java
index f9e7b80286d..f3b4b12c14e 100644
--- a/transport/src/main/java/io/netty/channel/AbstractChannelHandlerContext.java
+++ b/transport/src/main/java/io/netty/channel/AbstractChannelHandlerContext.java
@@ -82,6 +82,7 @@ public EventExecutor executor() {
return invoker().executor();
}
+ @Override
public ChannelHandlerInvoker invoker() {
if (invoker == null) {
return channel().unsafe().invoker();
diff --git a/transport/src/main/java/io/netty/channel/ChannelHandlerContext.java b/transport/src/main/java/io/netty/channel/ChannelHandlerContext.java
index 7b8e7c4e339..0fdb8476ad7 100644
--- a/transport/src/main/java/io/netty/channel/ChannelHandlerContext.java
+++ b/transport/src/main/java/io/netty/channel/ChannelHandlerContext.java
@@ -136,6 +136,14 @@ public interface ChannelHandlerContext extends AttributeMap {
*/
EventExecutor executor();
+ /**
+ * Returns the {@link ChannelHandlerInvoker} which is used to trigger an event for the associated
+ * {@link ChannelHandler}. Note that the methods in {@link ChannelHandlerInvoker} are not intended to be called
+ * by a user. Use this method only to obtain the reference to the {@link ChannelHandlerInvoker}
+ * (and not calling its methods) unless you know what you are doing.
+ */
+ ChannelHandlerInvoker invoker();
+
/**
* The unique name of the {@link ChannelHandlerContext}.The name was used when then {@link ChannelHandler}
* was added to the {@link ChannelPipeline}. This name can also be used to access the registered
diff --git a/transport/src/main/java/io/netty/channel/CombinedChannelDuplexHandler.java b/transport/src/main/java/io/netty/channel/CombinedChannelDuplexHandler.java
index 4abc4884687..bf6d266b1d2 100644
--- a/transport/src/main/java/io/netty/channel/CombinedChannelDuplexHandler.java
+++ b/transport/src/main/java/io/netty/channel/CombinedChannelDuplexHandler.java
@@ -372,6 +372,11 @@ public EventExecutor executor() {
return ctx.executor();
}
+ @Override
+ public ChannelHandlerInvoker invoker() {
+ return ctx.invoker();
+ }
+
@Override
public String name() {
return ctx.name();
| null | test | train | 2016-01-21T09:59:10 | 2016-01-21T00:47:22Z | ejona86 | val |
netty/netty/4752_4753 | netty/netty | netty/netty/4752 | netty/netty/4753 | [
"timestamp(timedelta=46.0, similarity=0.8595048383618223)"
] | 1c417e5f8264271f785c18972a55d9e216d1b20b | 5c53230996b200bf2494b3f9e8f0b2f4a46c3ca4 | [] | [] | 2016-01-26T01:57:30Z | [] | Javadoc Fail: ChannelOutboundHandlerAdapter - deregister (4.1) | The description of the method `deregister` in the class ChannelOutboundHandlerAdapter says:
> Calls **ChannelHandlerContext.close(ChannelPromise)** to forward to the next ChannelOutboundHandler in the ChannelPipeline. Sub-classes may override this method to change behavior.
Should it not be the following?
> Calls **ChannelHandlerContext.deregister(ChannelPromise)** to forward to the next ChannelOutboundHandler in the ChannelPipeline. Sub-classes may override this method to change behavior.
Greetings from Germany,
mike
| [
"transport/src/main/java/io/netty/channel/ChannelOutboundHandlerAdapter.java"
] | [
"transport/src/main/java/io/netty/channel/ChannelOutboundHandlerAdapter.java"
] | [] | diff --git a/transport/src/main/java/io/netty/channel/ChannelOutboundHandlerAdapter.java b/transport/src/main/java/io/netty/channel/ChannelOutboundHandlerAdapter.java
index c0a68971237..fa968925de9 100644
--- a/transport/src/main/java/io/netty/channel/ChannelOutboundHandlerAdapter.java
+++ b/transport/src/main/java/io/netty/channel/ChannelOutboundHandlerAdapter.java
@@ -18,7 +18,7 @@
import java.net.SocketAddress;
/**
- * Skelton implementation of a {@link ChannelOutboundHandler}. This implementation just forwards each method call via
+ * Skeleton implementation of a {@link ChannelOutboundHandler}. This implementation just forwards each method call via
* the {@link ChannelHandlerContext}.
*/
public class ChannelOutboundHandlerAdapter extends ChannelHandlerAdapter implements ChannelOutboundHandler {
@@ -72,7 +72,7 @@ public void close(ChannelHandlerContext ctx, ChannelPromise promise)
}
/**
- * Calls {@link ChannelHandlerContext#close(ChannelPromise)} to forward
+ * Calls {@link ChannelHandlerContext#deregister(ChannelPromise)} to forward
* to the next {@link ChannelOutboundHandler} in the {@link ChannelPipeline}.
*
* Sub-classes may override this method to change behavior.
@@ -94,7 +94,7 @@ public void read(ChannelHandlerContext ctx) throws Exception {
}
/**
- * Calls {@link ChannelHandlerContext#write(Object)} to forward
+ * Calls {@link ChannelHandlerContext#write(Object, ChannelPromise)} to forward
* to the next {@link ChannelOutboundHandler} in the {@link ChannelPipeline}.
*
* Sub-classes may override this method to change behavior.
| null | train | train | 2016-01-21T15:35:55 | 2016-01-25T22:59:24Z | mickare | val |
netty/netty/4760_4761 | netty/netty | netty/netty/4760 | netty/netty/4761 | [
"timestamp(timedelta=87.0, similarity=0.9265861370024806)"
] | c3e5604f59e0c6354dd6458fb62e5b83edadbda2 | 260c2cd3f2c1678e6a8b2b23556a2fb284ab3347 | [
"/cc @normanmaurer @nmittler @trustin \n",
"@trustin ping again\n"
] | [
"unused import ?\n",
"ReferenceCountUtil.safeRelease(...) ?\n",
"see above\n",
"I was just anticipating your next comments :wink: \n",
"I originally had this, but figured we could avoid the instanceof check and casting by the simple `try/catch` block.\n"
] | 2016-01-27T02:23:30Z | [
"defect"
] | CompositeByteBuf.addComponent ownership clarification | If all goes well `CompositeByteBuf.addComponent` will take "ownership" (will call `release()`) of the parameter. However, there are cases where this is not the case. For example, if the `CompositeByteBuf` has been released, `addComponent` may throw an `IllegalReferenceCountException` and not release the parameter. At the very least the interface should be clarified so that the caller can understand expectations. However, I would like to see if there are any objections to `CompositeByteBuf.addComponent` always assuming "ownership" of the parameter. This would be analogous to a `Channel.write` always assuming responsibility for the parameter even if the underlying transport cannot actually support the write operation.
| [
"buffer/src/main/java/io/netty/buffer/CompositeByteBuf.java"
] | [
"buffer/src/main/java/io/netty/buffer/CompositeByteBuf.java"
] | [] | diff --git a/buffer/src/main/java/io/netty/buffer/CompositeByteBuf.java b/buffer/src/main/java/io/netty/buffer/CompositeByteBuf.java
index 262b229c327..be2ce7917f5 100644
--- a/buffer/src/main/java/io/netty/buffer/CompositeByteBuf.java
+++ b/buffer/src/main/java/io/netty/buffer/CompositeByteBuf.java
@@ -33,6 +33,8 @@
import java.util.ListIterator;
import java.util.NoSuchElementException;
+import static io.netty.util.internal.ObjectUtil.checkNotNull;
+
/**
* A virtual buffer which shows multiple buffers as a single merged buffer. It is recommended to use
* {@link ByteBufAllocator#compositeBuffer()} or {@link Unpooled#wrappedBuffer(ByteBuf...)} instead of calling the
@@ -117,13 +119,16 @@ private static List<Component> newList(int maxNumComponents) {
/**
* Add the given {@link ByteBuf}.
- *
+ * <p>
* Be aware that this method does not increase the {@code writerIndex} of the {@link CompositeByteBuf}.
* If you need to have it increased you need to handle it by your own.
- *
- * @param buffer the {@link ByteBuf} to add
+ * <p>
+ * {@link ByteBuf#release()} ownership of {@code buffer} is transfered to this {@link CompositeByteBuf}.
+ * @param buffer the {@link ByteBuf} to add. {@link ByteBuf#release()} ownership is transfered to this
+ * {@link CompositeByteBuf}.
*/
public CompositeByteBuf addComponent(ByteBuf buffer) {
+ checkNotNull(buffer, "buffer");
addComponent0(components.size(), buffer);
consolidateIfNeeded();
return this;
@@ -131,11 +136,14 @@ public CompositeByteBuf addComponent(ByteBuf buffer) {
/**
* Add the given {@link ByteBuf}s.
- *
+ * <p>
* Be aware that this method does not increase the {@code writerIndex} of the {@link CompositeByteBuf}.
* If you need to have it increased you need to handle it by your own.
- *
- * @param buffers the {@link ByteBuf}s to add
+ * <p>
+ * {@link ByteBuf#release()} ownership of all {@link ByteBuf} objects in {@code buffers} is transfered to this
+ * {@link CompositeByteBuf}.
+ * @param buffers the {@link ByteBuf}s to add. {@link ByteBuf#release()} ownership of all {@link ByteBuf#release()}
+ * ownership of all {@link ByteBuf} objects is transfered to this {@link CompositeByteBuf}.
*/
public CompositeByteBuf addComponents(ByteBuf... buffers) {
addComponents0(components.size(), buffers);
@@ -145,11 +153,14 @@ public CompositeByteBuf addComponents(ByteBuf... buffers) {
/**
* Add the given {@link ByteBuf}s.
- *
+ * <p>
* Be aware that this method does not increase the {@code writerIndex} of the {@link CompositeByteBuf}.
* If you need to have it increased you need to handle it by your own.
- *
- * @param buffers the {@link ByteBuf}s to add
+ * <p>
+ * {@link ByteBuf#release()} ownership of all {@link ByteBuf} objects in {@code buffers} is transfered to this
+ * {@link CompositeByteBuf}.
+ * @param buffers the {@link ByteBuf}s to add. {@link ByteBuf#release()} ownership of all {@link ByteBuf#release()}
+ * ownership of all {@link ByteBuf} objects is transfered to this {@link CompositeByteBuf}.
*/
public CompositeByteBuf addComponents(Iterable<ByteBuf> buffers) {
addComponents0(components.size(), buffers);
@@ -159,56 +170,73 @@ public CompositeByteBuf addComponents(Iterable<ByteBuf> buffers) {
/**
* Add the given {@link ByteBuf} on the specific index.
- *
+ * <p>
* Be aware that this method does not increase the {@code writerIndex} of the {@link CompositeByteBuf}.
* If you need to have it increased you need to handle it by your own.
- *
- * @param cIndex the index on which the {@link ByteBuf} will be added
- * @param buffer the {@link ByteBuf} to add
+ * <p>
+ * {@link ByteBuf#release()} ownership of {@code buffer} is transfered to this {@link CompositeByteBuf}.
+ * @param cIndex the index on which the {@link ByteBuf} will be added.
+ * @param buffer the {@link ByteBuf} to add. {@link ByteBuf#release()} ownership is transfered to this
+ * {@link CompositeByteBuf}.
*/
public CompositeByteBuf addComponent(int cIndex, ByteBuf buffer) {
+ checkNotNull(buffer, "buffer");
addComponent0(cIndex, buffer);
consolidateIfNeeded();
return this;
}
+ /**
+ * Precondition is that {@code buffer != null}.
+ */
private int addComponent0(int cIndex, ByteBuf buffer) {
- checkComponentIndex(cIndex);
-
- if (buffer == null) {
- throw new NullPointerException("buffer");
- }
+ assert buffer != null;
+ boolean wasAdded = false;
+ try {
+ checkComponentIndex(cIndex);
- int readableBytes = buffer.readableBytes();
+ int readableBytes = buffer.readableBytes();
- // No need to consolidate - just add a component to the list.
- Component c = new Component(buffer.order(ByteOrder.BIG_ENDIAN).slice());
- if (cIndex == components.size()) {
- components.add(c);
- if (cIndex == 0) {
- c.endOffset = readableBytes;
+ // No need to consolidate - just add a component to the list.
+ @SuppressWarnings("deprecation")
+ Component c = new Component(buffer.order(ByteOrder.BIG_ENDIAN).slice());
+ if (cIndex == components.size()) {
+ wasAdded = components.add(c);
+ if (cIndex == 0) {
+ c.endOffset = readableBytes;
+ } else {
+ Component prev = components.get(cIndex - 1);
+ c.offset = prev.endOffset;
+ c.endOffset = c.offset + readableBytes;
+ }
} else {
- Component prev = components.get(cIndex - 1);
- c.offset = prev.endOffset;
- c.endOffset = c.offset + readableBytes;
+ components.add(cIndex, c);
+ wasAdded = true;
+ if (readableBytes != 0) {
+ updateComponentOffsets(cIndex);
+ }
}
- } else {
- components.add(cIndex, c);
- if (readableBytes != 0) {
- updateComponentOffsets(cIndex);
+ return cIndex;
+ } finally {
+ if (!wasAdded) {
+ buffer.release();
}
}
- return cIndex;
}
/**
* Add the given {@link ByteBuf}s on the specific index
- *
+ * <p>
* Be aware that this method does not increase the {@code writerIndex} of the {@link CompositeByteBuf}.
* If you need to have it increased you need to handle it by your own.
- *
- * @param cIndex the index on which the {@link ByteBuf} will be added.
- * @param buffers the {@link ByteBuf}s to add
+ * <p>
+ * {@link ByteBuf#release()} ownership of all {@link ByteBuf} objects in {@code buffers} is transfered to this
+ * {@link CompositeByteBuf}.
+ * @param cIndex the index on which the {@link ByteBuf} will be added. {@link ByteBuf#release()} ownership of all
+ * {@link ByteBuf#release()} ownership of all {@link ByteBuf} objects is transfered to this
+ * {@link CompositeByteBuf}.
+ * @param buffers the {@link ByteBuf}s to add. {@link ByteBuf#release()} ownership of all {@link ByteBuf#release()}
+ * ownership of all {@link ByteBuf} objects is transfered to this {@link CompositeByteBuf}.
*/
public CompositeByteBuf addComponents(int cIndex, ByteBuf... buffers) {
addComponents0(cIndex, buffers);
@@ -217,24 +245,38 @@ public CompositeByteBuf addComponents(int cIndex, ByteBuf... buffers) {
}
private int addComponents0(int cIndex, ByteBuf... buffers) {
- checkComponentIndex(cIndex);
-
- if (buffers == null) {
- throw new NullPointerException("buffers");
- }
-
- // No need for consolidation
- for (ByteBuf b: buffers) {
- if (b == null) {
- break;
+ checkNotNull(buffers, "buffers");
+ int i = 0;
+ try {
+ checkComponentIndex(cIndex);
+
+ // No need for consolidation
+ while (i < buffers.length) {
+ // Increment i now to prepare for the next iteration and prevent a duplicate release (addComponent0
+ // will release if an exception occurs, and we also release in the finally block here).
+ ByteBuf b = buffers[i++];
+ if (b == null) {
+ break;
+ }
+ cIndex = addComponent0(cIndex, b) + 1;
+ int size = components.size();
+ if (cIndex > size) {
+ cIndex = size;
+ }
}
- cIndex = addComponent0(cIndex, b) + 1;
- int size = components.size();
- if (cIndex > size) {
- cIndex = size;
+ return cIndex;
+ } finally {
+ for (; i < buffers.length; ++i) {
+ ByteBuf b = buffers[i];
+ if (b != null) {
+ try {
+ b.release();
+ } catch (Throwable ignored) {
+ // ignore
+ }
+ }
}
}
- return cIndex;
}
/**
@@ -242,9 +284,13 @@ private int addComponents0(int cIndex, ByteBuf... buffers) {
*
* Be aware that this method does not increase the {@code writerIndex} of the {@link CompositeByteBuf}.
* If you need to have it increased you need to handle it by your own.
- *
+ * <p>
+ * {@link ByteBuf#release()} ownership of all {@link ByteBuf} objects in {@code buffers} is transfered to this
+ * {@link CompositeByteBuf}.
* @param cIndex the index on which the {@link ByteBuf} will be added.
- * @param buffers the {@link ByteBuf}s to add
+ * @param buffers the {@link ByteBuf}s to add. {@link ByteBuf#release()} ownership of all
+ * {@link ByteBuf#release()} ownership of all {@link ByteBuf} objects is transfered to this
+ * {@link CompositeByteBuf}.
*/
public CompositeByteBuf addComponents(int cIndex, Iterable<ByteBuf> buffers) {
addComponents0(cIndex, buffers);
@@ -253,21 +299,32 @@ public CompositeByteBuf addComponents(int cIndex, Iterable<ByteBuf> buffers) {
}
private int addComponents0(int cIndex, Iterable<ByteBuf> buffers) {
- if (buffers == null) {
- throw new NullPointerException("buffers");
- }
-
if (buffers instanceof ByteBuf) {
// If buffers also implements ByteBuf (e.g. CompositeByteBuf), it has to go to addComponent(ByteBuf).
return addComponent0(cIndex, (ByteBuf) buffers);
}
+ checkNotNull(buffers, "buffers");
if (!(buffers instanceof Collection)) {
List<ByteBuf> list = new ArrayList<ByteBuf>();
- for (ByteBuf b: buffers) {
- list.add(b);
+ try {
+ for (ByteBuf b: buffers) {
+ list.add(b);
+ }
+ buffers = list;
+ } finally {
+ if (buffers != list) {
+ for (ByteBuf b: buffers) {
+ if (b != null) {
+ try {
+ b.release();
+ } catch (Throwable ignored) {
+ // ignore
+ }
+ }
+ }
+ }
}
- buffers = list;
}
Collection<ByteBuf> col = (Collection<ByteBuf>) buffers;
@@ -1457,10 +1514,7 @@ public CompositeByteBuf discardReadBytes() {
}
private ByteBuf allocBuffer(int capacity) {
- if (direct) {
- return alloc().directBuffer(capacity);
- }
- return alloc().heapBuffer(capacity);
+ return direct ? alloc().directBuffer(capacity) : alloc().heapBuffer(capacity);
}
@Override
@@ -1482,7 +1536,6 @@ private static final class Component {
}
void freeIfNecessary() {
- // Unwrap so that we can free slices, too.
buf.release(); // We should not get a NPE here. If so, it must be a bug.
}
}
| null | train | train | 2016-01-29T08:00:46 | 2016-01-26T19:36:50Z | Scottmitch | val |
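The ownership-transfer semantics fixed in the record above (release the buffer even when the add fails) can be sketched with a minimal, self-contained refcount model — these are hypothetical classes, not Netty's actual `CompositeByteBuf`/`ByteBuf` API:

```java
// Minimal refcounted buffer stand-in (hypothetical, not Netty's ByteBuf).
final class RefBuf {
    private int refCnt = 1;
    int refCnt() { return refCnt; }
    void release() {
        if (refCnt <= 0) throw new IllegalStateException("already released");
        refCnt--;
    }
}

// Container that takes release() ownership of every buffer passed to add(),
// mirroring the contract clarified for CompositeByteBuf.addComponent.
final class OwningList {
    private final java.util.List<RefBuf> components = new java.util.ArrayList<>();
    private final int maxComponents;
    OwningList(int maxComponents) { this.maxComponents = maxComponents; }

    void add(RefBuf buf) {
        boolean added = false;
        try {
            if (components.size() >= maxComponents) {
                throw new IllegalStateException("full");
            }
            added = components.add(buf);
        } finally {
            if (!added) {
                buf.release(); // ownership was transferred, so free on failure too
            }
        }
    }
}
```

The `try`/`finally` with a `wasAdded` flag is the same shape the gold patch uses in `addComponent0`, so the caller never has to special-case failures.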
netty/netty/4573_4763 | netty/netty | netty/netty/4573 | netty/netty/4763 | [
"timestamp(timedelta=7.0, similarity=0.8969352858274092)"
] | ca305d86fb45a40ad03a2e7270d58da65eb4aa06 | 96a4cfe760f2d47958beeb1e16a6b8de096ca16c | [
"@louiscryan - makes sense.\n",
"@louiscryan - Any updates on this? It would be nice to get this in before the next release. I can take over if you don't have cycles.\n",
"@louiscryan - I'll take this over for now. We can improve tests later.\n",
"Sorry Scott. Let me dig up my PR tomorrow and at least send th... | [
"Why wouldn't we just increment by frame.size()? Is the padding not merged?\n",
"The padding is currently not necessarily additive when a merge occurs. It is currently taken as `max`.\n\nSee https://github.com/netty/netty/issues/4573 for details too.\n",
"@nmittler - PTAL. This is to address https://github.com... | 2016-01-27T02:25:35Z | [
"defect"
] | HTTP2 padding merging breaks flow control | @Scottmitch
@nmittler
In DefaultHttp2RemoteFlowController.enqueueFrame we increment the pendingBytes based on the size of the frame passed, regardless of whether it was merged. Since merging uses padding = max(padding, new.padding), the number of bytes added to the merged frame may be less than newFrame.size().
This eventually leads to the flow controller allocating bytes to write when there are no frames which causes an assertion.
As a general rule it would make sense for the flow-controller unit tests to always use padding.
Working on fix for the immediate issue. Found this while working on stress test to find memory leak
```
Caused by: io.netty.handler.codec.http2.Http2Exception: byte distribution write error
at io.netty.handler.codec.http2.Http2Exception.connectionError(Http2Exception.java:96) ~[classes/:na]
at io.netty.handler.codec.http2.PriorityStreamByteDistributor$WriteVisitor.visit(PriorityStreamByteDistributor.java:416) ~[classes/:na]
at io.netty.handler.codec.http2.DefaultHttp2Connection$ActiveStreams.forEachActiveStream(DefaultHttp2Connection.java:1134) ~[classes/:na]
at io.netty.handler.codec.http2.DefaultHttp2Connection.forEachActiveStream(DefaultHttp2Connection.java:135) ~[classes/:na]
at io.netty.handler.codec.http2.PriorityStreamByteDistributor$WriteVisitor.writeAllocatedBytes(PriorityStreamByteDistributor.java:396) ~[classes/:na]
at io.netty.handler.codec.http2.PriorityStreamByteDistributor.distribute(PriorityStreamByteDistributor.java:93) ~[classes/:na]
at io.netty.handler.codec.http2.DefaultHttp2RemoteFlowController$WritabilityMonitor.writePendingBytes(DefaultHttp2RemoteFlowController.java:743) ~[classes/:na]
at io.netty.handler.codec.http2.DefaultHttp2RemoteFlowController$DefaultWritabilityMonitor.writePendingBytes(DefaultHttp2RemoteFlowController.java:792) ~[classes/:na]
at io.netty.handler.codec.http2.DefaultHttp2RemoteFlowController.writePendingBytes(DefaultHttp2RemoteFlowController.java:275) ~[classes/:na]
at io.netty.handler.codec.http2.Http2ConnectionHandler.flush(Http2ConnectionHandler.java:349) ~[classes/:na]
at io.netty.handler.codec.http2.FindTheLeakTest$1.run(FindTheLeakTest.java:103) ~[test-classes/:na]
at io.netty.handler.codec.http2.Http2TestUtil$1.run(Http2TestUtil.java:46) ~[test-classes/:na]
... 4 common frames omitted
Caused by: java.lang.AssertionError: null
at io.netty.handler.codec.http2.PriorityStreamByteDistributor$PriorityState.updateStreamableBytes(PriorityStreamByteDistributor.java:349) ~[classes/:na]
at io.netty.handler.codec.http2.PriorityStreamByteDistributor.updateStreamableBytes(PriorityStreamByteDistributor.java:81) ~[classes/:na]
at io.netty.handler.codec.http2.DefaultHttp2RemoteFlowController$DefaultState.incrementStreamWindow(DefaultHttp2RemoteFlowController.java:399) ~[classes/:na]
at io.netty.handler.codec.http2.DefaultHttp2RemoteFlowController$DefaultState.decrementFlowControlWindow(DefaultHttp2RemoteFlowController.java:505) ~[classes/:na]
at io.netty.handler.codec.http2.DefaultHttp2RemoteFlowController$DefaultState.writeAllocatedBytes(DefaultHttp2RemoteFlowController.java:380) ~[classes/:na]
at io.netty.handler.codec.http2.DefaultHttp2RemoteFlowController$DefaultWritabilityMonitor$1.write(DefaultHttp2RemoteFlowController.java:786) ~[classes/:na]
at io.netty.handler.codec.http2.PriorityStreamByteDistributor$WriteVisitor.visit(PriorityStreamByteDistributor.java:412) ~[classes/:na]
... 14 common frames omitted
18:51:59.807 [defaultEventLoopGroup-3-1] WARN i.n.channel.DefaultChannelPipeline - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
java.lang.AssertionError: null
at io.netty.handler.codec.http2.PriorityStreamByteDistributor$PriorityState.updateStreamableBytes(PriorityStreamByteDistributor.java:349) ~[classes/:na]
at io.netty.handler.codec.http2.PriorityStreamByteDistributor.updateStreamableBytes(PriorityStreamByteDistributor.java:81) ~[classes/:na]
at io.netty.handler.codec.http2.DefaultHttp2RemoteFlowController$DefaultState.cancel(DefaultHttp2RemoteFlowController.java:476) ~[classes/:na]
at io.netty.handler.codec.http2.DefaultHttp2RemoteFlowController$DefaultState.cancel(DefaultHttp2RemoteFlowController.java:454) ~[classes/:na]
at io.netty.handler.codec.http2.DefaultHttp2RemoteFlowController$1.onStreamClosed(DefaultHttp2RemoteFlowController.java:111) ~[classes/:na]
at io.netty.handler.codec.http2.DefaultHttp2Connection.notifyClosed(DefaultHttp2Connection.java:263) ~[classes/:na]
at io.netty.handler.codec.http2.DefaultHttp2Connection$ActiveStreams.removeFromActiveStreams(DefaultHttp2Connection.java:1177) ~[classes/:na]
at io.netty.handler.codec.http2.DefaultHttp2Connection$ActiveStreams$2.process(DefaultHttp2Connection.java:1124) ~[classes/:na]
at io.netty.handler.codec.http2.DefaultHttp2Connection$ActiveStreams.forEachActiveStream(DefaultHttp2Connection.java:1148) ~[classes/:na]
at io.netty.handler.codec.http2.DefaultHttp2Connection.forEachActiveStream(DefaultHttp2Connection.java:135) ~[classes/:na]
at io.netty.handler.codec.http2.Http2ConnectionHandler$BaseDecoder.channelInactive(Http2ConnectionHandler.java:371) ~[classes/:na]
at io.netty.handler.codec.http2.Http2ConnectionHandler.channelInactive(Http2ConnectionHandler.java:569) ~[classes/:na]
at io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelInactiveNow(ChannelHandlerInvokerUtil.java:56) ~[classes/:na]
at io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelInactive(DefaultChannelHandlerInvoker.java:102) [classes/:na]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:132) [classes/:na]
at io.netty.channel.DefaultChannelPipeline.fireChannelInactive(DefaultChannelPipeline.java:910) [classes/:na]
at io.netty.channel.AbstractChannel$AbstractUnsafe$7.run(AbstractChannel.java:678) [classes/:na]
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:310) [classes/:na]
at io.netty.util.concurrent.SingleThreadEventExecutor.confirmShutdown(SingleThreadEventExecutor.java:609) [classes/:na]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:765) [classes/:na]
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137) [classes/:na]
at java.lang.Thread.run(Thread.java:745)
```
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java"
] | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java"
] | [
"codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowControllerTest.java"
] | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java
index f3435061360..8c5718a3713 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java
@@ -357,10 +357,11 @@ public void write(ChannelHandlerContext ctx, int allowedBytes) {
@Override
public boolean merge(ChannelHandlerContext ctx, Http2RemoteFlowController.FlowControlled next) {
- if (FlowControlledData.class != next.getClass()) {
+ FlowControlledData nextData;
+ if (FlowControlledData.class != next.getClass() ||
+ Integer.MAX_VALUE - (nextData = (FlowControlledData) next).size() < size()) {
return false;
}
- FlowControlledData nextData = (FlowControlledData) next;
nextData.queue.copyTo(queue);
// Given that we're merging data into a frame it doesn't really make sense to accumulate padding.
padding = Math.max(padding, nextData.padding);
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java
index c99f86d6278..0acedebdcfa 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java
@@ -404,9 +404,21 @@ public int pendingBytes() {
@Override
void enqueueFrame(FlowControlled frame) {
FlowControlled last = pendingWriteQueue.peekLast();
- if (last == null || !last.merge(ctx, frame)) {
- pendingWriteQueue.offer(frame);
+ if (last == null) {
+ enqueueFrameWithoutMerge(frame);
+ return;
+ }
+
+ int lastSize = last.size();
+ if (last.merge(ctx, frame)) {
+ incrementPendingBytes(last.size() - lastSize, true);
+ return;
}
+ enqueueFrameWithoutMerge(frame);
+ }
+
+ private void enqueueFrameWithoutMerge(FlowControlled frame) {
+ pendingWriteQueue.offer(frame);
// This must be called after adding to the queue in order so that hasFrame() is
// updated before updating the stream state.
incrementPendingBytes(frame.size(), true);
| diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowControllerTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowControllerTest.java
index 58742311dd2..d7f26f58765 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowControllerTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowControllerTest.java
@@ -212,6 +212,23 @@ public void payloadsShouldMerge() throws Http2Exception {
assertFalse(controller.isWritable(stream(STREAM_A)));
}
+ @Test
+ public void flowControllerCorrectlyAccountsForBytesWithMerge() throws Http2Exception {
+ controller.initialWindowSize(112); // This must be more than the total merged frame size 110
+ FakeFlowControlled data1 = new FakeFlowControlled(5, 2, true);
+ FakeFlowControlled data2 = new FakeFlowControlled(5, 100, true);
+ sendData(STREAM_A, data1);
+ sendData(STREAM_A, data2);
+ data1.assertNotWritten();
+ data1.assertNotWritten();
+ data2.assertMerged();
+ controller.writePendingBytes();
+ data1.assertFullyWritten();
+ data2.assertNotWritten();
+ verify(listener, never()).writabilityChanged(stream(STREAM_A));
+ assertTrue(controller.isWritable(stream(STREAM_A)));
+ }
+
@Test
public void stalledStreamShouldQueuePayloads() throws Http2Exception {
controller.initialWindowSize(0);
@@ -953,9 +970,10 @@ private void setChannelWritability(boolean isWritable) throws Http2Exception {
}
private static final class FakeFlowControlled implements Http2RemoteFlowController.FlowControlled {
-
- private int currentSize;
- private int originalSize;
+ private int currentPadding;
+ private int currentPayloadSize;
+ private int originalPayloadSize;
+ private int originalPadding;
private boolean writeCalled;
private final boolean mergeable;
private boolean merged;
@@ -963,20 +981,26 @@ private static final class FakeFlowControlled implements Http2RemoteFlowControll
private Throwable t;
private FakeFlowControlled(int size) {
- this.currentSize = size;
- this.originalSize = size;
- this.mergeable = false;
+ this(size, false);
}
private FakeFlowControlled(int size, boolean mergeable) {
- this.currentSize = size;
- this.originalSize = size;
+ this(size, 0, mergeable);
+ }
+
+ private FakeFlowControlled(int payloadSize, int padding, boolean mergeable) {
+ currentPayloadSize = originalPayloadSize = payloadSize;
+ currentPadding = originalPadding = padding;
this.mergeable = mergeable;
}
@Override
public int size() {
- return currentSize;
+ return currentPayloadSize + currentPadding;
+ }
+
+ private int originalSize() {
+ return originalPayloadSize + originalPadding;
}
@Override
@@ -990,28 +1014,36 @@ public void writeComplete() {
@Override
public void write(ChannelHandlerContext ctx, int allowedBytes) {
- if (allowedBytes <= 0 && currentSize != 0) {
+ if (allowedBytes <= 0 && size() != 0) {
// Write has been called but no data can be written
return;
}
writeCalled = true;
- int written = Math.min(currentSize, allowedBytes);
- currentSize -= written;
+ int written = Math.min(size(), allowedBytes);
+ if (written > currentPayloadSize) {
+ written -= currentPayloadSize;
+ currentPayloadSize = 0;
+ currentPadding -= written;
+ } else {
+ currentPayloadSize -= written;
+ }
}
@Override
public boolean merge(ChannelHandlerContext ctx, Http2RemoteFlowController.FlowControlled next) {
if (mergeable && next instanceof FakeFlowControlled) {
- this.originalSize += ((FakeFlowControlled) next).originalSize;
- this.currentSize += ((FakeFlowControlled) next).originalSize;
- ((FakeFlowControlled) next).merged = true;
+ FakeFlowControlled ffcNext = (FakeFlowControlled) next;
+ originalPayloadSize += ffcNext.originalPayloadSize;
+ currentPayloadSize += ffcNext.originalPayloadSize;
+ currentPadding = originalPadding = Math.max(originalPadding, ffcNext.originalPadding);
+ ffcNext.merged = true;
return true;
}
return false;
}
public int written() {
- return originalSize - currentSize;
+ return originalSize() - size();
}
public void assertNotWritten() {
@@ -1029,7 +1061,8 @@ public void assertPartiallyWritten(int expectedWritten, int delta) {
public void assertFullyWritten() {
assertTrue(writeCalled);
- assertEquals(0, currentSize);
+ assertEquals(0, currentPayloadSize);
+ assertEquals(0, currentPadding);
}
public boolean assertMerged() {
| test | train | 2016-01-27T18:55:43 | 2015-12-16T02:53:22Z | louiscryan | val |
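The accounting fix in the record above can be sketched with a self-contained model (hypothetical `Frame`/`PendingQueue` classes, not Netty's actual types): because padding is merged as `max()` rather than added, the queue must grow `pendingBytes` by the post-merge size delta, not by the incoming frame's size.

```java
// Hypothetical flow-controlled frame: payload merges additively, padding as max().
final class Frame {
    int payload, padding;
    Frame(int payload, int padding) { this.payload = payload; this.padding = padding; }
    int size() { return payload + padding; }
    boolean merge(Frame next) {
        payload += next.payload;
        padding = Math.max(padding, next.padding); // padding is not additive
        return true;
    }
}

// Pending-write queue that accounts bytes by delta after a merge,
// mirroring the enqueueFrame change in the gold patch.
final class PendingQueue {
    private final java.util.ArrayDeque<Frame> queue = new java.util.ArrayDeque<>();
    int pendingBytes;

    void enqueue(Frame frame) {
        Frame last = queue.peekLast();
        if (last != null) {
            int lastSize = last.size();           // snapshot before the merge
            if (last.merge(frame)) {
                pendingBytes += last.size() - lastSize; // delta, not frame.size()
                return;
            }
        }
        queue.offer(frame);
        pendingBytes += frame.size();
    }
}
```

With the two frames from the PR's test (payload 5 / padding 2, then payload 5 / padding 100), the merged frame is 110 bytes and `pendingBytes` matches — the old `+= frame.size()` code would have counted 112 and eventually tripped the byte-distributor assertion.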
netty/netty/4755_4769 | netty/netty | netty/netty/4755 | netty/netty/4769 | [
"timestamp(timedelta=23.0, similarity=0.8896217346671011)"
] | ee2558bdf3a27871a3fb61a02b48aa6850bcb5ca | b2a5bf8a7fd28600f40d6d7f9ccf54c25a1cea02 | [
"@mrniko you are right... I will fix it. In the meantime just create a new instance everytime.\n",
"@mrniko PTAL https://github.com/netty/netty/pull/4769\n",
"@normanmaurer PTAL ? Sorry, what? :)\n",
"@mrniko Please take a look ;) \n",
"@mrniko actually only WebSocketClientCompressionHandler can be `@Sharab... | [
"What is the need for the `INSTANCE` member if it can't be reused and the constructor is public?\n",
"ups sorry missed to remove it ... \n"
] | 2016-01-27T08:06:13Z | [
"improvement"
] | Add @Sharable annotation to WebSocketClientCompressionHandler | Netty version: netty-4.1.0.CR1
Unable to use WebSocketClientCompressionHandler and WebSocketServerCompressionHandler as single instances, because of the absence of the @Sharable annotation.
| [
"codec-http/src/main/java/io/netty/handler/codec/http/websocketx/extensions/compression/WebSocketClientCompressionHandler.java",
"example/src/main/java/io/netty/example/http/websocketx/client/WebSocketClient.java"
] | [
"codec-http/src/main/java/io/netty/handler/codec/http/websocketx/extensions/compression/WebSocketClientCompressionHandler.java",
"example/src/main/java/io/netty/example/http/websocketx/client/WebSocketClient.java"
] | [] | diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/extensions/compression/WebSocketClientCompressionHandler.java b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/extensions/compression/WebSocketClientCompressionHandler.java
index 3596ab658fa..bfd0375accf 100644
--- a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/extensions/compression/WebSocketClientCompressionHandler.java
+++ b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/extensions/compression/WebSocketClientCompressionHandler.java
@@ -15,6 +15,7 @@
*/
package io.netty.handler.codec.http.websocketx.extensions.compression;
+import io.netty.channel.ChannelHandler;
import io.netty.handler.codec.http.websocketx.extensions.WebSocketClientExtensionHandler;
/**
@@ -23,12 +24,12 @@
*
* See <tt>io.netty.example.http.websocketx.client.WebSocketClient</tt> for usage.
*/
-public class WebSocketClientCompressionHandler extends WebSocketClientExtensionHandler {
+@ChannelHandler.Sharable
+public final class WebSocketClientCompressionHandler extends WebSocketClientExtensionHandler {
- /**
- * Constructor with default configuration.
- */
- public WebSocketClientCompressionHandler() {
+ public static final WebSocketClientCompressionHandler INSTANCE = new WebSocketClientCompressionHandler();
+
+ private WebSocketClientCompressionHandler() {
super(new PerMessageDeflateClientExtensionHandshaker(),
new DeflateFrameClientExtensionHandshaker(false),
new DeflateFrameClientExtensionHandshaker(true));
diff --git a/example/src/main/java/io/netty/example/http/websocketx/client/WebSocketClient.java b/example/src/main/java/io/netty/example/http/websocketx/client/WebSocketClient.java
index 4c56d6c3ae1..8aa3b5cdbc2 100644
--- a/example/src/main/java/io/netty/example/http/websocketx/client/WebSocketClient.java
+++ b/example/src/main/java/io/netty/example/http/websocketx/client/WebSocketClient.java
@@ -113,7 +113,7 @@ protected void initChannel(SocketChannel ch) {
p.addLast(
new HttpClientCodec(),
new HttpObjectAggregator(8192),
- new WebSocketClientCompressionHandler(),
+ WebSocketClientCompressionHandler.INSTANCE,
handler);
}
});
| null | train | train | 2016-01-28T08:55:28 | 2016-01-26T11:19:46Z | mrniko | val |
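The gold patch above turns the handler into a stateless singleton: a private constructor, a `public static final INSTANCE`, and the `@Sharable` marker. Stripped of any Netty dependency, the same pattern can be sketched as follows (the class names and the local `@Sharable` annotation are illustrative stand-ins, not Netty's API):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public final class SharableHandlerSketch {

    // Marker annotation standing in for io.netty.channel.ChannelHandler.Sharable.
    @Retention(RetentionPolicy.RUNTIME)
    @interface Sharable {
    }

    @Sharable
    static final class CompressionHandler {
        // Single shared instance; safe only because the handler keeps no per-connection state.
        static final CompressionHandler INSTANCE = new CompressionHandler();

        private CompressionHandler() {
        }
    }

    public static void main(String[] args) {
        // Every pipeline would receive the exact same instance.
        if (CompressionHandler.INSTANCE != CompressionHandler.INSTANCE) {
            throw new AssertionError("singleton expected");
        }
        boolean sharable = CompressionHandler.class.isAnnotationPresent(Sharable.class);
        System.out.println("sharable=" + sharable);
    }
}
```

This is why the review comment above questions a public constructor next to an `INSTANCE` member: the pattern only makes sense once the constructor is private.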
netty/netty/4705_4772 | netty/netty | netty/netty/4705 | netty/netty/4772 | [
"timestamp(timedelta=65338.0, similarity=0.8604314867155635)"
] | bed84dde7a0e174abe684ba3ba2e7270c64204ef | f25342cdb235661d5c51c6c8c4b4b815dd1c8e93 | [
"Fixed in #4706\n",
"Corresponding Jython bug: http://bugs.jython.org/issue2401 - we should be able to build shortly, then Nick can see if he reproduce in his environment\n",
"Can we also merge this into 4.0?\n",
"This will need a more general fix... Looking into it now.\n\nAs workaround you can add a task o ... | [] | 2016-01-27T14:27:35Z | [
"defect"
] | Race condition in SslHandler | Originally a thread on the mailing list: https://groups.google.com/forum/#!topic/netty/h2ulbfw3uF8
(using Netty 4.0.33, via jython 2.7.1)
I'm wondering if this is a race condition/bug within netty itself or perhaps just a misuse of netty. I'm seeing the following stack trace intermittently when dealing with ssl sockets via jython:
```
java.lang.NullPointerException
at org.python.netty.handler.ssl.SslHandler.wrapNonAppData(SslHandler.java:585)
at org.python.netty.handler.ssl.SslHandler.handshake(SslHandler.java:1405)
at org.python.netty.handler.ssl.SslHandler.channelActive(SslHandler.java:1443)
at org.python.netty.channel.AbstractChannelHandlerContext.invokeChannelActive(AbstractChannelHandlerContext.java:193)
at org.python.netty.channel.AbstractChannelHandlerContext.fireChannelActive(AbstractChannelHandlerContext.java:179)
at org.python.netty.channel.DefaultChannelPipeline.fireChannelActive(DefaultChannelPipeline.java:817)
at org.python.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:260)
at org.python.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:290)
at org.python.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528)
at org.python.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at org.python.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at org.python.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at org.python.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112)
at java.lang.Thread.run(Thread.java:745)
```
You can ignore the 'org.python' prefix on the stack trace, thats just because it's running via Jython.
I did some code diving and it looks to me like this is a possible race condition between SslHandler and DefaultChannelPipeline. What I think is happening is this:
In jython, we create an SslHandler object. We then add that handler to our pipeline by calling addFirst:
https://github.com/netty/netty/blob/netty-4.0.33.Final/transport/src/main/java/io/netty/channel/DefaultChannelPipeline.java#L89
Eventually, that ends up calling callHandlerAdded:
https://github.com/netty/netty/blob/netty-4.0.33.Final/transport/src/main/java/io/netty/channel/DefaultChannelPipeline.java#L472
The behavior of callHandlerAdded depends on whether you are in the event loop or not. If you are, it immediately calls 'handlerAdded' on the handler, and if not, it submits a task to a threadpool that eventually calls handlerAdded. This is where I think the race condition is. SslHandler.handlerAdded(...) needs to be called _before_ SslHandler.channelActive(...) is called. Otherwise, you will get the NPE I pasted above because 'this.ctx' isn't set on the SslHandler object; that's done in the handlerAdded method.
| [
"src/main/java/org/jboss/netty/handler/ssl/SslHandler.java"
] | [
"src/main/java/org/jboss/netty/handler/ssl/SslHandler.java"
] | [] | diff --git a/src/main/java/org/jboss/netty/handler/ssl/SslHandler.java b/src/main/java/org/jboss/netty/handler/ssl/SslHandler.java
index 25ff7659f98..b5b1d1d6367 100644
--- a/src/main/java/org/jboss/netty/handler/ssl/SslHandler.java
+++ b/src/main/java/org/jboss/netty/handler/ssl/SslHandler.java
@@ -1005,8 +1005,9 @@ private void wrap(ChannelHandlerContext context, Channel channel) throws SSLExce
}
if (!success) {
- IllegalStateException cause =
- new IllegalStateException("SSLEngine already closed");
+ Exception cause = channel.isOpen()
+ ? new SSLException("SSLEngine already closed")
+ : new ClosedChannelException();
// Check if we had a pendingWrite in process, if so we need to also notify as otherwise
// the ChannelFuture will never get notified
| null | train | train | 2015-10-13T12:09:11 | 2016-01-13T22:13:45Z | nickmbailey | val |
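The jython report above boils down to an ordering requirement: `handlerAdded` must initialise the handler's context field before `channelActive` dereferences it. A stripped-down sketch of that dependency — no Netty types, all names illustrative — reproduces the NPE when the calls are inverted:

```java
public final class HandlerOrderingSketch {

    static final class Context {
        final String name;
        Context(String name) { this.name = name; }
    }

    static final class SslLikeHandler {
        private Context ctx; // set by handlerAdded, read by channelActive

        void handlerAdded(Context ctx) {
            this.ctx = ctx;
        }

        void channelActive() {
            // Mirrors SslHandler.channelActive -> handshake -> wrapNonAppData,
            // which NPEs when ctx was never initialised.
            if (ctx == null) {
                throw new NullPointerException("handlerAdded was not called first");
            }
            System.out.println("handshake started on " + ctx.name);
        }
    }

    public static void main(String[] args) {
        // Correct order: the pipeline guarantees handlerAdded before channel events.
        SslLikeHandler handler = new SslLikeHandler();
        handler.handlerAdded(new Context("channel-1"));
        handler.channelActive();

        // Inverted order reproduces the reported NPE.
        SslLikeHandler racy = new SslLikeHandler();
        try {
            racy.channelActive();
            throw new AssertionError("expected NPE");
        } catch (NullPointerException expected) {
            System.out.println("reproduced: " + expected.getMessage());
        }
    }
}
```

In the real bug the inversion happens across threads, because the deferred `handlerAdded` task races with the connect completing on the event loop.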
netty/netty/4746_4791 | netty/netty | netty/netty/4746 | netty/netty/4791 | [
"timestamp(timedelta=21.0, similarity=0.8585134029208575)"
] | a06708f81bc69ae6fbbe2638b25e7eee922f5efc | bd290e904f44553366085a4589f217302cff2c47 | [
"@jknair I think this is the same issue as :\n\nhttps://github.com/netty/netty/issues/4722\n\nLooking into it atm.\n",
"@jknair can you share a \"full\" reproducer ?\n",
"sorry for the delay, I will try to make one. Problem is I cant find any SNI enabled public host that will fail the handshake \n",
"@jknair ... | [] | 2016-01-29T19:43:07Z | [
"defect"
] | Enabling SNI, OpenSSL client from netty does not work | Netty version : 4.1 - ff11fe8 also tried with netty-4.1.0.Beta8
When trying to connect to a host with SNI enabled from the netty openssl client, handshake fails as :
```
OpenSSL error: error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure
```
same issue when you use the openssl s_client :
```
openssl s_client -connect sni-enabled-host:443
140133327337120:error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure:s23_clnt.c:770:
```
and works well for :
```
openssl s_client -connect sni-enabled-host:443 -servername sni-enabled-host
```
The following code fails with Openssl and gives the same error but works fine with Openssl disabled (removed netty-tcnative)
```
SSLEngine sslEngine = sslCtx.newEngine(ctx.channel().alloc(), peerHost,peerPort));
```
```
java -version
java version "1.8.0_66"
uname -a
Linux linuxbox 4.2.1-040201-generic #201509211431 SMP Mon Sep 21 18:34:44 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
openssl version
OpenSSL 1.0.1f 6 Jan 2014
```
| [
"handler/src/main/java/io/netty/handler/ssl/OpenSslEngine.java",
"pom.xml"
] | [
"handler/src/main/java/io/netty/handler/ssl/OpenSslEngine.java",
"pom.xml"
] | [
"handler/src/test/java/io/netty/handler/ssl/SniClientTest.java"
] | diff --git a/handler/src/main/java/io/netty/handler/ssl/OpenSslEngine.java b/handler/src/main/java/io/netty/handler/ssl/OpenSslEngine.java
index d70c9525443..0644023ee7b 100644
--- a/handler/src/main/java/io/netty/handler/ssl/OpenSslEngine.java
+++ b/handler/src/main/java/io/netty/handler/ssl/OpenSslEngine.java
@@ -222,6 +222,12 @@ public OpenSslEngine(long sslCtx, ByteBufAllocator alloc,
// Set the client auth mode, this needs to be done via setClientAuth(...) method so we actually call the
// needed JNI methods.
setClientAuth(clientMode ? ClientAuth.NONE : checkNotNull(clientAuth, "clientAuth"));
+
+ // Use SNI if peerHost was specified
+ // See https://github.com/netty/netty/issues/4746
+ if (clientMode && peerHost != null) {
+ SSL.setTlsExtHostName(ssl, peerHost);
+ }
}
@Override
diff --git a/pom.xml b/pom.xml
index e5ae5d4f92a..be65203dbbf 100644
--- a/pom.xml
+++ b/pom.xml
@@ -216,7 +216,7 @@
<!-- Fedora-"like" systems. This is currently only used for the netty-tcnative dependency -->
<os.detection.classifierWithLikes>fedora</os.detection.classifierWithLikes>
<tcnative.artifactId>netty-tcnative</tcnative.artifactId>
- <tcnative.version>1.1.33.Fork11</tcnative.version>
+ <tcnative.version>1.1.33.Fork12</tcnative.version>
<tcnative.classifier>${os.detected.classifier}</tcnative.classifier>
<epoll.classifier>${os.detected.name}-${os.detected.arch}</epoll.classifier>
</properties>
| diff --git a/handler/src/test/java/io/netty/handler/ssl/SniClientTest.java b/handler/src/test/java/io/netty/handler/ssl/SniClientTest.java
new file mode 100644
index 00000000000..2af2da48b12
--- /dev/null
+++ b/handler/src/test/java/io/netty/handler/ssl/SniClientTest.java
@@ -0,0 +1,104 @@
+/*
+ * Copyright 2016 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License,
+ * version 2.0 (the "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+package io.netty.handler.ssl;
+
+import io.netty.bootstrap.Bootstrap;
+import io.netty.bootstrap.ServerBootstrap;
+import io.netty.buffer.ByteBufAllocator;
+import io.netty.channel.Channel;
+import io.netty.channel.ChannelInitializer;
+import io.netty.channel.DefaultEventLoopGroup;
+import io.netty.channel.EventLoopGroup;
+import io.netty.channel.local.LocalAddress;
+import io.netty.channel.local.LocalChannel;
+import io.netty.channel.local.LocalServerChannel;
+import io.netty.handler.ssl.util.InsecureTrustManagerFactory;
+import io.netty.handler.ssl.util.SelfSignedCertificate;
+import io.netty.util.Mapping;
+import io.netty.util.concurrent.Promise;
+import org.junit.Assert;
+import org.junit.Assume;
+import org.junit.Test;
+
+public class SniClientTest {
+
+ @Test
+ public void testSniClientJdkSslServerJdkSsl() throws Exception {
+ testSniClient(SslProvider.JDK, SslProvider.JDK);
+ }
+
+ @Test
+ public void testSniClientOpenSslServerOpenSsl() throws Exception {
+ Assume.assumeTrue(OpenSsl.isAvailable());
+ testSniClient(SslProvider.OPENSSL, SslProvider.OPENSSL);
+ }
+
+ @Test
+ public void testSniClientJdkSslServerOpenSsl() throws Exception {
+ Assume.assumeTrue(OpenSsl.isAvailable());
+ testSniClient(SslProvider.JDK, SslProvider.OPENSSL);
+ }
+
+ @Test
+ public void testSniClientOpenSslServerJdkSsl() throws Exception {
+ Assume.assumeTrue(OpenSsl.isAvailable());
+ testSniClient(SslProvider.OPENSSL, SslProvider.JDK);
+ }
+
+ private static void testSniClient(SslProvider sslClientProvider, SslProvider sslServerProvider) throws Exception {
+ final String sniHost = "sni.netty.io";
+ LocalAddress address = new LocalAddress("test");
+ EventLoopGroup group = new DefaultEventLoopGroup(1);
+ Channel sc = null;
+ Channel cc = null;
+ try {
+ SelfSignedCertificate cert = new SelfSignedCertificate();
+ final SslContext sslServerContext = SslContextBuilder.forServer(cert.key(), cert.cert())
+ .sslProvider(sslServerProvider).build();
+
+ final Promise<String> promise = group.next().newPromise();
+ ServerBootstrap sb = new ServerBootstrap();
+ sc = sb.group(group).channel(LocalServerChannel.class).childHandler(new ChannelInitializer<Channel>() {
+ @Override
+ protected void initChannel(Channel ch) throws Exception {
+ ch.pipeline().addFirst(new SniHandler(new Mapping<String, SslContext>() {
+ @Override
+ public SslContext map(String input) {
+ promise.setSuccess(input);
+ return sslServerContext;
+ }
+ }));
+ }
+ }).bind(address).syncUninterruptibly().channel();
+
+ SslContext sslContext = SslContextBuilder.forClient().trustManager(InsecureTrustManagerFactory.INSTANCE)
+ .sslProvider(sslClientProvider).build();
+ Bootstrap cb = new Bootstrap();
+ cc = cb.group(group).channel(LocalChannel.class).handler(new SslHandler(
+ sslContext.newEngine(ByteBufAllocator.DEFAULT, sniHost, -1)))
+ .connect(address).syncUninterruptibly().channel();
+ Assert.assertEquals(sniHost, promise.syncUninterruptibly().getNow());
+ } finally {
+ if (cc != null) {
+ cc.close();
+ }
+ if (sc != null) {
+ sc.close();
+ }
+ group.shutdownGracefully();
+ }
+ }
+}
| train | train | 2016-02-02T21:43:38 | 2016-01-22T16:08:35Z | jknair | val |
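The fix above teaches the OpenSSL-backed engine to call `SSL.setTlsExtHostName` when a peer host was supplied. For comparison, with the plain JDK provider (the path that already worked for the reporter) SNI can also be set explicitly through the standard `javax.net.ssl` API — the host name below is just an example value:

```java
import java.util.Collections;
import java.util.List;
import javax.net.ssl.SNIHostName;
import javax.net.ssl.SNIServerName;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLParameters;

public final class JdkSniExample {

    // Returns a client SSLEngine explicitly configured to send the given SNI host name.
    static SSLEngine clientEngineWithSni(String host, int port) throws Exception {
        SSLContext context = SSLContext.getDefault();
        SSLEngine engine = context.createSSLEngine(host, port);
        engine.setUseClientMode(true);

        SSLParameters params = engine.getSSLParameters();
        params.setServerNames(Collections.<SNIServerName>singletonList(new SNIHostName(host)));
        engine.setSSLParameters(params);
        return engine;
    }

    public static void main(String[] args) throws Exception {
        SSLEngine engine = clientEngineWithSni("sni-enabled-host.example", 443);
        List<SNIServerName> names = engine.getSSLParameters().getServerNames();
        System.out.println("SNI names: " + names);
    }
}
```

This mirrors what `SniClientTest` in the test patch exercises end-to-end: an engine created with a peer host must surface that host in the server's SNI callback.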
netty/netty/4828_4843 | netty/netty | netty/netty/4828 | netty/netty/4843 | [
"timestamp(timedelta=36.0, similarity=0.8991627586810774)"
] | 027e8224e41f85cc125bf1d69f15c342c997ccac | 885dffa64cdc4e7aaa2402b81208ffeefb20d0a7 | [
"@floragunncom which java version and java brand ?\n",
"Oracle 1.8.0_45 on Mac OS Yosemite\n\njava version \"1.8.0_45\"\nJava(TM) SE Runtime Environment (build 1.8.0_45-b14)\nJava HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)\n\nI need to create a new Open SSL Engine and i do it with:\n\n<pre>\nsslCo... | [] | 2016-02-05T07:05:29Z | [
"defect"
] | OpenSslContext throws UnsupportedOperationException when Unsafe not available | Netty 4.0.34.Final on OS X, Unsafe not available due to SecurityManager and restrictive policy file.
So this warning is issued: "[io.netty.util.internal.PlatformDependent] Your platform does not provide complete low-level API for accessing direct buffers reliably. Unless explicitly requested, heap buffer will always be preferred to avoid potential system unstability."
OpenSslContext constructor fails with a UnsupportedOperationException because in OpenSslContext.java:542 "UnpooledDirectByteBuf.memoryAddress()" is not supported.
<pre>
Caused by: java.lang.UnsupportedOperationException
at io.netty.buffer.UnpooledDirectByteBuf.memoryAddress(UnpooledDirectByteBuf.java:210)
at io.netty.handler.ssl.OpenSslContext.newBIO(OpenSslContext.java:542)
at io.netty.handler.ssl.OpenSslContext.toBIO(OpenSslContext.java:533)
at io.netty.handler.ssl.OpenSslClientContext.<init>(OpenSslClientContext.java:289)
</pre>
| [
"handler/src/main/java/io/netty/handler/ssl/OpenSsl.java",
"handler/src/main/java/io/netty/handler/ssl/OpenSslContext.java",
"handler/src/main/java/io/netty/handler/ssl/OpenSslEngine.java"
] | [
"handler/src/main/java/io/netty/handler/ssl/OpenSsl.java",
"handler/src/main/java/io/netty/handler/ssl/OpenSslContext.java",
"handler/src/main/java/io/netty/handler/ssl/OpenSslEngine.java"
] | [] | diff --git a/handler/src/main/java/io/netty/handler/ssl/OpenSsl.java b/handler/src/main/java/io/netty/handler/ssl/OpenSsl.java
index 619768aa67a..4b67ead89fc 100644
--- a/handler/src/main/java/io/netty/handler/ssl/OpenSsl.java
+++ b/handler/src/main/java/io/netty/handler/ssl/OpenSsl.java
@@ -16,9 +16,11 @@
package io.netty.handler.ssl;
+import io.netty.buffer.ByteBuf;
import io.netty.util.internal.NativeLibraryLoader;
import io.netty.util.internal.logging.InternalLogger;
import io.netty.util.internal.logging.InternalLoggerFactory;
+import org.apache.tomcat.jni.Buffer;
import org.apache.tomcat.jni.Library;
import org.apache.tomcat.jni.Pool;
import org.apache.tomcat.jni.SSL;
@@ -190,5 +192,10 @@ static boolean isError(long errorCode) {
return errorCode != SSL.SSL_ERROR_NONE;
}
+ static long memoryAddress(ByteBuf buf) {
+ assert buf.isDirect();
+ return buf.hasMemoryAddress() ? buf.memoryAddress() : Buffer.address(buf.nioBuffer());
+ }
+
private OpenSsl() { }
}
diff --git a/handler/src/main/java/io/netty/handler/ssl/OpenSslContext.java b/handler/src/main/java/io/netty/handler/ssl/OpenSslContext.java
index 22b74f2179a..8b3c7ea656f 100644
--- a/handler/src/main/java/io/netty/handler/ssl/OpenSslContext.java
+++ b/handler/src/main/java/io/netty/handler/ssl/OpenSslContext.java
@@ -544,7 +544,7 @@ static long toBIO(X509Certificate[] certChain) throws Exception {
private static long newBIO(ByteBuf buffer) throws Exception {
long bio = SSL.newMemBIO();
int readable = buffer.readableBytes();
- if (SSL.writeToBIO(bio, buffer.memoryAddress(), readable) != readable) {
+ if (SSL.writeToBIO(bio, OpenSsl.memoryAddress(buffer), readable) != readable) {
SSL.freeBIO(bio);
throw new IllegalStateException("Could not write data to memory BIO");
}
diff --git a/handler/src/main/java/io/netty/handler/ssl/OpenSslEngine.java b/handler/src/main/java/io/netty/handler/ssl/OpenSslEngine.java
index 420581661da..a55b9c8c1c5 100644
--- a/handler/src/main/java/io/netty/handler/ssl/OpenSslEngine.java
+++ b/handler/src/main/java/io/netty/handler/ssl/OpenSslEngine.java
@@ -52,6 +52,7 @@
import java.util.concurrent.atomic.AtomicIntegerFieldUpdater;
import static io.netty.handler.ssl.ApplicationProtocolConfig.SelectedListenerFailureBehavior;
+import static io.netty.handler.ssl.OpenSsl.memoryAddress;
import static io.netty.util.internal.ObjectUtil.checkNotNull;
import static javax.net.ssl.SSLEngineResult.HandshakeStatus.*;
import static javax.net.ssl.SSLEngineResult.Status.*;
@@ -1235,14 +1236,6 @@ private HandshakeStatus handshake() throws SSLException {
return FINISHED;
}
- private static long memoryAddress(ByteBuf buf) {
- if (buf.hasMemoryAddress()) {
- return buf.memoryAddress();
- } else {
- return Buffer.address(buf.nioBuffer());
- }
- }
-
private SSLEngineResult.Status getEngineStatus() {
return engineClosed? CLOSED : OK;
}
| null | train | train | 2016-02-04T15:34:06 | 2016-02-03T18:47:57Z | floragunn | val |
netty/netty/4834_4844 | netty/netty | netty/netty/4834 | netty/netty/4844 | [
"timestamp(timedelta=19.0, similarity=0.8923300350807587)"
] | 75a2ddd61c3df0b7a1e6db1cc5c588177b24c3cf | 4198a453b43c1a19a0f1d968e27f25e627c65caf | [
"cc: @nmittler & @normanmaurer\n",
"Why not use a CHMv8 here? It seems the original concern was `AddressResolver` instantiation cost, but that's what `computeIfAbsent` is for. This would let us remove all the synchronized blocks.\n",
"Because it's java8 only... Will fix the race though\n\n> Am 03.02.2016 um 23:... | [] | 2016-02-05T07:06:20Z | [
"defect"
] | Race in AddressResolverGroup | https://github.com/netty/netty/blob/4.1/resolver/src/main/java/io/netty/resolver/AddressResolverGroup.java
Here, `resolvers` is synchronized when calling .get(), but not in the future callback `resolvers.remove(executor);`
| [
"resolver/src/main/java/io/netty/resolver/AddressResolverGroup.java"
] | [
"resolver/src/main/java/io/netty/resolver/AddressResolverGroup.java"
] | [] | diff --git a/resolver/src/main/java/io/netty/resolver/AddressResolverGroup.java b/resolver/src/main/java/io/netty/resolver/AddressResolverGroup.java
index 89b00dfa044..509475eb3ea 100644
--- a/resolver/src/main/java/io/netty/resolver/AddressResolverGroup.java
+++ b/resolver/src/main/java/io/netty/resolver/AddressResolverGroup.java
@@ -73,7 +73,9 @@ public AddressResolver<T> getResolver(final EventExecutor executor) {
executor.terminationFuture().addListener(new FutureListener<Object>() {
@Override
public void operationComplete(Future<Object> future) throws Exception {
- resolvers.remove(executor);
+ synchronized (resolvers) {
+ resolvers.remove(executor);
+ }
newResolver.close();
}
});
| null | train | train | 2016-02-04T16:51:44 | 2016-02-03T21:44:39Z | carl-mastrangelo | val |
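The one-line fix above simply extends the existing lock discipline to the removal path: every access to a map guarded by `synchronized (map)` must hold that lock, including removals triggered from termination callbacks. The discipline can be sketched without Netty (class and field names are illustrative):

```java
import java.util.IdentityHashMap;
import java.util.Map;

public final class GuardedMapSketch {

    // Guarded by synchronized (resolvers), as in AddressResolverGroup.
    private final Map<Object, String> resolvers = new IdentityHashMap<Object, String>();

    String getOrCreate(Object executor) {
        synchronized (resolvers) {
            String r = resolvers.get(executor);
            if (r == null) {
                r = "resolver-for-" + executor.hashCode();
                resolvers.put(executor, r);
            }
            return r;
        }
    }

    // Callback path: before the fix this ran without the lock; now it takes it too.
    void onExecutorTerminated(Object executor) {
        synchronized (resolvers) {
            resolvers.remove(executor);
        }
    }

    int size() {
        synchronized (resolvers) {
            return resolvers.size();
        }
    }

    public static void main(String[] args) {
        GuardedMapSketch g = new GuardedMapSketch();
        Object executor = new Object();
        String r1 = g.getOrCreate(executor);
        String r2 = g.getOrCreate(executor);
        if (r1 != r2) {
            throw new AssertionError("expected the cached resolver instance");
        }
        g.onExecutorTerminated(executor);
        System.out.println("size after removal: " + g.size());
    }
}
```

As the review thread notes, on Java 8 a `ConcurrentHashMap` with `computeIfAbsent` would remove the explicit locking entirely; the `synchronized` form shown here is what the Java 6-compatible codebase required.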
netty/netty/4835_4852 | netty/netty | netty/netty/4835 | netty/netty/4852 | [
"timestamp(timedelta=33.0, similarity=0.917887492488813)"
] | 61f812ea2a38fe20845ae72cbee8729606e01aa3 | 9180686bac28a7f5c64ffdd03f758e3b529d4015 | [
"sgtm\n",
"Fixed in #4852\n",
"Thanks for the patch @windie !\n"
] | [] | 2016-02-06T00:55:08Z | [
"defect"
] | CorsConfigBuilder allowNullOrigin method should be public instead of package private | Netty version: 4.1.0-CR1
Class:
io.netty.handler.codec.http.cors.CorsConfigBuilder
Method:
method "allowNullOrigin" should be public but it is package private (no modifier)
| [
"codec-http/src/main/java/io/netty/handler/codec/http/cors/CorsConfigBuilder.java"
] | [
"codec-http/src/main/java/io/netty/handler/codec/http/cors/CorsConfigBuilder.java"
] | [] | diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/cors/CorsConfigBuilder.java b/codec-http/src/main/java/io/netty/handler/codec/http/cors/CorsConfigBuilder.java
index 81e0a8a2137..52457b32317 100644
--- a/codec-http/src/main/java/io/netty/handler/codec/http/cors/CorsConfigBuilder.java
+++ b/codec-http/src/main/java/io/netty/handler/codec/http/cors/CorsConfigBuilder.java
@@ -103,7 +103,7 @@ public static CorsConfigBuilder forOrigins(final String... origins) {
*
* @return {@link CorsConfigBuilder} to support method chaining.
*/
- CorsConfigBuilder allowNullOrigin() {
+ public CorsConfigBuilder allowNullOrigin() {
allowNullOrigin = true;
return this;
}
| null | val | train | 2016-02-05T23:39:55 | 2016-02-04T06:18:39Z | odbuser2 | val |
netty/netty/3435_4853 | netty/netty | netty/netty/3435 | netty/netty/4853 | [
"timestamp(timedelta=247.0, similarity=0.9668094919673911)"
] | a15ff32608f000b5b01ab00262f8b2df34b2d08c | 2649f999632d61d1b76b38f51652c9d22dbbedfa | [
"Thanks for capturing this issue @danbev!\n\nJust as #3271 added the ability to automatically combine and CSV escape values associated with the same header name; we should also add the ability to automatically unescape values returned by the accessor methods. For example `CharSequence get(Charsequence name)` in `D... | [
"++i would be better.\n",
"It doesn't look like the `or the value unchanged` is honored like in `escapeCsv`\n",
"Doesn't really matter. Javac generates the same bytes for both `i++` and `++i` if the return value is not used.\n",
"Fixed. Now just return the value if it's not escaped.\n",
"Remove the else\n",... | 2016-02-06T22:36:36Z | [
"improvement"
] | Add unescapeCsv method to StringUtil. | For the solution for #3237 involved adding a method named `escapeCsv` to StringUtil. We should also add the counter part of this, `unescapeCsv`, to StringUtil.
| [
"common/src/main/java/io/netty/util/internal/StringUtil.java"
] | [
"common/src/main/java/io/netty/util/internal/StringUtil.java"
] | [
"common/src/test/java/io/netty/util/internal/StringUtilTest.java"
] | diff --git a/common/src/main/java/io/netty/util/internal/StringUtil.java b/common/src/main/java/io/netty/util/internal/StringUtil.java
index 473ba597d1f..e60c734bb00 100644
--- a/common/src/main/java/io/netty/util/internal/StringUtil.java
+++ b/common/src/main/java/io/netty/util/internal/StringUtil.java
@@ -387,6 +387,67 @@ public static CharSequence escapeCsv(CharSequence value) {
escaped.append(DOUBLE_QUOTE) : value;
}
+ /**
+ * Unescapes the specified escaped CSV field, if necessary according to
+ * <a href="https://tools.ietf.org/html/rfc4180#section-2">RFC-4180</a>.
+ *
+ * @param value The escaped CSV field which will be unescaped according to
+ * <a href="https://tools.ietf.org/html/rfc4180#section-2">RFC-4180</a>
+ * @return {@link CharSequence} the unescaped value if necessary, or the value unchanged
+ */
+ public static CharSequence unescapeCsv(CharSequence value) {
+ int length = checkNotNull(value, "value").length();
+ if (length == 0) {
+ return value;
+ }
+ int last = length - 1;
+ boolean quoted = isDoubleQuote(value.charAt(0)) && isDoubleQuote(value.charAt(last)) && length != 1;
+ if (!quoted) {
+ validateCsvFormat(value);
+ return value;
+ }
+ StringBuilder unescaped = InternalThreadLocalMap.get().stringBuilder();
+ for (int i = 1; i < last; i++) {
+ char current = value.charAt(i);
+ if (current == DOUBLE_QUOTE) {
+ if (isDoubleQuote(value.charAt(i + 1)) && (i + 1) != last) {
+ // Followed by a double-quote but not the last character
+ // Just skip the next double-quote
+ i++;
+ } else {
+ // Not followed by a double-quote or the following double-quote is the last character
+ throw newInvalidEscapedCsvFieldException(value, i);
+ }
+ }
+ unescaped.append(current);
+ }
+ return unescaped.toString();
+ }
+
+ /**
+ * Validate if {@code value} is a valid csv field without double-quotes.
+ *
+ * @throws IllegalArgumentException if {@code value} needs to be encoded with double-quotes.
+ */
+ private static void validateCsvFormat(CharSequence value) {
+ int length = value.length();
+ for (int i = 0; i < length; i++) {
+ switch (value.charAt(i)) {
+ case DOUBLE_QUOTE:
+ case LINE_FEED:
+ case CARRIAGE_RETURN:
+ case COMMA:
+ // If value contains any special character, it should be enclosed with double-quotes
+ throw newInvalidEscapedCsvFieldException(value, i);
+ default:
+ }
+ }
+ }
+
+ private static IllegalArgumentException newInvalidEscapedCsvFieldException(CharSequence value, int index) {
+ return new IllegalArgumentException("invalid escaped CSV field: " + value + " index: " + index);
+ }
+
/**
* Get the length of a string, {@code null} input is considered {@code 0} length.
*/
| diff --git a/common/src/test/java/io/netty/util/internal/StringUtilTest.java b/common/src/test/java/io/netty/util/internal/StringUtilTest.java
index 66229dacb17..3084094f00a 100644
--- a/common/src/test/java/io/netty/util/internal/StringUtilTest.java
+++ b/common/src/test/java/io/netty/util/internal/StringUtilTest.java
@@ -321,6 +321,61 @@ private static void escapeCsv(CharSequence value, CharSequence expected) {
}
}
+ @Test
+ public void testUnescapeCsv() {
+ assertEquals("", unescapeCsv(""));
+ assertEquals("\"", unescapeCsv("\"\"\"\""));
+ assertEquals("\"\"", unescapeCsv("\"\"\"\"\"\""));
+ assertEquals("\"\"\"", unescapeCsv("\"\"\"\"\"\"\"\""));
+ assertEquals("\"netty\"", unescapeCsv("\"\"\"netty\"\"\""));
+ assertEquals("netty", unescapeCsv("netty"));
+ assertEquals("netty", unescapeCsv("\"netty\""));
+ assertEquals("\r", unescapeCsv("\"\r\""));
+ assertEquals("\n", unescapeCsv("\"\n\""));
+ assertEquals("hello,netty", unescapeCsv("\"hello,netty\""));
+ }
+
+ @Test(expected = IllegalArgumentException.class)
+ public void unescapeCsvWithSingleQuote() {
+ unescapeCsv("\"");
+ }
+
+ @Test(expected = IllegalArgumentException.class)
+ public void unescapeCsvWithOddQuote() {
+ unescapeCsv("\"\"\"");
+ }
+
+ @Test(expected = IllegalArgumentException.class)
+ public void unescapeCsvWithCRAndWithoutQuote() {
+ unescapeCsv("\r");
+ }
+
+ @Test(expected = IllegalArgumentException.class)
+ public void unescapeCsvWithLFAndWithoutQuote() {
+ unescapeCsv("\n");
+ }
+
+ @Test(expected = IllegalArgumentException.class)
+ public void unescapeCsvWithCommaAndWithoutQuote() {
+ unescapeCsv(",");
+ }
+
+ @Test
+ public void escapeCsvAndUnEscapeCsv() {
+ assertEscapeCsvAndUnEscapeCsv("");
+ assertEscapeCsvAndUnEscapeCsv("netty");
+ assertEscapeCsvAndUnEscapeCsv("hello,netty");
+ assertEscapeCsvAndUnEscapeCsv("hello,\"netty\"");
+ assertEscapeCsvAndUnEscapeCsv("\"");
+ assertEscapeCsvAndUnEscapeCsv(",");
+ assertEscapeCsvAndUnEscapeCsv("\r");
+ assertEscapeCsvAndUnEscapeCsv("\n");
+ }
+
+ private void assertEscapeCsvAndUnEscapeCsv(String value) {
+ assertEquals(value, unescapeCsv(StringUtil.escapeCsv(value)));
+ }
+
@Test
public void testSimpleClassName() throws Exception {
testSimpleClassName(String.class);
| test | train | 2016-02-08T06:23:29 | 2015-02-18T08:04:21Z | danbev | val |
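The `unescapeCsv` logic added by the patch above can be adapted into a standalone sketch — returning a plain `String`, using an ordinary `StringBuilder`, and dropping the Netty utility helpers — while keeping the same RFC 4180 rules: a quoted field has its doubled quotes collapsed, and an unquoted field may not contain a quote, comma, CR, or LF:

```java
public final class CsvUnescapeSketch {

    static String unescapeCsv(String value) {
        int length = value.length();
        if (length == 0) {
            return value;
        }
        int last = length - 1;
        boolean quoted = length != 1 && value.charAt(0) == '"' && value.charAt(last) == '"';
        if (!quoted) {
            // An unquoted field may not contain special characters.
            for (int i = 0; i < length; i++) {
                char c = value.charAt(i);
                if (c == '"' || c == ',' || c == '\r' || c == '\n') {
                    throw new IllegalArgumentException("invalid escaped CSV field: " + value);
                }
            }
            return value;
        }
        StringBuilder unescaped = new StringBuilder(length - 2);
        for (int i = 1; i < last; i++) {
            char current = value.charAt(i);
            if (current == '"') {
                if (i + 1 != last && value.charAt(i + 1) == '"') {
                    i++; // collapse the doubled quote into one
                } else {
                    throw new IllegalArgumentException("invalid escaped CSV field: " + value);
                }
            }
            unescaped.append(current);
        }
        return unescaped.toString();
    }

    public static void main(String[] args) {
        System.out.println(unescapeCsv("\"hello,netty\""));
        System.out.println(unescapeCsv("\"\"\"netty\"\"\""));
    }
}
```

The behaviour matches the cases in `StringUtilTest` above, e.g. `"\"\"\"netty\"\"\""` unescapes to `"\"netty\""`, and a lone `"\""` or bare `","` is rejected.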
netty/netty/4856_4857 | netty/netty | netty/netty/4856 | netty/netty/4857 | [
"timestamp(timedelta=10.0, similarity=0.9219346625170636)"
] | 36aa11937d661385461b4c1c488356347751e9f9 | 68bf2c1df4a87e8ac411c4d6383313dd96577624 | [
"@ejona86 - Thanks for reporting! See https://github.com/netty/netty/pull/4857\n"
] | [
"Wait ... is this right? I think we still need something here to make sure that the other endpoint doesn't know about the stream (see https://github.com/netty/netty/pull/4502). Or can we not get into `IDLE` via a remote-initiated stream?\n",
"#4502 was necessary because we used to queue the initial headers frames... | 2016-02-09T22:54:32Z | [
"defect"
] | http2: f990f99 broke sending RST_STREAM | In f990f99 (for #4758), Http2ConnectionHandler was changed from:
``` java
public ChannelFuture resetStream(...) {
...
if (stream.state() == IDLE || (connection().local().created(stream) && !stream.isHeaderSent())) {
// The other endpoint doesn't know about the stream yet, so we can't actually send
// the RST_STREAM frame. The HTTP/2 spec also disallows sending RST_STREAM for IDLE streams.
```
To:
``` java
public ChannelFuture resetStream(...) {
...
if (stream.state() == IDLE || connection().local().created(stream)) {
// The other endpoint doesn't know about the stream yet, so we can't actually send
// the RST_STREAM frame. The HTTP/2 spec also disallows sending RST_STREAM for IDLE streams.
```
However, the logic is now invalid, in that if the stream is locally created the RST_STREAM _won't_ be sent. Instead, the second part of the condition should be removed entirely, basing the check only on `stream.state() == IDLE`.
This makes it hard to update to CR2.
@Scottmitch
@nmittler
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java"
] | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java"
] | [
"codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionHandlerTest.java"
] | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java
index 8e98c3cfcd9..e8f92c14b70 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java
@@ -617,9 +617,8 @@ public ChannelFuture resetStream(final ChannelHandlerContext ctx, int streamId,
}
final ChannelFuture future;
- if (stream.state() == IDLE || connection().local().created(stream)) {
- // The other endpoint doesn't know about the stream yet, so we can't actually send
- // the RST_STREAM frame. The HTTP/2 spec also disallows sending RST_STREAM for IDLE streams.
+ if (stream.state() == IDLE) {
+ // We cannot write RST_STREAM frames on IDLE streams https://tools.ietf.org/html/rfc7540#section-6.4.
future = promise.setSuccess();
} else {
future = frameWriter().writeRstStream(ctx, streamId, errorCode, promise);
@@ -629,17 +628,16 @@ public ChannelFuture resetStream(final ChannelHandlerContext ctx, int streamId,
// from resulting in multiple reset frames being sent.
stream.resetSent();
- future.addListener(new ChannelFutureListener() {
- @Override
- public void operationComplete(ChannelFuture future) throws Exception {
- if (future.isSuccess()) {
- closeStream(stream, promise);
- } else {
- // The connection will be closed and so no need to change the resetSent flag to false.
- onConnectionError(ctx, future.cause(), null);
+ if (future.isDone()) {
+ processRstStreamWriteResult(ctx, stream, future);
+ } else {
+ future.addListener(new ChannelFutureListener() {
+ @Override
+ public void operationComplete(ChannelFuture future) throws Exception {
+ processRstStreamWriteResult(ctx, stream, future);
}
- }
- });
+ });
+ }
return future;
}
@@ -688,10 +686,10 @@ private void checkCloseConnection(ChannelFuture future) {
// If this connection is closing and the graceful shutdown has completed, close the connection
// once this operation completes.
if (closeListener != null && isGracefulShutdownComplete()) {
- ChannelFutureListener closeListener = Http2ConnectionHandler.this.closeListener;
+ ChannelFutureListener closeListener = this.closeListener;
// This method could be called multiple times
// and we don't want to notify the closeListener multiple times.
- Http2ConnectionHandler.this.closeListener = null;
+ this.closeListener = null;
try {
closeListener.operationComplete(future);
} catch (Exception e) {
@@ -717,6 +715,15 @@ private ChannelFuture goAway(ChannelHandlerContext ctx, Http2Exception cause) {
return goAway(ctx, lastKnownStream, errorCode, debugData, ctx.newPromise());
}
+ private void processRstStreamWriteResult(ChannelHandlerContext ctx, Http2Stream stream, ChannelFuture future) {
+ if (future.isSuccess()) {
+ closeStream(stream, future);
+ } else {
+ // The connection will be closed and so no need to change the resetSent flag to false.
+ onConnectionError(ctx, future.cause(), null);
+ }
+ }
+
/**
* Returns the client preface string if this is a client connection, otherwise returns {@code null}.
*/
| diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionHandlerTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionHandlerTest.java
index 5dcd0ccbc18..c17cc4d8819 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionHandlerTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionHandlerTest.java
@@ -43,6 +43,7 @@
import static io.netty.handler.codec.http2.Http2Error.PROTOCOL_ERROR;
import static io.netty.handler.codec.http2.Http2Error.STREAM_CLOSED;
import static io.netty.handler.codec.http2.Http2Stream.State.CLOSED;
+import static io.netty.handler.codec.http2.Http2Stream.State.IDLE;
import static io.netty.util.CharsetUtil.UTF_8;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;
@@ -321,6 +322,15 @@ public void writeRstOnClosedStreamShouldSucceed() throws Exception {
verify(frameWriter).writeRstStream(eq(ctx), eq(STREAM_ID), anyLong(), any(ChannelPromise.class));
}
+ @Test
+ public void writeRstOnIdleStreamShouldNotWriteButStillSucceed() throws Exception {
+ handler = newHandler();
+ when(stream.state()).thenReturn(IDLE);
+ handler.resetStream(ctx, STREAM_ID, STREAM_CLOSED.code(), promise);
+ verify(frameWriter, never()).writeRstStream(eq(ctx), eq(STREAM_ID), anyLong(), any(ChannelPromise.class));
+ verify(stream).close();
+ }
+
@SuppressWarnings("unchecked")
@Test
public void closeListenerShouldBeNotifiedOnlyOneTime() throws Exception {
| train | train | 2016-02-09T00:21:24 | 2016-02-09T19:52:25Z | ejona86 | val |
netty/netty/4838_4860 | netty/netty | netty/netty/4838 | netty/netty/4860 | [
"timestamp(timedelta=4.0, similarity=0.9871739659685546)"
] | cd56f87ca12664fdec3aba8b44ee425cac5cd4be | bf3a624cc5804be9e39e0f0805f4d11c4a06d3d3 | [
"@nmittler - FYI\n\nrelated to https://github.com/netty/netty/issues/4780#issuecomment-179849253\n",
"@Scottmitch yikes! Are you working on a patch?\n",
"@nmittler - I was planning on it, but haven't got around to it yet....does this one peek your interest?\n",
"pr pending\n"
] | [
"Why not just take a `ChannelPromise`?\n",
"Just curious ... not sure how Netty handles this case in general: what if `promise` is a void promise? If the caller tries to call `addListener` it will fail ... maybe we don't care, since they're the one that passed in the void promise?\n",
"Don't we have a light-wei... | 2016-02-10T23:51:03Z | [
"defect"
] | HTTP/2 codec may not always call Http2Connection$Listener.onStreamRemoved | If `Http2Connection$Listener.onStreamAdded` is called it is not always the case that `Http2Connection$Listener.onStreamRemoved` will be called. Some use cases (including `InboundHttp2ToHttpAdapter`) depend upon the `onStreamRemoved` method to be called in order to clean up allocated buffers.
[Http2ConnectionHandler.channelInactive](https://github.com/netty/netty/blob/4.1/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java#L180) currently only iterates over active streams, and this does not account for streams which may still exist but are not active.
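The gap above can be made concrete with a toy model (hypothetical classes, not Netty's actual API): a stream map where only some streams are active. Iterating the active set on close, as the old `channelInactive` did, never fires `onStreamRemoved` for idle streams; removing everything from the full map does.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy sketch only — these names are illustrative, not Netty's API.
public class StreamMapModel {
    public interface Listener { void onStreamRemoved(int id); }

    private final Map<Integer, Boolean> streamMap = new LinkedHashMap<>(); // id -> active?
    private final List<Listener> listeners = new ArrayList<>();

    public void addListener(Listener l) { listeners.add(l); }
    public void createStream(int id, boolean active) { streamMap.put(id, active); }

    // Mirrors the old channelInactive behavior: visit active streams only.
    public void closeActiveStreams() {
        List<Integer> active = new ArrayList<>();
        for (Map.Entry<Integer, Boolean> e : streamMap.entrySet()) {
            if (e.getValue()) {
                active.add(e.getKey());
            }
        }
        for (int id : active) {
            removeStream(id);
        }
    }

    // Mirrors the fix: remove *all* streams so every listener is notified.
    public void closeAllStreams() {
        for (int id : new ArrayList<>(streamMap.keySet())) {
            removeStream(id);
        }
    }

    private void removeStream(int id) {
        streamMap.remove(id);
        for (Listener l : listeners) {
            l.onStreamRemoved(id);
        }
    }

    public int remaining() { return streamMap.size(); }

    public static void main(String[] args) {
        StreamMapModel conn = new StreamMapModel();
        List<Integer> removed = new ArrayList<>();
        conn.addListener(removed::add);
        conn.createStream(3, true);   // active stream
        conn.createStream(5, false);  // idle stream: added, but never active
        conn.closeActiveStreams();
        // Stream 5 is still in the map and its listener was never told:
        System.out.println(removed + " remaining=" + conn.remaining()); // [3] remaining=1
        conn.closeAllStreams();
        System.out.println(removed + " remaining=" + conn.remaining()); // [3, 5] remaining=0
    }
}
```

This is why the fix routes `channelInactive` through a `connection().close(...)` that walks the whole stream map rather than only the active set.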
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Connection.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java"
] | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Connection.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java"
] | [
"codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionTest.java",
"codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionHandlerTest.java"
] | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
index 589546dbe1b..6dae971b4a5 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
@@ -15,28 +15,16 @@
package io.netty.handler.codec.http2;
-import static io.netty.handler.codec.http2.Http2CodecUtil.CONNECTION_STREAM_ID;
-import static io.netty.handler.codec.http2.Http2CodecUtil.DEFAULT_PRIORITY_WEIGHT;
-import static io.netty.handler.codec.http2.Http2CodecUtil.MAX_WEIGHT;
-import static io.netty.handler.codec.http2.Http2CodecUtil.MIN_WEIGHT;
-import static io.netty.handler.codec.http2.Http2Error.PROTOCOL_ERROR;
-import static io.netty.handler.codec.http2.Http2Error.REFUSED_STREAM;
-import static io.netty.handler.codec.http2.Http2Exception.closedStreamError;
-import static io.netty.handler.codec.http2.Http2Exception.connectionError;
-import static io.netty.handler.codec.http2.Http2Exception.streamError;
-import static io.netty.handler.codec.http2.Http2Stream.State.CLOSED;
-import static io.netty.handler.codec.http2.Http2Stream.State.HALF_CLOSED_LOCAL;
-import static io.netty.handler.codec.http2.Http2Stream.State.HALF_CLOSED_REMOTE;
-import static io.netty.handler.codec.http2.Http2Stream.State.IDLE;
-import static io.netty.handler.codec.http2.Http2Stream.State.OPEN;
-import static io.netty.handler.codec.http2.Http2Stream.State.RESERVED_LOCAL;
-import static io.netty.handler.codec.http2.Http2Stream.State.RESERVED_REMOTE;
-import static io.netty.util.internal.ObjectUtil.checkNotNull;
import io.netty.buffer.ByteBuf;
+import io.netty.channel.ChannelPromise;
import io.netty.handler.codec.http2.Http2Stream.State;
import io.netty.util.collection.IntCollections;
import io.netty.util.collection.IntObjectHashMap;
import io.netty.util.collection.IntObjectMap;
+import io.netty.util.collection.IntObjectMap.PrimitiveEntry;
+import io.netty.util.concurrent.Future;
+import io.netty.util.concurrent.FutureListener;
+import io.netty.util.concurrent.Promise;
import io.netty.util.internal.EmptyArrays;
import io.netty.util.internal.PlatformDependent;
import io.netty.util.internal.SystemPropertyUtil;
@@ -51,6 +39,25 @@
import java.util.List;
import java.util.Queue;
import java.util.Set;
+
+import static io.netty.handler.codec.http2.Http2CodecUtil.CONNECTION_STREAM_ID;
+import static io.netty.handler.codec.http2.Http2CodecUtil.DEFAULT_PRIORITY_WEIGHT;
+import static io.netty.handler.codec.http2.Http2CodecUtil.MAX_WEIGHT;
+import static io.netty.handler.codec.http2.Http2CodecUtil.MIN_WEIGHT;
+import static io.netty.handler.codec.http2.Http2Error.INTERNAL_ERROR;
+import static io.netty.handler.codec.http2.Http2Error.PROTOCOL_ERROR;
+import static io.netty.handler.codec.http2.Http2Error.REFUSED_STREAM;
+import static io.netty.handler.codec.http2.Http2Exception.closedStreamError;
+import static io.netty.handler.codec.http2.Http2Exception.connectionError;
+import static io.netty.handler.codec.http2.Http2Exception.streamError;
+import static io.netty.handler.codec.http2.Http2Stream.State.CLOSED;
+import static io.netty.handler.codec.http2.Http2Stream.State.HALF_CLOSED_LOCAL;
+import static io.netty.handler.codec.http2.Http2Stream.State.HALF_CLOSED_REMOTE;
+import static io.netty.handler.codec.http2.Http2Stream.State.IDLE;
+import static io.netty.handler.codec.http2.Http2Stream.State.OPEN;
+import static io.netty.handler.codec.http2.Http2Stream.State.RESERVED_LOCAL;
+import static io.netty.handler.codec.http2.Http2Stream.State.RESERVED_REMOTE;
+import static io.netty.util.internal.ObjectUtil.checkNotNull;
import static java.lang.Math.max;
/**
@@ -80,6 +87,7 @@ public class DefaultHttp2Connection implements Http2Connection {
*/
final List<Listener> listeners = new ArrayList<Listener>(4);
final ActiveStreams activeStreams;
+ Promise<Void> closePromise;
/**
* Creates a new connection with the given settings.
@@ -96,6 +104,71 @@ public DefaultHttp2Connection(boolean server) {
streamMap.put(connectionStream.id(), connectionStream);
}
+ /**
+ * Determine if {@link #close(Promise)} has been called and no more streams are allowed to be created.
+ */
+ final boolean isClosed() {
+ return closePromise != null;
+ }
+
+ @Override
+ public Future<Void> close(final Promise<Void> promise) {
+ checkNotNull(promise, "promise");
+ // Since we allow this method to be called multiple times, we must make sure that all the promises are notified
+ // when all streams are removed and the close operation completes.
+ if (closePromise != null) {
+ if (closePromise == promise) {
+ // Do nothing
+ } else if ((promise instanceof ChannelPromise) && ((ChannelPromise) closePromise).isVoid()) {
+ closePromise = promise;
+ } else {
+ closePromise.addListener(new FutureListener<Void>() {
+ @Override
+ public void operationComplete(Future<Void> future) throws Exception {
+ if (future.isSuccess()) {
+ promise.trySuccess(null);
+ } else if (future.isCancelled()) {
+ promise.cancel(false);
+ } else {
+ promise.tryFailure(future.cause());
+ }
+ }
+ });
+ }
+ } else {
+ closePromise = promise;
+ }
+ if (isStreamMapEmpty()) {
+ promise.trySuccess(null);
+ return promise;
+ }
+ Iterator<PrimitiveEntry<Http2Stream>> itr = streamMap.entries().iterator();
+ // We must take care while iterating the streamMap as to not modify while iterating in case there are other code
+ // paths iterating over the active streams.
+ if (activeStreams.allowModifications()) {
+ while (itr.hasNext()) {
+ Http2Stream stream = itr.next().value();
+ if (stream.id() != CONNECTION_STREAM_ID) {
+ // If modifications of the activeStream map is allowed, then a stream close operation will also
+ // modify the streamMap. We must prevent concurrent modifications to the streamMap, so use the
+ // iterator to remove the current stream.
+ itr.remove();
+ stream.close();
+ }
+ }
+ } else {
+ while (itr.hasNext()) {
+ Http2Stream stream = itr.next().value();
+ if (stream.id() != CONNECTION_STREAM_ID) {
+ // We are not allowed to make modifications, so the close calls will be executed after this
+ // iteration completes.
+ stream.close();
+ }
+ }
+ }
+ return closePromise;
+ }
+
@Override
public void addListener(Listener listener) {
listeners.add(listener);
@@ -208,6 +281,13 @@ public boolean visit(Http2Stream stream) {
}
}
+ /**
+ * Determine if {@link #streamMap} only contains the connection stream.
+ */
+ private boolean isStreamMapEmpty() {
+ return streamMap.size() == 1;
+ }
+
/**
* Closed streams may stay in the priority tree if they have dependents that are in prioritizable states.
* When a stream is requested to be removed we can only actually remove that stream when there are no more
@@ -220,7 +300,6 @@ public boolean visit(Http2Stream stream) {
void removeStream(DefaultStream stream) {
// [1] Check if this stream can be removed because it has no prioritizable descendants.
if (stream.parent().removeChild(stream)) {
- // Remove it from the map and priority tree.
streamMap.remove(stream.id());
for (int i = 0; i < listeners.size(); i++) {
@@ -230,6 +309,10 @@ void removeStream(DefaultStream stream) {
logger.error("Caught RuntimeException from listener onStreamRemoved.", e);
}
}
+
+ if (closePromise != null && isStreamMapEmpty()) {
+ closePromise.trySuccess(null);
+ }
}
}
@@ -604,17 +687,16 @@ final void takeChild(DefaultStream child, boolean exclusive, List<ParentChangedE
// path is updated with the correct child.prioritizableForTree() value. Note that the removal operation
// may not be successful and may return null. This is because when an exclusive dependency is processed
// the children are removed in a previous recursive call but the child's parent link is updated here.
- if (oldParent != null && oldParent.children.remove(child.id()) != null) {
- if (!child.isDescendantOf(oldParent)) {
- oldParent.decrementPrioritizableForTree(child.prioritizableForTree());
- if (oldParent.prioritizableForTree() == 0) {
- // There are a few risks with immediately removing nodes from the priority tree:
- // 1. We are removing nodes while we are potentially shifting the tree. There are no
- // concrete cases known but is risky because it could invalidate the data structure.
- // 2. We are notifying listeners of the removal while the tree is in flux. Currently the
- // codec listeners make no assumptions about priority tree structure when being notified.
- removeStream(oldParent);
- }
+ if (oldParent != null && oldParent.children.remove(child.id()) != null &&
+ !child.isDescendantOf(oldParent)) {
+ oldParent.decrementPrioritizableForTree(child.prioritizableForTree());
+ if (oldParent.prioritizableForTree() == 0) {
+ // There are a few risks with immediately removing nodes from the priority tree:
+ // 1. We are removing nodes while we are potentially shifting the tree. There are no
+ // concrete cases known but is risky because it could invalidate the data structure.
+ // 2. We are notifying listeners of the removal while the tree is in flux. Currently the
+ // codec listeners make no assumptions about priority tree structure when being notified.
+ removeStream(oldParent);
}
}
@@ -1040,6 +1122,10 @@ private void checkNewStreamAllowed(int streamId, State state) throws Http2Except
if ((state.localSideOpen() || state.remoteSideOpen()) && !canOpenStream()) {
throw connectionError(REFUSED_STREAM, "Maximum active streams violated for this endpoint.");
}
+ if (isClosed()) {
+ throw connectionError(INTERNAL_ERROR, "Attempted to create stream id %d after connection was closed",
+ streamId);
+ }
}
private boolean isLocal() {
@@ -1066,7 +1152,6 @@ interface Event {
* active streams in order to prevent modification while iterating.
*/
private final class ActiveStreams {
-
private final List<Listener> listeners;
private final Queue<Event> pendingEvents = new ArrayDeque<Event>(4);
private final Set<Http2Stream> streams = new LinkedHashSet<Http2Stream>();
@@ -1157,7 +1242,7 @@ void removeFromActiveStreams(DefaultStream stream) {
removeStream(stream);
}
- private boolean allowModifications() {
+ boolean allowModifications() {
return pendingIterations == 0;
}
}
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Connection.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Connection.java
index 380ab329a74..419e216982c 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Connection.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Connection.java
@@ -16,6 +16,8 @@
package io.netty.handler.codec.http2;
import io.netty.buffer.ByteBuf;
+import io.netty.util.concurrent.Future;
+import io.netty.util.concurrent.Promise;
/**
* Manager for the state of an HTTP/2 connection with the remote end-point.
@@ -292,6 +294,16 @@ interface Endpoint<F extends Http2FlowController> {
interface PropertyKey {
}
+ /**
+ * Close this connection. No more new streams can be created after this point and
+ * all streams that exists (active or otherwise) will be closed and removed.
+ * <p>Note if iterating active streams via {@link #forEachActiveStream(Http2StreamVisitor)} and an exception is
+ * thrown it is necessary to call this method again to ensure the close completes.
+ * @param promise Will be completed when all streams have been removed, and listeners have been notified.
+ * @return A future that will be completed when all streams have been removed, and listeners have been notified.
+ */
+ Future<Void> close(Promise<Void> promise);
+
/**
* Creates a new key that is unique within this {@link Http2Connection}.
*/
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java
index e8f92c14b70..5c975af55a1 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java
@@ -175,18 +175,9 @@ public void channelInactive(ChannelHandlerContext ctx) throws Exception {
encoder().close();
decoder().close();
- final Http2Connection connection = connection();
- // Check if there are streams to avoid the overhead of creating the ChannelFuture.
- if (connection.numActiveStreams() > 0) {
- final ChannelFuture future = ctx.newSucceededFuture();
- connection.forEachActiveStream(new Http2StreamVisitor() {
- @Override
- public boolean visit(Http2Stream stream) throws Http2Exception {
- closeStream(stream, future);
- return true;
- }
- });
- }
+ // We need to remove all streams (not just the active ones).
+ // See https://github.com/netty/netty/issues/4838.
+ connection().close(ctx.voidPromise());
}
/**
| diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionTest.java
index 2e68c39ff12..bb4ace388d9 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionTest.java
@@ -15,6 +15,33 @@
package io.netty.handler.codec.http2;
+import io.netty.buffer.ByteBuf;
+import io.netty.buffer.Unpooled;
+import io.netty.channel.DefaultEventLoopGroup;
+import io.netty.handler.codec.http2.Http2Connection.Endpoint;
+import io.netty.handler.codec.http2.Http2Stream.State;
+import io.netty.util.concurrent.Future;
+import io.netty.util.concurrent.FutureListener;
+import io.netty.util.concurrent.Promise;
+import io.netty.util.internal.PlatformDependent;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.ExpectedException;
+import org.mockito.ArgumentCaptor;
+import org.mockito.Mock;
+import org.mockito.MockitoAnnotations;
+import org.mockito.invocation.InvocationOnMock;
+import org.mockito.stubbing.Answer;
+
+import java.util.Arrays;
+import java.util.List;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicReference;
+
import static io.netty.handler.codec.http2.Http2CodecUtil.DEFAULT_PRIORITY_WEIGHT;
import static io.netty.handler.codec.http2.Http2CodecUtil.MIN_WEIGHT;
import static org.junit.Assert.assertEquals;
@@ -35,25 +62,6 @@
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;
-import io.netty.buffer.ByteBuf;
-import io.netty.buffer.Unpooled;
-import io.netty.handler.codec.http2.Http2Connection.Endpoint;
-import io.netty.handler.codec.http2.Http2Stream.State;
-import io.netty.util.internal.PlatformDependent;
-import org.junit.Before;
-import org.junit.Rule;
-import org.junit.Test;
-import org.junit.rules.ExpectedException;
-import org.mockito.ArgumentCaptor;
-import org.mockito.Mock;
-import org.mockito.MockitoAnnotations;
-import org.mockito.invocation.InvocationOnMock;
-import org.mockito.stubbing.Answer;
-
-import java.util.Arrays;
-import java.util.List;
-import java.util.concurrent.atomic.AtomicReference;
-
/**
* Tests for {@link DefaultHttp2Connection}.
*/
@@ -63,6 +71,7 @@ public class DefaultHttp2ConnectionTest {
private DefaultHttp2Connection server;
private DefaultHttp2Connection client;
+ private static DefaultEventLoopGroup group;
@Mock
private Http2Connection.Listener clientListener;
@@ -70,6 +79,16 @@ public class DefaultHttp2ConnectionTest {
@Mock
private Http2Connection.Listener clientListener2;
+ @BeforeClass
+ public static void beforeClass() {
+ group = new DefaultEventLoopGroup(2);
+ }
+
+ @AfterClass
+ public static void afterClass() {
+ group.shutdownGracefully();
+ }
+
@Before
public void setup() {
MockitoAnnotations.initMocks(this);
@@ -84,6 +103,110 @@ public void getStreamWithoutStreamShouldReturnNull() {
assertNull(server.stream(100));
}
+ @Test
+ public void removeAllStreamsWithEmptyStreams() throws InterruptedException {
+ testRemoveAllStreams();
+ }
+
+ @Test
+ public void removeAllStreamsWithJustOneLocalStream() throws InterruptedException, Http2Exception {
+ client.local().createStream(3, false);
+ testRemoveAllStreams();
+ }
+
+ @Test
+ public void removeAllStreamsWithJustOneRemoveStream() throws InterruptedException, Http2Exception {
+ client.remote().createStream(2, false);
+ testRemoveAllStreams();
+ }
+
+ @Test
+ public void removeAllStreamsWithManyActiveStreams() throws InterruptedException, Http2Exception {
+ Endpoint<Http2RemoteFlowController> remote = client.remote();
+ Endpoint<Http2LocalFlowController> local = client.local();
+ for (int c = 3, s = 2; c < 5000; c += 2, s += 2) {
+ local.createStream(c, false);
+ remote.createStream(s, false);
+ }
+ testRemoveAllStreams();
+ }
+
+ @Test
+ public void removeAllStreamsWithNonActiveStreams() throws InterruptedException, Http2Exception {
+ client.local().createIdleStream(3);
+ client.remote().createIdleStream(2);
+ testRemoveAllStreams();
+ }
+
+ @Test
+ public void removeAllStreamsWithNonActiveAndActiveStreams() throws InterruptedException, Http2Exception {
+ client.local().createIdleStream(3);
+ client.remote().createIdleStream(2);
+ client.local().createStream(5, false);
+ client.remote().createStream(4, true);
+ testRemoveAllStreams();
+ }
+
+ @Test
+ public void removeAllStreamsWhileIteratingActiveStreams() throws InterruptedException, Http2Exception {
+ final Endpoint<Http2RemoteFlowController> remote = client.remote();
+ final Endpoint<Http2LocalFlowController> local = client.local();
+ for (int c = 3, s = 2; c < 5000; c += 2, s += 2) {
+ local.createStream(c, false);
+ remote.createStream(s, false);
+ }
+ final Promise<Void> promise = group.next().newPromise();
+ final CountDownLatch latch = new CountDownLatch(client.numActiveStreams());
+ client.forEachActiveStream(new Http2StreamVisitor() {
+ @Override
+ public boolean visit(Http2Stream stream) throws Http2Exception {
+ client.close(promise).addListener(new FutureListener<Void>() {
+ @Override
+ public void operationComplete(Future<Void> future) throws Exception {
+ assertTrue(promise.isDone());
+ latch.countDown();
+ }
+ });
+ return true;
+ }
+ });
+ assertTrue(latch.await(2, TimeUnit.SECONDS));
+ }
+
+ @Test
+ public void removeAllStreamsWhileIteratingActiveStreamsAndExceptionOccurs()
+ throws InterruptedException, Http2Exception {
+ final Endpoint<Http2RemoteFlowController> remote = client.remote();
+ final Endpoint<Http2LocalFlowController> local = client.local();
+ for (int c = 3, s = 2; c < 5000; c += 2, s += 2) {
+ local.createStream(c, false);
+ remote.createStream(s, false);
+ }
+ final Promise<Void> promise = group.next().newPromise();
+ final CountDownLatch latch = new CountDownLatch(1);
+ try {
+ client.forEachActiveStream(new Http2StreamVisitor() {
+ @Override
+ public boolean visit(Http2Stream stream) throws Http2Exception {
+ // This close call is basically a noop, because the following statement will throw an exception.
+ client.close(promise);
+ // Do an invalid operation while iterating.
+ remote.createStream(3, false);
+ return true;
+ }
+ });
+ } catch (Http2Exception ignored) {
+ client.close(promise).addListener(new FutureListener<Void>() {
+ @Override
+ public void operationComplete(Future<Void> future) throws Exception {
+ assertTrue(promise.isDone());
+ latch.countDown();
+ }
+ });
+ }
+ assertTrue(latch.await(2, TimeUnit.SECONDS));
+ }
+
@Test
public void goAwayReceivedShouldCloseStreamsGreaterThanLastStream() throws Exception {
Http2Stream stream1 = client.local().createStream(3, false);
@@ -1107,6 +1230,19 @@ public void listenerThrowShouldNotPreventOtherListenersFromBeingNotified() throw
}
}
+ private void testRemoveAllStreams() throws InterruptedException {
+ final CountDownLatch latch = new CountDownLatch(1);
+ final Promise<Void> promise = group.next().newPromise();
+ client.close(promise).addListener(new FutureListener<Void>() {
+ @Override
+ public void operationComplete(Future<Void> future) throws Exception {
+ assertTrue(promise.isDone());
+ latch.countDown();
+ }
+ });
+ assertTrue(latch.await(2, TimeUnit.SECONDS));
+ }
+
private void incrementAndGetStreamShouldRespectOverflow(Endpoint<?> endpoint, int streamId) throws Http2Exception {
assertTrue(streamId > 0);
try {
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionHandlerTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionHandlerTest.java
index c17cc4d8819..ed9d1c06ea4 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionHandlerTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionHandlerTest.java
@@ -26,6 +26,7 @@
import io.netty.channel.DefaultChannelPromise;
import io.netty.util.concurrent.EventExecutor;
import io.netty.util.concurrent.GenericFutureListener;
+import io.netty.util.concurrent.Promise;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
@@ -269,11 +270,12 @@ public void verifyChannelHandlerCanBeReusedInPipeline() throws Exception {
verify(decoder, atLeastOnce()).decodeFrame(eq(ctx), any(ByteBuf.class), Matchers.<List<Object>>any());
}
+ @SuppressWarnings("unchecked")
@Test
public void channelInactiveShouldCloseStreams() throws Exception {
handler = newHandler();
handler.channelInactive(ctx);
- verify(stream).close();
+ verify(connection).close(any(Promise.class));
}
@Test
| train | train | 2016-02-11T01:48:39 | 2016-02-04T13:58:06Z | Scottmitch | val |
netty/netty/4855_4862 | netty/netty | netty/netty/4855 | netty/netty/4862 | [
"timestamp(timedelta=44.0, similarity=0.8687496447467746)"
] | ccb08706003b4aed2a6a95d009f7fd736f723b23 | cac9134590198a9ea56fd275cd4730e5dac07ffb | [] | [
"maybe use some lower capacity as the default one ?\n",
"consider optimistically incrementing `i` here (`++i`) and decrement if found to be invalid `throw newInvalidEscapedCsvFieldException(value, i - 1);`\n"
] | 2016-02-11T04:33:14Z | [
"feature"
] | CombinedHttpHeaders unescape csv values | From https://github.com/netty/netty/issues/3435#issuecomment-74892285 (unescape was added by https://github.com/netty/netty/pull/4853) we should add unescape methods to [CombinedHttpHeaders](https://github.com/netty/netty/blob/4.1/codec-http/src/main/java/io/netty/handler/codec/http/CombinedHttpHeaders.java).
> Just as #3271 added the ability to automatically combine and CSV escape values associated with the same header name; we should also add the ability to automatically unescape values returned by the accessor methods. For example CharSequence get(Charsequence name) in DefaultTextHeaders should be unescaping the values that are returned (under some condition...which may be the same condition that activates escaping).
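To make the requested unescaping concrete, here is a minimal sketch of RFC 4180 field splitting: unquoted commas separate fields, quoted fields may contain commas, and a doubled `""` inside quotes collapses to one `"`. The class and method names are illustrative only, and unlike Netty's eventual `StringUtil.unescapeCsvFields` this sketch skips validation of malformed input.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch — simplified RFC 4180 unescaping, no input validation.
public class CsvUnescapeSketch {
    public static List<String> unescapeCsvFields(String value) {
        List<String> fields = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        boolean quoted = false;
        for (int i = 0; i < value.length(); i++) {
            char c = value.charAt(i);
            if (quoted) {
                if (c == '"') {
                    if (i + 1 < value.length() && value.charAt(i + 1) == '"') {
                        current.append('"'); // "" inside quotes unescapes to "
                        i++;
                    } else {
                        quoted = false;      // closing quote of this field
                    }
                } else {
                    current.append(c);
                }
            } else if (c == ',') {
                fields.add(current.toString()); // unquoted comma ends a field
                current.setLength(0);
            } else if (c == '"' && current.length() == 0) {
                quoted = true;               // opening quote of a field
            } else {
                current.append(c);
            }
        }
        fields.add(current.toString());
        return fields;
    }

    public static void main(String[] args) {
        // A combined header value with an embedded comma and an escaped quote:
        System.out.println(unescapeCsvFields("a,\"b,c\",\"d\"\"e\"")); // [a, b,c, d"e]
    }
}
```

This is the inverse of the escaping added by #3271: a `getAll` on a combined header can split the single stored value back into the logical fields the caller originally added.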
| [
"codec-http/src/main/java/io/netty/handler/codec/http/CombinedHttpHeaders.java",
"common/src/main/java/io/netty/util/internal/StringUtil.java"
] | [
"codec-http/src/main/java/io/netty/handler/codec/http/CombinedHttpHeaders.java",
"common/src/main/java/io/netty/util/internal/StringUtil.java"
] | [
"codec-http/src/test/java/io/netty/handler/codec/http/CombinedHttpHeadersTest.java",
"common/src/test/java/io/netty/util/internal/StringUtilTest.java"
] | diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/CombinedHttpHeaders.java b/codec-http/src/main/java/io/netty/handler/codec/http/CombinedHttpHeaders.java
index 2c639176071..79c5fda97a0 100644
--- a/codec-http/src/main/java/io/netty/handler/codec/http/CombinedHttpHeaders.java
+++ b/codec-http/src/main/java/io/netty/handler/codec/http/CombinedHttpHeaders.java
@@ -23,6 +23,7 @@
import java.util.Collection;
import java.util.Iterator;
+import java.util.List;
import java.util.Map;
import static io.netty.util.AsciiString.CASE_INSENSITIVE_HASHER;
@@ -77,6 +78,18 @@ public CombinedHttpHeadersImpl(HashingStrategy<CharSequence> nameHashingStrategy
super(nameHashingStrategy, valueConverter, nameValidator);
}
+ @Override
+ public List<CharSequence> getAll(CharSequence name) {
+ List<CharSequence> values = super.getAll(name);
+ if (values.isEmpty()) {
+ return values;
+ }
+ if (values.size() != 1) {
+ throw new IllegalStateException("CombinedHttpHeaders should only have one value");
+ }
+ return StringUtil.unescapeCsvFields(values.get(0));
+ }
+
@Override
public CombinedHttpHeadersImpl add(Headers<? extends CharSequence, ? extends CharSequence, ?> headers) {
// Override the fast-copy mechanism used by DefaultHeaders
@@ -158,6 +171,12 @@ public CombinedHttpHeadersImpl set(CharSequence name, Iterable<? extends CharSeq
return this;
}
+ @Override
+ public CombinedHttpHeadersImpl setObject(CharSequence name, Object value) {
+ super.set(name, commaSeparate(objectEscaper(), value));
+ return this;
+ }
+
@Override
public CombinedHttpHeadersImpl setObject(CharSequence name, Object... values) {
super.set(name, commaSeparate(objectEscaper(), values));
diff --git a/common/src/main/java/io/netty/util/internal/StringUtil.java b/common/src/main/java/io/netty/util/internal/StringUtil.java
index e60c734bb00..10461e348d5 100644
--- a/common/src/main/java/io/netty/util/internal/StringUtil.java
+++ b/common/src/main/java/io/netty/util/internal/StringUtil.java
@@ -16,6 +16,7 @@
package io.netty.util.internal;
import java.io.IOException;
+import java.util.ArrayList;
import java.util.Formatter;
import java.util.List;
@@ -425,6 +426,76 @@ public static CharSequence unescapeCsv(CharSequence value) {
}
/**
+ * Unescapes the specified escaped CSV fields according to
+ * <a href="https://tools.ietf.org/html/rfc4180#section-2">RFC-4180</a>.
+ *
+ * @param value A string with multiple CSV escaped fields which will be unescaped according to
+ * <a href="https://tools.ietf.org/html/rfc4180#section-2">RFC-4180</a>
+ * @return {@link List} the list of unescaped fields
+ */
+ public static List<CharSequence> unescapeCsvFields(CharSequence value) {
+ List<CharSequence> unescaped = new ArrayList<CharSequence>(2);
+ StringBuilder current = InternalThreadLocalMap.get().stringBuilder();
+ boolean quoted = false;
+ int last = value.length() - 1;
+ for (int i = 0; i <= last; i++) {
+ char c = value.charAt(i);
+ if (quoted) {
+ switch (c) {
+ case DOUBLE_QUOTE:
+ if (i == last) {
+ // Add the last field and return
+ unescaped.add(current.toString());
+ return unescaped;
+ }
+ char next = value.charAt(++i);
+ if (next == DOUBLE_QUOTE) {
+ // 2 double-quotes should be unescaped to one
+ current.append(DOUBLE_QUOTE);
+ break;
+ }
+ if (next == COMMA) {
+ // This is the end of a field. Let's start to parse the next field.
+ quoted = false;
+ unescaped.add(current.toString());
+ current.setLength(0);
+ break;
+ }
+ // double-quote followed by other character is invalid
+ throw newInvalidEscapedCsvFieldException(value, i - 1);
+ default:
+ current.append(c);
+ }
+ } else {
+ switch (c) {
+ case COMMA:
+ // Start to parse the next field
+ unescaped.add(current.toString());
+ current.setLength(0);
+ break;
+ case DOUBLE_QUOTE:
+ if (current.length() == 0) {
+ quoted = true;
+ break;
+ }
+ // double-quote appears without being enclosed with double-quotes
+ case LINE_FEED:
+ case CARRIAGE_RETURN:
+ // special characters appears without being enclosed with double-quotes
+ throw newInvalidEscapedCsvFieldException(value, i);
+ default:
+ current.append(c);
+ }
+ }
+ }
+ if (quoted) {
+ throw newInvalidEscapedCsvFieldException(value, last);
+ }
+ unescaped.add(current.toString());
+ return unescaped;
+ }
+
+    /**
* Validate if {@code value} is a valid csv field without double-quotes.
*
* @throws IllegalArgumentException if {@code value} needs to be encoded with double-quotes.
| diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/CombinedHttpHeadersTest.java b/codec-http/src/test/java/io/netty/handler/codec/http/CombinedHttpHeadersTest.java
index 59ae6b544d9..6ec9bfaf36a 100644
--- a/codec-http/src/test/java/io/netty/handler/codec/http/CombinedHttpHeadersTest.java
+++ b/codec-http/src/test/java/io/netty/handler/codec/http/CombinedHttpHeadersTest.java
@@ -18,6 +18,7 @@
import io.netty.handler.codec.http.HttpHeadersTestUtils.HeaderValue;
import org.junit.Test;
+import java.util.Arrays;
import java.util.Collections;
import static io.netty.util.AsciiString.contentEquals;
@@ -101,7 +102,7 @@ public void addCharSequencesCsvWithValueContainingComma() {
final CombinedHttpHeaders headers = newCombinedHttpHeaders();
headers.add(HEADER_NAME, HeaderValue.SIX_QUOTED.subset(4));
assertTrue(contentEquals(HeaderValue.SIX_QUOTED.subsetAsCsvString(4), headers.get(HEADER_NAME)));
- assertTrue(contentEquals(HeaderValue.SIX_QUOTED.subsetAsCsvString(4), headers.getAll(HEADER_NAME).get(0)));
+ assertEquals(HeaderValue.SIX_QUOTED.subset(4), headers.getAll(HEADER_NAME));
}
@Test
@@ -109,7 +110,7 @@ public void addCharSequencesCsvWithValueContainingCommas() {
final CombinedHttpHeaders headers = newCombinedHttpHeaders();
headers.add(HEADER_NAME, HeaderValue.EIGHT.subset(6));
assertTrue(contentEquals(HeaderValue.EIGHT.subsetAsCsvString(6), headers.get(HEADER_NAME)));
- assertTrue(contentEquals(HeaderValue.EIGHT.subsetAsCsvString(6), headers.getAll(HEADER_NAME).get(0)));
+ assertEquals(HeaderValue.EIGHT.subset(6), headers.getAll(HEADER_NAME));
}
@Test (expected = NullPointerException.class)
@@ -168,7 +169,7 @@ public void addIterableCsvSingleValue() {
public void addIterableCsvEmtpy() {
final CombinedHttpHeaders headers = newCombinedHttpHeaders();
headers.add(HEADER_NAME, Collections.<CharSequence>emptyList());
- assertTrue(contentEquals("", headers.getAll(HEADER_NAME).get(0)));
+ assertEquals(Arrays.asList(""), headers.getAll(HEADER_NAME));
}
@Test
@@ -234,7 +235,7 @@ private static CombinedHttpHeaders newCombinedHttpHeaders() {
private static void assertCsvValues(final CombinedHttpHeaders headers, final HeaderValue headerValue) {
assertTrue(contentEquals(headerValue.asCsv(), headers.get(HEADER_NAME)));
- assertTrue(contentEquals(headerValue.asCsv(), headers.getAll(HEADER_NAME).get(0)));
+ assertEquals(headerValue.asList(), headers.getAll(HEADER_NAME));
}
private static void assertCsvValue(final CombinedHttpHeaders headers, final HeaderValue headerValue) {
@@ -253,4 +254,21 @@ private static void addObjectValues(final CombinedHttpHeaders headers, HeaderVal
headers.add(HEADER_NAME, v.toString());
}
}
+
+ @Test
+ public void testGetAll() {
+ final CombinedHttpHeaders headers = newCombinedHttpHeaders();
+ headers.set(HEADER_NAME, Arrays.asList("a", "b", "c"));
+ assertEquals(Arrays.asList("a", "b", "c"), headers.getAll(HEADER_NAME));
+ headers.set(HEADER_NAME, Arrays.asList("a,", "b,", "c,"));
+ assertEquals(Arrays.asList("a,", "b,", "c,"), headers.getAll(HEADER_NAME));
+ headers.set(HEADER_NAME, Arrays.asList("a\"", "b\"", "c\""));
+ assertEquals(Arrays.asList("a\"", "b\"", "c\""), headers.getAll(HEADER_NAME));
+ headers.set(HEADER_NAME, Arrays.asList("\"a\"", "\"b\"", "\"c\""));
+ assertEquals(Arrays.asList("a", "b", "c"), headers.getAll(HEADER_NAME));
+ headers.set(HEADER_NAME, "a,b,c");
+ assertEquals(Arrays.asList("a,b,c"), headers.getAll(HEADER_NAME));
+ headers.set(HEADER_NAME, "\"a,b,c\"");
+ assertEquals(Arrays.asList("a,b,c"), headers.getAll(HEADER_NAME));
+ }
}
diff --git a/common/src/test/java/io/netty/util/internal/StringUtilTest.java b/common/src/test/java/io/netty/util/internal/StringUtilTest.java
index 3084094f00a..1ee27e59ffa 100644
--- a/common/src/test/java/io/netty/util/internal/StringUtilTest.java
+++ b/common/src/test/java/io/netty/util/internal/StringUtilTest.java
@@ -15,6 +15,7 @@
*/
package io.netty.util.internal;
+import java.util.Arrays;
import org.junit.Test;
import static io.netty.util.internal.StringUtil.*;
@@ -376,6 +377,47 @@ private void assertEscapeCsvAndUnEscapeCsv(String value) {
assertEquals(value, unescapeCsv(StringUtil.escapeCsv(value)));
}
+ @Test
+ public void testUnescapeCsvFields() {
+ assertEquals(Arrays.asList(""), unescapeCsvFields(""));
+ assertEquals(Arrays.asList("", ""), unescapeCsvFields(","));
+ assertEquals(Arrays.asList("a", ""), unescapeCsvFields("a,"));
+ assertEquals(Arrays.asList("", "a"), unescapeCsvFields(",a"));
+ assertEquals(Arrays.asList("\""), unescapeCsvFields("\"\"\"\""));
+ assertEquals(Arrays.asList("\"", "\""), unescapeCsvFields("\"\"\"\",\"\"\"\""));
+ assertEquals(Arrays.asList("netty"), unescapeCsvFields("netty"));
+ assertEquals(Arrays.asList("hello", "netty"), unescapeCsvFields("hello,netty"));
+ assertEquals(Arrays.asList("hello,netty"), unescapeCsvFields("\"hello,netty\""));
+ assertEquals(Arrays.asList("hello", "netty"), unescapeCsvFields("\"hello\",\"netty\""));
+ assertEquals(Arrays.asList("a\"b", "c\"d"), unescapeCsvFields("\"a\"\"b\",\"c\"\"d\""));
+ assertEquals(Arrays.asList("a\rb", "c\nd"), unescapeCsvFields("\"a\rb\",\"c\nd\""));
+ }
+
+ @Test(expected = IllegalArgumentException.class)
+ public void unescapeCsvFieldsWithCRWithoutQuote() {
+ unescapeCsvFields("a,\r");
+ }
+
+ @Test(expected = IllegalArgumentException.class)
+ public void unescapeCsvFieldsWithLFWithoutQuote() {
+ unescapeCsvFields("a,\r");
+ }
+
+ @Test(expected = IllegalArgumentException.class)
+ public void unescapeCsvFieldsWithQuote() {
+ unescapeCsvFields("a,\"");
+ }
+
+ @Test(expected = IllegalArgumentException.class)
+ public void unescapeCsvFieldsWithQuote2() {
+ unescapeCsvFields("\",a");
+ }
+
+ @Test(expected = IllegalArgumentException.class)
+ public void unescapeCsvFieldsWithQuote3() {
+ unescapeCsvFields("a\"b,a");
+ }
+
@Test
public void testSimpleClassName() throws Exception {
testSimpleClassName(String.class);
| test | train | 2016-02-15T05:37:37 | 2016-02-08T22:39:54Z | Scottmitch | val |
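The `unescapeCsvFields` patch in the record above walks the input once, tracking whether the scanner is inside a double-quoted field, splitting on unquoted commas, and collapsing doubled quotes. The same RFC-4180 unescaping logic can be sketched as a standalone method — the class name `CsvUnescape` and the plain `String`/`List<String>` types are ours for illustration; the real patch operates on `CharSequence` inside `io.netty.util.internal.StringUtil`:

```java
import java.util.ArrayList;
import java.util.List;

public final class CsvUnescape {

    // Single-pass RFC-4180 unescape, mirroring the loop added by the patch.
    public static List<String> unescapeCsvFields(String value) {
        List<String> out = new ArrayList<>();
        StringBuilder cur = new StringBuilder();
        boolean quoted = false;
        int last = value.length() - 1;
        for (int i = 0; i <= last; i++) {
            char c = value.charAt(i);
            if (quoted) {
                if (c == '"') {
                    if (i == last) {                 // closing quote ends the input
                        out.add(cur.toString());
                        return out;
                    }
                    char next = value.charAt(++i);
                    if (next == '"') {               // "" unescapes to a single "
                        cur.append('"');
                    } else if (next == ',') {        // end of this quoted field
                        quoted = false;
                        out.add(cur.toString());
                        cur.setLength(0);
                    } else {                         // quote followed by anything else
                        throw new IllegalArgumentException("invalid escape at index " + (i - 1));
                    }
                } else {
                    cur.append(c);
                }
            } else {
                switch (c) {
                    case ',':                        // unquoted comma: field boundary
                        out.add(cur.toString());
                        cur.setLength(0);
                        break;
                    case '"':
                        if (cur.length() == 0) {     // quote may only open a field
                            quoted = true;
                            break;
                        }
                        // otherwise fall through: quote inside an unquoted field
                    case '\n':
                    case '\r':                       // CR/LF must be quoted
                        throw new IllegalArgumentException("invalid character at index " + i);
                    default:
                        cur.append(c);
                }
            }
        }
        if (quoted) {                                // quoted field never closed
            throw new IllegalArgumentException("unterminated quoted field");
        }
        out.add(cur.toString());
        return out;
    }

    public static void main(String[] args) {
        System.out.println(unescapeCsvFields("\"hello,netty\",world"));
    }
}
```

As in the patch, a double-quote inside an unquoted field, an unterminated quoted field, or a bare CR/LF all raise `IllegalArgumentException`.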
netty/netty/4881_4883 | netty/netty | netty/netty/4881 | netty/netty/4883 | [
"timestamp(timedelta=5583.0, similarity=0.8777541916621916)"
] | 41d0a816912f32ff0405882cca347682549a709d | 628cb284c87767ad377d75dfe43ed237fc0c6505 | [
"@marcuslinke because we not implemented it yet ;) I will take care... Thanks for reporting\n",
"Fixed by #4883\n"
] | [
"do we need this change ?\n",
"future -> promise\n",
"do we need this ?\n",
"do we need this ?\n",
"found another way, so don't need `public` anymore, fixed\n",
"done\n",
"no, fixed\n",
"no, fixed\n"
] | 2016-02-18T16:55:31Z | [
"feature"
] | 4.1.0.CR2: Missing shutdownOutput() method for EpollDomainSocketChannel | I wonder why there is no such public method? Is there any reason for that? I'm looking for an equivalent for `((NioSocketChannel)channel).shutdownOutput();`
See: https://github.com/docker-java/docker-java/issues/471
| [
"transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollStreamChannel.java",
"transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollSocketChannel.java",
"transport-native-epoll/src/main/java/io/netty/channel/unix/DomainSocketChannel.java",
"transport/src/main/java/io/netty/channel... | [
"transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollStreamChannel.java",
"transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollSocketChannel.java",
"transport-native-epoll/src/main/java/io/netty/channel/unix/DomainSocketChannel.java",
"transport/src/main/java/io/netty/channel... | [] | diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollStreamChannel.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollStreamChannel.java
index 612afce5efb..78aa2ef8009 100644
--- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollStreamChannel.java
+++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollStreamChannel.java
@@ -29,6 +29,7 @@
import io.netty.channel.DefaultFileRegion;
import io.netty.channel.EventLoop;
import io.netty.channel.RecvByteBufAllocator;
+import io.netty.channel.socket.DuplexChannel;
import io.netty.channel.unix.FileDescriptor;
import io.netty.channel.unix.Socket;
import io.netty.util.internal.EmptyArrays;
@@ -44,13 +45,14 @@
import java.nio.ByteBuffer;
import java.nio.channels.ClosedChannelException;
import java.util.Queue;
+import java.util.concurrent.Executor;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import static io.netty.channel.unix.FileDescriptor.pipe;
import static io.netty.util.internal.ObjectUtil.checkNotNull;
-public abstract class AbstractEpollStreamChannel extends AbstractEpollChannel {
+public abstract class AbstractEpollStreamChannel extends AbstractEpollChannel implements DuplexChannel {
private static final String EXPECTED_TYPES =
" (expected: " + StringUtil.simpleClassName(ByteBuf.class) + ", " +
@@ -537,6 +539,47 @@ protected void shutdownOutput0(final ChannelPromise promise) {
}
}
+ @Override
+ public boolean isInputShutdown() {
+ return fd().isInputShutdown();
+ }
+
+ @Override
+ public boolean isOutputShutdown() {
+ return fd().isOutputShutdown();
+ }
+
+ @Override
+ public ChannelFuture shutdownOutput() {
+ return shutdownOutput(newPromise());
+ }
+
+ @Override
+ public ChannelFuture shutdownOutput(final ChannelPromise promise) {
+ Executor closeExecutor = ((EpollStreamUnsafe) unsafe()).prepareToClose();
+ if (closeExecutor != null) {
+ closeExecutor.execute(new OneTimeTask() {
+ @Override
+ public void run() {
+ shutdownOutput0(promise);
+ }
+ });
+ } else {
+ EventLoop loop = eventLoop();
+ if (loop.inEventLoop()) {
+ shutdownOutput0(promise);
+ } else {
+ loop.execute(new OneTimeTask() {
+ @Override
+ public void run() {
+ shutdownOutput0(promise);
+ }
+ });
+ }
+ }
+ return promise;
+ }
+
@Override
protected void doClose() throws Exception {
try {
@@ -610,6 +653,13 @@ private void safeClosePipe(FileDescriptor fd) {
}
class EpollStreamUnsafe extends AbstractEpollUnsafe {
+
+ // Overridden here just to be able to access this method from AbstractEpollStreamChannel
+ @Override
+ protected Executor prepareToClose() {
+ return super.prepareToClose();
+ }
+
private void handleReadException(ChannelPipeline pipeline, ByteBuf byteBuf, Throwable cause, boolean close) {
if (byteBuf != null) {
if (byteBuf.isReadable()) {
diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollSocketChannel.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollSocketChannel.java
index 4fe6b8a9f45..d0c90ea231c 100644
--- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollSocketChannel.java
+++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollSocketChannel.java
@@ -16,15 +16,11 @@
package io.netty.channel.epoll;
import io.netty.channel.Channel;
-import io.netty.channel.ChannelFuture;
-import io.netty.channel.ChannelPromise;
-import io.netty.channel.EventLoop;
import io.netty.channel.socket.ServerSocketChannel;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.unix.FileDescriptor;
import io.netty.channel.unix.Socket;
import io.netty.util.concurrent.GlobalEventExecutor;
-import io.netty.util.internal.OneTimeTask;
import io.netty.util.internal.PlatformDependent;
import java.net.InetAddress;
@@ -143,47 +139,6 @@ public EpollSocketChannelConfig config() {
return config;
}
- @Override
- public boolean isInputShutdown() {
- return fd().isInputShutdown();
- }
-
- @Override
- public boolean isOutputShutdown() {
- return fd().isOutputShutdown();
- }
-
- @Override
- public ChannelFuture shutdownOutput() {
- return shutdownOutput(newPromise());
- }
-
- @Override
- public ChannelFuture shutdownOutput(final ChannelPromise promise) {
- Executor closeExecutor = ((EpollSocketChannelUnsafe) unsafe()).prepareToClose();
- if (closeExecutor != null) {
- closeExecutor.execute(new OneTimeTask() {
- @Override
- public void run() {
- shutdownOutput0(promise);
- }
- });
- } else {
- EventLoop loop = eventLoop();
- if (loop.inEventLoop()) {
- shutdownOutput0(promise);
- } else {
- loop.execute(new OneTimeTask() {
- @Override
- public void run() {
- shutdownOutput0(promise);
- }
- });
- }
- }
- return promise;
- }
-
@Override
public ServerSocketChannel parent() {
return (ServerSocketChannel) super.parent();
diff --git a/transport-native-epoll/src/main/java/io/netty/channel/unix/DomainSocketChannel.java b/transport-native-epoll/src/main/java/io/netty/channel/unix/DomainSocketChannel.java
index f8163d91342..9a6c08b584b 100644
--- a/transport-native-epoll/src/main/java/io/netty/channel/unix/DomainSocketChannel.java
+++ b/transport-native-epoll/src/main/java/io/netty/channel/unix/DomainSocketChannel.java
@@ -15,11 +15,13 @@
*/
package io.netty.channel.unix;
+import io.netty.channel.socket.DuplexChannel;
+
/**
* A {@link UnixChannel} that supports communication via
* <a href="http://en.wikipedia.org/wiki/Unix_domain_socket">Unix Domain Socket</a>.
*/
-public interface DomainSocketChannel extends UnixChannel {
+public interface DomainSocketChannel extends UnixChannel, DuplexChannel {
@Override
DomainSocketAddress remoteAddress();
diff --git a/transport/src/main/java/io/netty/channel/socket/DuplexChannel.java b/transport/src/main/java/io/netty/channel/socket/DuplexChannel.java
new file mode 100644
index 00000000000..d34ec36bff1
--- /dev/null
+++ b/transport/src/main/java/io/netty/channel/socket/DuplexChannel.java
@@ -0,0 +1,51 @@
+/*
+ * Copyright 2016 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License,
+ * version 2.0 (the "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+package io.netty.channel.socket;
+
+import io.netty.channel.Channel;
+import io.netty.channel.ChannelFuture;
+import io.netty.channel.ChannelPromise;
+
+import java.net.Socket;
+
+/**
+ * A duplex {@link Channel} that has two sides that can be shutdown independently.
+ */
+public interface DuplexChannel extends Channel {
+ /**
+ * Returns {@code true} if and only if the remote peer shut down its output so that no more
+ * data is received from this channel. Note that the semantic of this method is different from
+ * that of {@link Socket#shutdownInput()} and {@link Socket#isInputShutdown()}.
+ */
+ boolean isInputShutdown();
+
+ /**
+ * @see Socket#isOutputShutdown()
+ */
+ boolean isOutputShutdown();
+
+ /**
+ * @see Socket#shutdownOutput()
+ */
+ ChannelFuture shutdownOutput();
+
+ /**
+ * @see Socket#shutdownOutput()
+ *
+ * Will notify the given {@link ChannelPromise}
+ */
+ ChannelFuture shutdownOutput(ChannelPromise promise);
+}
diff --git a/transport/src/main/java/io/netty/channel/socket/SocketChannel.java b/transport/src/main/java/io/netty/channel/socket/SocketChannel.java
index ba0962c9fd7..22562c7baba 100644
--- a/transport/src/main/java/io/netty/channel/socket/SocketChannel.java
+++ b/transport/src/main/java/io/netty/channel/socket/SocketChannel.java
@@ -16,16 +16,13 @@
package io.netty.channel.socket;
import io.netty.channel.Channel;
-import io.netty.channel.ChannelFuture;
-import io.netty.channel.ChannelPromise;
import java.net.InetSocketAddress;
-import java.net.Socket;
/**
* A TCP/IP socket {@link Channel}.
*/
-public interface SocketChannel extends Channel {
+public interface SocketChannel extends DuplexChannel {
@Override
ServerSocketChannel parent();
@@ -35,28 +32,4 @@ public interface SocketChannel extends Channel {
InetSocketAddress localAddress();
@Override
InetSocketAddress remoteAddress();
-
- /**
- * Returns {@code true} if and only if the remote peer shut down its output so that no more
- * data is received from this channel. Note that the semantic of this method is different from
- * that of {@link Socket#shutdownInput()} and {@link Socket#isInputShutdown()}.
- */
- boolean isInputShutdown();
-
- /**
- * @see Socket#isOutputShutdown()
- */
- boolean isOutputShutdown();
-
- /**
- * @see Socket#shutdownOutput()
- */
- ChannelFuture shutdownOutput();
-
- /**
- * @see Socket#shutdownOutput()
- *
- * Will notify the given {@link ChannelPromise}
- */
- ChannelFuture shutdownOutput(ChannelPromise future);
}
| null | train | train | 2016-02-18T04:55:52 | 2016-02-18T15:45:48Z | marcuslinke | val |
netty/netty/4882_4883 | netty/netty | netty/netty/4882 | netty/netty/4883 | [
"timestamp(timedelta=7329.0, similarity=0.9337282066886308)"
] | 41d0a816912f32ff0405882cca347682549a709d | 628cb284c87767ad377d75dfe43ed237fc0c6505 | [
"@rtimush yes!\n",
"Duplicates https://github.com/netty/netty/issues/4881\n",
"Lets close this in favor of #4881 (reported first).\n"
] | [
"do we need this change ?\n",
"future -> promise\n",
"do we need this ?\n",
"do we need this ?\n",
"found another way, so don't need `public` anymore, fixed\n",
"done\n",
"no, fixed\n",
"no, fixed\n"
] | 2016-02-18T16:55:31Z | [
"duplicate"
] | shutdownOutput for EpollDomainSocketChannel | I see a protected `shutdownOutput0` method in `AbstractEpollStreamChannel` but there is no public `shutdownOutput` in `EpollDomainSocketChannel`.
I suppose `shutdownOutput` implementation from `EpollSocketChannel` can be moved to `AbstractEpollStreamChannel`:
``` java
@Override
public ChannelFuture shutdownOutput(final ChannelPromise promise) {
Executor closeExecutor = ((EpollSocketChannelUnsafe) unsafe()).prepareToClose();
if (closeExecutor != null) {
closeExecutor.execute(new OneTimeTask() {
@Override
public void run() {
shutdownOutput0(promise);
}
});
} else {
EventLoop loop = eventLoop();
if (loop.inEventLoop()) {
shutdownOutput0(promise);
} else {
loop.execute(new OneTimeTask() {
@Override
public void run() {
shutdownOutput0(promise);
}
});
}
}
return promise;
}
```
I expect this change to help with https://github.com/docker-java/docker-java/pull/472. If you consider this to be a valid solution then I can create a pull request.
| [
"transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollStreamChannel.java",
"transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollSocketChannel.java",
"transport-native-epoll/src/main/java/io/netty/channel/unix/DomainSocketChannel.java",
"transport/src/main/java/io/netty/channel... | [
"transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollStreamChannel.java",
"transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollSocketChannel.java",
"transport-native-epoll/src/main/java/io/netty/channel/unix/DomainSocketChannel.java",
"transport/src/main/java/io/netty/channel... | [] | diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollStreamChannel.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollStreamChannel.java
index 612afce5efb..78aa2ef8009 100644
--- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollStreamChannel.java
+++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollStreamChannel.java
@@ -29,6 +29,7 @@
import io.netty.channel.DefaultFileRegion;
import io.netty.channel.EventLoop;
import io.netty.channel.RecvByteBufAllocator;
+import io.netty.channel.socket.DuplexChannel;
import io.netty.channel.unix.FileDescriptor;
import io.netty.channel.unix.Socket;
import io.netty.util.internal.EmptyArrays;
@@ -44,13 +45,14 @@
import java.nio.ByteBuffer;
import java.nio.channels.ClosedChannelException;
import java.util.Queue;
+import java.util.concurrent.Executor;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import static io.netty.channel.unix.FileDescriptor.pipe;
import static io.netty.util.internal.ObjectUtil.checkNotNull;
-public abstract class AbstractEpollStreamChannel extends AbstractEpollChannel {
+public abstract class AbstractEpollStreamChannel extends AbstractEpollChannel implements DuplexChannel {
private static final String EXPECTED_TYPES =
" (expected: " + StringUtil.simpleClassName(ByteBuf.class) + ", " +
@@ -537,6 +539,47 @@ protected void shutdownOutput0(final ChannelPromise promise) {
}
}
+ @Override
+ public boolean isInputShutdown() {
+ return fd().isInputShutdown();
+ }
+
+ @Override
+ public boolean isOutputShutdown() {
+ return fd().isOutputShutdown();
+ }
+
+ @Override
+ public ChannelFuture shutdownOutput() {
+ return shutdownOutput(newPromise());
+ }
+
+ @Override
+ public ChannelFuture shutdownOutput(final ChannelPromise promise) {
+ Executor closeExecutor = ((EpollStreamUnsafe) unsafe()).prepareToClose();
+ if (closeExecutor != null) {
+ closeExecutor.execute(new OneTimeTask() {
+ @Override
+ public void run() {
+ shutdownOutput0(promise);
+ }
+ });
+ } else {
+ EventLoop loop = eventLoop();
+ if (loop.inEventLoop()) {
+ shutdownOutput0(promise);
+ } else {
+ loop.execute(new OneTimeTask() {
+ @Override
+ public void run() {
+ shutdownOutput0(promise);
+ }
+ });
+ }
+ }
+ return promise;
+ }
+
@Override
protected void doClose() throws Exception {
try {
@@ -610,6 +653,13 @@ private void safeClosePipe(FileDescriptor fd) {
}
class EpollStreamUnsafe extends AbstractEpollUnsafe {
+
+ // Overridden here just to be able to access this method from AbstractEpollStreamChannel
+ @Override
+ protected Executor prepareToClose() {
+ return super.prepareToClose();
+ }
+
private void handleReadException(ChannelPipeline pipeline, ByteBuf byteBuf, Throwable cause, boolean close) {
if (byteBuf != null) {
if (byteBuf.isReadable()) {
diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollSocketChannel.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollSocketChannel.java
index 4fe6b8a9f45..d0c90ea231c 100644
--- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollSocketChannel.java
+++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollSocketChannel.java
@@ -16,15 +16,11 @@
package io.netty.channel.epoll;
import io.netty.channel.Channel;
-import io.netty.channel.ChannelFuture;
-import io.netty.channel.ChannelPromise;
-import io.netty.channel.EventLoop;
import io.netty.channel.socket.ServerSocketChannel;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.unix.FileDescriptor;
import io.netty.channel.unix.Socket;
import io.netty.util.concurrent.GlobalEventExecutor;
-import io.netty.util.internal.OneTimeTask;
import io.netty.util.internal.PlatformDependent;
import java.net.InetAddress;
@@ -143,47 +139,6 @@ public EpollSocketChannelConfig config() {
return config;
}
- @Override
- public boolean isInputShutdown() {
- return fd().isInputShutdown();
- }
-
- @Override
- public boolean isOutputShutdown() {
- return fd().isOutputShutdown();
- }
-
- @Override
- public ChannelFuture shutdownOutput() {
- return shutdownOutput(newPromise());
- }
-
- @Override
- public ChannelFuture shutdownOutput(final ChannelPromise promise) {
- Executor closeExecutor = ((EpollSocketChannelUnsafe) unsafe()).prepareToClose();
- if (closeExecutor != null) {
- closeExecutor.execute(new OneTimeTask() {
- @Override
- public void run() {
- shutdownOutput0(promise);
- }
- });
- } else {
- EventLoop loop = eventLoop();
- if (loop.inEventLoop()) {
- shutdownOutput0(promise);
- } else {
- loop.execute(new OneTimeTask() {
- @Override
- public void run() {
- shutdownOutput0(promise);
- }
- });
- }
- }
- return promise;
- }
-
@Override
public ServerSocketChannel parent() {
return (ServerSocketChannel) super.parent();
diff --git a/transport-native-epoll/src/main/java/io/netty/channel/unix/DomainSocketChannel.java b/transport-native-epoll/src/main/java/io/netty/channel/unix/DomainSocketChannel.java
index f8163d91342..9a6c08b584b 100644
--- a/transport-native-epoll/src/main/java/io/netty/channel/unix/DomainSocketChannel.java
+++ b/transport-native-epoll/src/main/java/io/netty/channel/unix/DomainSocketChannel.java
@@ -15,11 +15,13 @@
*/
package io.netty.channel.unix;
+import io.netty.channel.socket.DuplexChannel;
+
/**
* A {@link UnixChannel} that supports communication via
* <a href="http://en.wikipedia.org/wiki/Unix_domain_socket">Unix Domain Socket</a>.
*/
-public interface DomainSocketChannel extends UnixChannel {
+public interface DomainSocketChannel extends UnixChannel, DuplexChannel {
@Override
DomainSocketAddress remoteAddress();
diff --git a/transport/src/main/java/io/netty/channel/socket/DuplexChannel.java b/transport/src/main/java/io/netty/channel/socket/DuplexChannel.java
new file mode 100644
index 00000000000..d34ec36bff1
--- /dev/null
+++ b/transport/src/main/java/io/netty/channel/socket/DuplexChannel.java
@@ -0,0 +1,51 @@
+/*
+ * Copyright 2016 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License,
+ * version 2.0 (the "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+package io.netty.channel.socket;
+
+import io.netty.channel.Channel;
+import io.netty.channel.ChannelFuture;
+import io.netty.channel.ChannelPromise;
+
+import java.net.Socket;
+
+/**
+ * A duplex {@link Channel} that has two sides that can be shutdown independently.
+ */
+public interface DuplexChannel extends Channel {
+ /**
+ * Returns {@code true} if and only if the remote peer shut down its output so that no more
+ * data is received from this channel. Note that the semantic of this method is different from
+ * that of {@link Socket#shutdownInput()} and {@link Socket#isInputShutdown()}.
+ */
+ boolean isInputShutdown();
+
+ /**
+ * @see Socket#isOutputShutdown()
+ */
+ boolean isOutputShutdown();
+
+ /**
+ * @see Socket#shutdownOutput()
+ */
+ ChannelFuture shutdownOutput();
+
+ /**
+ * @see Socket#shutdownOutput()
+ *
+ * Will notify the given {@link ChannelPromise}
+ */
+ ChannelFuture shutdownOutput(ChannelPromise promise);
+}
diff --git a/transport/src/main/java/io/netty/channel/socket/SocketChannel.java b/transport/src/main/java/io/netty/channel/socket/SocketChannel.java
index ba0962c9fd7..22562c7baba 100644
--- a/transport/src/main/java/io/netty/channel/socket/SocketChannel.java
+++ b/transport/src/main/java/io/netty/channel/socket/SocketChannel.java
@@ -16,16 +16,13 @@
package io.netty.channel.socket;
import io.netty.channel.Channel;
-import io.netty.channel.ChannelFuture;
-import io.netty.channel.ChannelPromise;
import java.net.InetSocketAddress;
-import java.net.Socket;
/**
* A TCP/IP socket {@link Channel}.
*/
-public interface SocketChannel extends Channel {
+public interface SocketChannel extends DuplexChannel {
@Override
ServerSocketChannel parent();
@@ -35,28 +32,4 @@ public interface SocketChannel extends Channel {
InetSocketAddress localAddress();
@Override
InetSocketAddress remoteAddress();
-
- /**
- * Returns {@code true} if and only if the remote peer shut down its output so that no more
- * data is received from this channel. Note that the semantic of this method is different from
- * that of {@link Socket#shutdownInput()} and {@link Socket#isInputShutdown()}.
- */
- boolean isInputShutdown();
-
- /**
- * @see Socket#isOutputShutdown()
- */
- boolean isOutputShutdown();
-
- /**
- * @see Socket#shutdownOutput()
- */
- ChannelFuture shutdownOutput();
-
- /**
- * @see Socket#shutdownOutput()
- *
- * Will notify the given {@link ChannelPromise}
- */
- ChannelFuture shutdownOutput(ChannelPromise future);
}
| null | train | train | 2016-02-18T04:55:52 | 2016-02-18T15:54:25Z | rtimush | val |
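Both shutdown-related records above ultimately expose `java.net.Socket`-style half-close through the new `DuplexChannel` interface. The underlying TCP behavior — after `shutdownOutput()` the peer drains any buffered data and then reads EOF, while the shutting-down side can still read replies — can be demonstrated with plain JDK sockets; the class and method names below are illustrative and not part of the patch:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public final class HalfCloseDemo {

    // Connect a socket pair over loopback, half-close the client's output,
    // and verify the peer still receives pending data followed by EOF (-1).
    public static boolean peerSeesEof() {
        try (ServerSocket server = new ServerSocket(0);
             Socket client = new Socket("127.0.0.1", server.getLocalPort());
             Socket accepted = server.accept()) {
            OutputStream out = client.getOutputStream();
            out.write('x');
            out.flush();
            client.shutdownOutput();             // send FIN; socket stays open for reads
            InputStream in = accepted.getInputStream();
            boolean gotData = in.read() == 'x';  // buffered data is still delivered
            boolean gotEof = in.read() == -1;    // then the peer observes EOF
            return gotData && gotEof;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("peer observed EOF after half-close: " + peerSeesEof());
    }
}
```

In Netty terms this is what `((DuplexChannel) channel).shutdownOutput()` provides without closing the whole channel — the behavior the docker-java use case linked from both issues depends on.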
netty/netty/4892_4896 | netty/netty | netty/netty/4892 | netty/netty/4896 | [
"timestamp(timedelta=6182.0, similarity=0.9210339341556103)"
] | 5ce504070f20d6c8d356b92187f791e3eed0f8f4 | f31f158fbe0f4726a1a81bfb87cb825290cadbf9 | [
"@trustin will take care.\n",
"Thanks, @normanmaurer \n",
"Thank you very much!\n",
"@trustin and @normanmaurer I was trying to use the Snappy codec and it seems to me that the limit to the input `ByteBuf` is 36KB. Beyond this, the codec throws an `IndexOutOfBoundsException`. Is there an undocumented limit to... | [] | 2016-02-25T08:11:02Z | [
"feature"
] | Consider making Snappy class public | Maybe useful for users who want to use Snappy compression without handlers? (like Base64 in Netty)
https://groups.google.com/d/topic/netty/I8-M02b2ghg/discussion
| [
"codec/src/main/java/io/netty/handler/codec/compression/Snappy.java"
] | [
"codec/src/main/java/io/netty/handler/codec/compression/Snappy.java"
] | [] | diff --git a/codec/src/main/java/io/netty/handler/codec/compression/Snappy.java b/codec/src/main/java/io/netty/handler/codec/compression/Snappy.java
index df4e88a0821..9a02f3a0f3f 100644
--- a/codec/src/main/java/io/netty/handler/codec/compression/Snappy.java
+++ b/codec/src/main/java/io/netty/handler/codec/compression/Snappy.java
@@ -21,9 +21,9 @@
* Uncompresses an input {@link ByteBuf} encoded with Snappy compression into an
* output {@link ByteBuf}.
*
- * See http://code.google.com/p/snappy/source/browse/trunk/format_description.txt
+ * See <a href="http://code.google.com/p/snappy/source/browse/trunk/format_description.txt">snappy format</a>.
*/
-class Snappy {
+public final class Snappy {
private static final int MAX_HT_SIZE = 1 << 14;
private static final int MIN_COMPRESSIBLE_BYTES = 15;
@@ -598,7 +598,7 @@ private static void validateOffset(int offset, int chunkSizeSoFar) {
*
* @param data The input data to calculate the CRC32C checksum of
*/
- public static int calculateChecksum(ByteBuf data) {
+ static int calculateChecksum(ByteBuf data) {
return calculateChecksum(data, data.readerIndex(), data.readableBytes());
}
@@ -608,7 +608,7 @@ public static int calculateChecksum(ByteBuf data) {
*
* @param data The input data to calculate the CRC32C checksum of
*/
- public static int calculateChecksum(ByteBuf data, int offset, int length) {
+ static int calculateChecksum(ByteBuf data, int offset, int length) {
Crc32c crc32 = new Crc32c();
try {
if (data.hasArray()) {
| null | train | train | 2016-02-24T03:17:35 | 2016-02-23T09:57:56Z | trustin | val |
netty/netty/4909_4910 | netty/netty | netty/netty/4909 | netty/netty/4910 | [
"timestamp(timedelta=18.0, similarity=0.8591960005876418)"
] | 0bea10b0b064d7983a1422af792eeb70e511eac4 | 74de2411fad86255c149752d22479401e828c4d3 | [
"Sent a PR[1] to fix this.\n\n[1]. https://github.com/netty/netty/pull/4910\n",
"Fixed by https://github.com/netty/netty/pull/4910\n"
] | [] | 2016-02-28T08:34:29Z | [
"cleanup"
] | Build warning message due to missing paxexam version | Netty version: 4.1.0.CR4-SNAPSHOT
Context:
I have noticed below warning message[1] when building netty project.
[1].
[WARNING] Some problems were encountered while building the effective model for io.netty:netty-testsuite-osgi:jar:4.1.0.CR4-SNAPSHOT
[WARNING] 'build.plugins.plugin.version' for org.ops4j.pax.exam:maven-paxexam-plugin is missing. @ io.netty:netty-testsuite-osgi:[unknown-version], /home/chandana/Documents/branches/public/git/netty/testsuite-osgi/pom.xml, line 220, column 15
[WARNING]
Steps to reproduce:
1. Try to build netty project
2. warning message will be logged in the stacktrace.
$ java -version
java version "1.8.0_45"
Java(TM) SE Runtime Environment (build 1.8.0_45-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)
Operating system: Ubuntu Linux 13.04 64-bit
$ uname -a
Linux chandana-Latitude-E6540 3.13.0-79-generic #123-Ubuntu SMP Fri Feb 19 14:27:58 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
| [
"pom.xml"
] | [
"pom.xml"
] | [] | diff --git a/pom.xml b/pom.xml
index b38fb3b30f3..dd5aaf4a6bd 100644
--- a/pom.xml
+++ b/pom.xml
@@ -1009,6 +1009,11 @@
<artifactId>maven-resources-plugin</artifactId>
<version>2.6</version>
</plugin>
+ <plugin>
+ <groupId>org.ops4j.pax.exam</groupId>
+ <artifactId>maven-paxexam-plugin</artifactId>
+ <version>1.2.4</version>
+ </plugin>
<plugin>
<artifactId>maven-jar-plugin</artifactId>
<version>2.5</version>
| null | train | train | 2016-02-26T12:15:10 | 2016-02-28T08:25:12Z | cnapagoda | val |
netty/netty/3172_4931 | netty/netty | netty/netty/3172 | netty/netty/4931 | [
"timestamp(timedelta=16.0, similarity=0.9260823347012344)"
] | 0d3eda38e15582d1dac05920787f1c9f3a9f781e | ae0f43ea33fb4eb27d3a6b3b8ca0b51c436921c1 | [
"Fixed by https://github.com/netty/netty/pull/4931\n"
] | [] | 2016-03-04T07:08:09Z | [
"cleanup"
] | Print the full thread dump on test timeout | Otherwise it is sometimes difficult to analyze the test timeout reported by our CI server - e.g. #3138
We could write a JUnit `RunListener` that prints the full thread dump when the test fails with timeout, and also do some additional operations like triggering GC when running a leak test.
The Maven surefire plugin allows us to specify the listener in the pom:
```
<properties>
<property>
<name>listener</name>
<value>...NettyRunListener</value>
</property>
</properties>
```
| [
"pom.xml"
] | [
"pom.xml"
] | [] | diff --git a/pom.xml b/pom.xml
index 0697eeb437a..8264f7f8d44 100644
--- a/pom.xml
+++ b/pom.xml
@@ -201,6 +201,7 @@
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
+ <netty.build.version>22</netty.build.version>
<jboss.marshalling.version>1.3.18.GA</jboss.marshalling.version>
<jetty.alpnAgent.version>1.0.1.Final</jetty.alpnAgent.version>
<jetty.alpnAgent.path>${settings.localRepository}/kr/motd/javaagent/jetty-alpn-agent/${jetty.alpnAgent.version}/jetty-alpn-agent-${jetty.alpnAgent.version}.jar</jetty.alpnAgent.path>
@@ -420,6 +421,12 @@
<version>4.12</version>
<scope>test</scope>
</dependency>
+ <dependency>
+ <groupId>${project.groupId}</groupId>
+ <artifactId>netty-build</artifactId>
+ <version>${netty.build.version}</version>
+ <scope>test</scope>
+ </dependency>
<dependency>
<groupId>org.hamcrest</groupId>
<artifactId>hamcrest-library</artifactId>
@@ -532,6 +539,11 @@
<artifactId>junit</artifactId>
<scope>test</scope>
</dependency>
+ <dependency>
+ <groupId>${project.groupId}</groupId>
+ <artifactId>netty-build</artifactId>
+ <scope>test</scope>
+ </dependency>
<dependency>
<groupId>org.hamcrest</groupId>
<artifactId>hamcrest-library</artifactId>
@@ -714,7 +726,7 @@
<dependency>
<groupId>${project.groupId}</groupId>
<artifactId>netty-build</artifactId>
- <version>21</version>
+ <version>${netty.build.version}</version>
</dependency>
</dependencies>
</plugin>
@@ -774,6 +786,12 @@
</excludes>
<runOrder>random</runOrder>
<argLine>${argLine.common} ${argLine.alpnAgent} ${argLine.leak} ${argLine.coverage} ${argLine.noUnsafe}</argLine>
+ <properties>
+ <property>
+ <name>listener</name>
+ <value>io.netty.build.junit.TimedOutTestsListener</value>
+ </property>
+ </properties>
</configuration>
</plugin>
<!-- always produce osgi bundles -->
| null | val | train | 2016-03-03T15:18:39 | 2014-11-21T23:02:58Z | trustin | val |
netty/netty/4771_4946 | netty/netty | netty/netty/4771 | netty/netty/4946 | [
"timestamp(timedelta=9.0, similarity=0.8698793937065923)"
] | d8658989e1a9a998373886738a17ba8cc9e270ae | abbc4ec1365929c8941dbc98f6ba06f37cb11d61 | [
"@trustin PTAL ....\n",
"I can help if @serphacker shares a reproducer.\n",
"@serphacker ping\n",
"Should be fixed by #4946 \n"
] | [] | 2016-03-07T02:19:16Z | [
"defect"
] | DnsNameResolverContext infinite loop | Got stuck there:
https://github.com/netty/netty/blob/netty-4.1.0.Beta8/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolverContext.java#L280
I think I did receive a response to another query but with the same id.
```
"dns-3-1" #12 prio=5 os_prio=0 tid=0x00007f87c442d800 nid=0x1401 runnable [0x00007f87a9ca0000]
java.lang.Thread.State: RUNNABLE
at io.netty.resolver.dns.DnsNameResolverContext.onResponseCNAME(DnsNameResolverContext.java:288)
at io.netty.resolver.dns.DnsNameResolverContext.onResponseAorAAAA(DnsNameResolverContext.java:264)
at io.netty.resolver.dns.DnsNameResolverContext.onResponse(DnsNameResolverContext.java:164)
at io.netty.resolver.dns.DnsNameResolverContext$2.operationComplete(DnsNameResolverContext.java:142)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:567)
at io.netty.util.concurrent.DefaultPromise.setSuccess(DefaultPromise.java:397)
at io.netty.resolver.dns.DnsQueryContext.setSuccess(DnsQueryContext.java:181)
at io.netty.resolver.dns.DnsQueryContext.finish(DnsQueryContext.java:164)
at io.netty.resolver.dns.DnsNameResolver$DnsResponseHandler.channelRead(DnsNameResolver.java:944)
at io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
at io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:163)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:155)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
at io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:163)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:155)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:950)
at io.netty.channel.nio.AbstractNioMessageChannel$NioMessageUnsafe.read(AbstractNioMessageChannel.java:94)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:510)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:467)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:381)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:353)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)
```
| [
"resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolverContext.java"
] | [
"resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolverContext.java"
] | [
"resolver-dns/src/test/java/io/netty/resolver/dns/DnsNameResolverTest.java"
] | diff --git a/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolverContext.java b/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolverContext.java
index 15add823202..8d42d041bf2 100644
--- a/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolverContext.java
+++ b/resolver-dns/src/main/java/io/netty/resolver/dns/DnsNameResolverContext.java
@@ -283,8 +283,10 @@ private void onResponseCNAME(
final String name = question.name().toLowerCase(Locale.US);
String resolved = name;
boolean found = false;
- for (;;) {
- String next = cnames.get(resolved);
+ while (!cnames.isEmpty()) { // Do not attempt to call Map.remove() when the Map is empty
+ // because it can be Collections.emptyMap()
+ // whose remove() throws a UnsupportedOperationException.
+ final String next = cnames.remove(resolved);
if (next != null) {
found = true;
resolved = next;
| diff --git a/resolver-dns/src/test/java/io/netty/resolver/dns/DnsNameResolverTest.java b/resolver-dns/src/test/java/io/netty/resolver/dns/DnsNameResolverTest.java
index 9218a0ce688..aef6fa76090 100644
--- a/resolver-dns/src/test/java/io/netty/resolver/dns/DnsNameResolverTest.java
+++ b/resolver-dns/src/test/java/io/netty/resolver/dns/DnsNameResolverTest.java
@@ -265,7 +265,7 @@ public class DnsNameResolverTest {
private static final TestDnsServer dnsServer = new TestDnsServer();
private static final EventLoopGroup group = new NioEventLoopGroup(1);
- private DnsNameResolverBuilder newResolver() {
+ private static DnsNameResolverBuilder newResolver() {
return new DnsNameResolverBuilder(group.next())
.channelType(NioDatagramChannel.class)
.nameServerAddresses(DnsServerAddresses.singleton(dnsServer.localAddress()))
@@ -273,7 +273,7 @@ private DnsNameResolverBuilder newResolver() {
.optResourceEnabled(false);
}
- private DnsNameResolverBuilder newResolver(InternetProtocolFamily... resolvedAddressTypes) {
+ private static DnsNameResolverBuilder newResolver(InternetProtocolFamily... resolvedAddressTypes) {
return newResolver()
.resolvedAddressTypes(resolvedAddressTypes);
}
@@ -349,7 +349,7 @@ public void testResolveAAAA() throws Exception {
}
}
- private Map<String, InetAddress> testResolve0(DnsNameResolver resolver, Set<String> excludedDomains)
+ private static Map<String, InetAddress> testResolve0(DnsNameResolver resolver, Set<String> excludedDomains)
throws InterruptedException {
assertThat(resolver.isRecursionDesired(), is(true));
@@ -482,7 +482,7 @@ public void run() {
}
}
- private UnknownHostException resolveNonExistentDomain(DnsNameResolver resolver) {
+ private static UnknownHostException resolveNonExistentDomain(DnsNameResolver resolver) {
try {
resolver.resolve("non-existent.netty.io").sync();
fail();
@@ -505,8 +505,7 @@ public void testResolveIp() {
}
}
- private void resolve(DnsNameResolver resolver, Map<String, Future<InetAddress>> futures, String hostname) {
-
+ private static void resolve(DnsNameResolver resolver, Map<String, Future<InetAddress>> futures, String hostname) {
futures.put(hostname, resolver.resolve(hostname));
}
@@ -580,7 +579,7 @@ public void encode(IoSession session, Object message, ProtocolEncoderOutput out)
// This is a hack to allow to also test for AAAA resolution as DnsMessageEncoder
// does not support it and it is hard to extend, because the interesting methods
// are private...
- // In case of RecordType.AAAA we need to encode the RecordType by ourself.
+ // In case of RecordType.AAAA we need to encode the RecordType by ourselves.
if (record.getRecordType() == RecordType.AAAA) {
try {
recordEncoder.put(buf, record);
@@ -639,10 +638,10 @@ private static String nextDomain() {
}
private static String nextIp() {
- return ippart() + "." + ippart() + '.' + ippart() + '.' + ippart();
+ return ipPart() + "." + ipPart() + '.' + ipPart() + '.' + ipPart();
}
- private static int ippart() {
+ private static int ipPart() {
return NUMBERS[index(NUMBERS.length)];
}
@@ -672,10 +671,10 @@ public Set<ResourceRecord> getRecords(QuestionRecord questionRecord) {
} while (ThreadLocalRandom.current().nextBoolean());
break;
case MX:
- int prioritity = 0;
+ int priority = 0;
do {
rm.put(DnsAttribute.DOMAIN_NAME, nextDomain());
- rm.put(DnsAttribute.MX_PREFERENCE, String.valueOf(++prioritity));
+ rm.put(DnsAttribute.MX_PREFERENCE, String.valueOf(++priority));
} while (ThreadLocalRandom.current().nextBoolean());
break;
default:
| train | train | 2016-03-06T17:47:38 | 2016-01-27T08:50:33Z | noguespi | val |
netty/netty/4958_4963 | netty/netty | netty/netty/4958 | netty/netty/4963 | [
"timestamp(timedelta=91964.0, similarity=0.9663956059940177)"
] | bfbef036a8c1121083b485a98b9cb04a84e7dfea | 2565cf7a93daaae9a0aae998567f6289d8b1a85f | [
"@rkapsi I think we could do this. That said we would still need a finalizer for the `OpenSslEngine` itself as we never know what the user will do with it (or will use it at all after creating).\n",
"@normanmaurer Correct. It's meant to be an optimization for folks who don't want to wait for the GC. The finalizer... | [] | 2016-03-11T16:30:24Z | [] | Let OpenSslContext implement ReferenceCounted? | Hello,
would it make sense to let the `OpenSslContext` class implement the `ReferenceCounted` interface?
We have a dynamic and unbounded set of OpenSslContext instances that are concurrently in use and it would be great if we can release the native resources in a timely manner. We're currently using a wrapper object that extends SslContext, implements ReferenceCounted and gets passed into the SniHandler.
Something along the lines of this pseudo code. I'd be happy to open a PR.
``` java
public class OpenSslContext extends SslContext implements ReferenceCounted {
private final ReferenceCounted refCnt = new AbstractReferenceCounted() {
@Override
public ReferenceCounted touch(Object hint) {
OpenSslContext.this.hint = hint;
return this;
}
@Override
protected void deallocate() {
destroy();
}
};
private Object hint = null;
@Override
public int refCnt() {
return refCnt.refCnt();
}
@Override
public OpenSslContext retain() {
refCnt.retain();
return this;
}
@Override
public OpenSslContext retain(int increment) {
refCnt.retain(increment);
return this;
}
@Override
public OpenSslContext touch() {
refCnt.touch();
return this;
}
@Override
public OpenSslContext touch(Object hint) {
refCnt.touch(hint);
return this;
}
@Override
public boolean release() {
return refCnt.release();
}
@Override
public boolean release(int decrement) {
return refCnt.release(decrement);
}
protected final void destroy() {
synchronized (OpenSslContext.class) {
if (ctx != 0) {
SSLContext.free(ctx);
ctx = 0;
}
// Guard against multiple destroyPools() calls triggered by construction exception and finalize() later
if (aprPool != 0) {
Pool.destroy(aprPool);
aprPool = 0;
}
}
}
}
```
| [
"handler/src/main/java/io/netty/handler/ssl/OpenSslContext.java"
] | [
"handler/src/main/java/io/netty/handler/ssl/OpenSslContext.java"
] | [] | diff --git a/handler/src/main/java/io/netty/handler/ssl/OpenSslContext.java b/handler/src/main/java/io/netty/handler/ssl/OpenSslContext.java
index 1dceb7885e8..810c22e5bad 100644
--- a/handler/src/main/java/io/netty/handler/ssl/OpenSslContext.java
+++ b/handler/src/main/java/io/netty/handler/ssl/OpenSslContext.java
@@ -19,7 +19,9 @@
import io.netty.buffer.ByteBufAllocator;
import io.netty.buffer.Unpooled;
import io.netty.handler.codec.base64.Base64;
+import io.netty.util.AbstractReferenceCounted;
import io.netty.util.CharsetUtil;
+import io.netty.util.ReferenceCounted;
import io.netty.util.internal.PlatformDependent;
import io.netty.util.internal.SystemPropertyUtil;
import io.netty.util.internal.logging.InternalLogger;
@@ -51,7 +53,7 @@
import static io.netty.handler.ssl.ApplicationProtocolConfig.SelectorFailureBehavior;
import static io.netty.handler.ssl.ApplicationProtocolConfig.SelectedListenerFailureBehavior;
-public abstract class OpenSslContext extends SslContext {
+public abstract class OpenSslContext extends SslContext implements ReferenceCounted {
private static final byte[] BEGIN_CERT = "-----BEGIN CERTIFICATE-----\n".getBytes(CharsetUtil.US_ASCII);
private static final byte[] END_CERT = "\n-----END CERTIFICATE-----\n".getBytes(CharsetUtil.US_ASCII);
private static final byte[] BEGIN_PRIVATE_KEY = "-----BEGIN PRIVATE KEY-----\n".getBytes(CharsetUtil.US_ASCII);
@@ -73,6 +75,28 @@ public abstract class OpenSslContext extends SslContext {
// TODO: Maybe make configurable ?
protected static final int VERIFY_DEPTH = 10;
+ /**
+ * [4958]: This reference counter provides an optimization that allows the
+ * user to efficiently share a context and also eagerly release its native
+ * resources once it's no longer in use instead of relaying on the Garbage
+ * Collector and object finalization.
+ */
+ private final ReferenceCounted refCnt = new AbstractReferenceCounted() {
+ @SuppressWarnings("unused")
+ private Object hint;
+
+ @Override
+ public ReferenceCounted touch(Object hint) {
+ this.hint = hint;
+ return this;
+ }
+
+ @Override
+ protected void deallocate() {
+ destroy();
+ }
+ };
+
/** The OpenSSL SSL_CTX object */
protected volatile long ctx;
long aprPool;
@@ -387,6 +411,45 @@ protected final void destroy() {
}
}
+ @Override
+ public int refCnt() {
+ return refCnt.refCnt();
+ }
+
+ @Override
+ public OpenSslContext retain() {
+ refCnt.retain();
+ return this;
+ }
+
+ @Override
+ public OpenSslContext retain(int increment) {
+ refCnt.retain(increment);
+ return this;
+ }
+
+ @Override
+ public OpenSslContext touch() {
+ refCnt.touch();
+ return this;
+ }
+
+ @Override
+ public OpenSslContext touch(Object hint) {
+ refCnt.touch(hint);
+ return this;
+ }
+
+ @Override
+ public boolean release() {
+ return refCnt.release();
+ }
+
+ @Override
+ public boolean release(int decrement) {
+ return refCnt.release(decrement);
+ }
+
protected static X509Certificate[] certificates(byte[][] chain) {
X509Certificate[] peerCerts = new X509Certificate[chain.length];
for (int i = 0; i < peerCerts.length; i++) {
| null | test | train | 2016-03-11T16:42:30 | 2016-03-10T14:20:12Z | rkapsi | val |
netty/netty/4829_4974 | netty/netty | netty/netty/4829 | netty/netty/4974 | [
"timestamp(timedelta=22.0, similarity=0.8910680903060451)"
] | 404666d247504e1bbffbeb87445628735b9e2342 | bd3e9ebddb6d250d69d79fa17081266034b7606a | [
"Will check...\n",
"yep that's a possible race, but it doesn't really happen in practice and I think if it does nobody will notice or care :).\n\nIt may happen if two threads have the same arena's assigned to them, and those two threads need to concurrently allocate from two different tiny size classes and there ... | [] | 2016-03-13T22:20:47Z | [
"defect"
] | Race in PoolArena.allocate | It appears there might be a race in `io.netty.buffer.PoolArena.allocate`, where `allocationsTiny` is incremented. While the value is incremented under a lock, it also appears to be mutated elsewhere in the file without a lock. A race detector suggested the two places where the class accesses this field in an unsynchronized way:
```
Read of size 8 at 0x7f2f7be16108 by thread T53 (mutexes: write M95278):
#0 io.netty.buffer.PoolArena.allocate(Lio/netty/buffer/PoolThreadCache;Lio/netty/buffer/PooledByteBuf;I)V (PoolArena.java:198)
#1 io.netty.buffer.PoolArena.reallocate(Lio/netty/buffer/PooledByteBuf;IZ)V (PoolArena.java:359)
#2 io.netty.buffer.PooledByteBuf.capacity(I)Lio/netty/buffer/ByteBuf; (PooledByteBuf.java:120)
#3 io.netty.buffer.AbstractByteBuf.ensureWritable0(I)V (AbstractByteBuf.java:269)
#4 io.netty.buffer.AbstractByteBuf.ensureWritable(I)Lio/netty/buffer/ByteBuf; (AbstractByteBuf.java:250)
```
and
```
#0 io.netty.buffer.PoolArena.allocate(Lio/netty/buffer/PoolThreadCache;Lio/netty/buffer/PooledByteBuf;I)V (PoolArena.java:198)
#1 io.netty.buffer.PoolArena.allocate(Lio/netty/buffer/PoolThreadCache;II)Lio/netty/buffer/PooledByteBuf; (PoolArena.java:133)
#2 io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(II)Lio/netty/buffer/ByteBuf; (PooledByteBufAllocator.java:262)
#3 io.netty.buffer.AbstractByteBufAllocator.directBuffer(II)Lio/netty/buffer/ByteBuf; (AbstractByteBufAllocator.java:157)
#4 io.netty.buffer.AbstractByteBufAllocator.directBuffer(I)Lio/netty/buffer/ByteBuf; (AbstractByteBufAllocator.java:148)
```
cc: @nmittler & @normanmaurer
| [
"buffer/src/main/java/io/netty/buffer/PoolArena.java"
] | [
"buffer/src/main/java/io/netty/buffer/PoolArena.java"
] | [] | diff --git a/buffer/src/main/java/io/netty/buffer/PoolArena.java b/buffer/src/main/java/io/netty/buffer/PoolArena.java
index 4cb6af2b6a7..33445241b7b 100644
--- a/buffer/src/main/java/io/netty/buffer/PoolArena.java
+++ b/buffer/src/main/java/io/netty/buffer/PoolArena.java
@@ -57,8 +57,8 @@ enum SizeClass {
private final List<PoolChunkListMetric> chunkListMetrics;
// Metrics for allocations and deallocations
- private long allocationsTiny;
- private long allocationsSmall;
+ private LongCounter allocationsTiny = PlatformDependent.newLongCounter();
+ private LongCounter allocationsSmall = PlatformDependent.newLongCounter();
private long allocationsNormal;
// We need to use the LongCounter here as this is not guarded via synchronized block.
private final LongCounter allocationsHuge = PlatformDependent.newLongCounter();
@@ -195,9 +195,9 @@ private void allocate(PoolThreadCache cache, PooledByteBuf<T> buf, final int req
s.chunk.initBufWithSubpage(buf, handle, reqCapacity);
if (tiny) {
- ++allocationsTiny;
+ allocationsTiny.increment();
} else {
- ++allocationsSmall;
+ allocationsSmall.increment();
}
return;
}
@@ -432,17 +432,17 @@ private static List<PoolSubpageMetric> subPageMetricList(PoolSubpage<?>[] pages)
@Override
public long numAllocations() {
- return allocationsTiny + allocationsSmall + allocationsNormal + allocationsHuge.value();
+ return allocationsTiny.value() + allocationsSmall.value() + allocationsNormal + allocationsHuge.value();
}
@Override
public long numTinyAllocations() {
- return allocationsTiny;
+ return allocationsTiny.value();
}
@Override
public long numSmallAllocations() {
- return allocationsSmall;
+ return allocationsSmall.value();
}
@Override
| null | test | train | 2016-03-11T23:24:45 | 2016-02-03T19:07:54Z | carl-mastrangelo | val |
netty/netty/4936_4977 | netty/netty | netty/netty/4936 | netty/netty/4977 | [
"timestamp(timedelta=14.0, similarity=0.9426395707392698)"
] | 35771dd1cdd20e291a4a1a15ef04da57329c41db | e6b1951ee42738be3894be5da1bd72dd8f93b520 | [
"@ejona86 - Good find ... seems like a bug. You went through all the work to find it ... do you want to submit the patch and get credit for the fix too?\n\nhttps://docs.oracle.com/javase/7/docs/api/java/security/AccessController.html#doPrivileged(java.security.PrivilegedAction)\n\n> If the action's run method throw... | [] | 2016-03-14T08:21:26Z | [
"defect"
] | NetUtil can prevent using Netty due to SecurityManager denial | @carl-mastrangelo and I recently encountered a regression when upgrading to 4.1.0-CR3 due to a51e2c87, with the same error as #3680 ("Unable to create Channel from class" caused by a `NoClassDefFoundError`). The exception was surprisingly unhelpful; we don't know why Java failed to include the additional cause.
After digging in deeper, we found the problem was due to a `SecurityException` being thrown from a File.exists call in NetUtil:
https://github.com/netty/netty/blob/4.1/common/src/main/java/io/netty/util/NetUtil.java#L248
It seems like the previous fix swapped to `doPrivileged()`, which increases the cases that the code will work without an exception, but `SecurityManager`s are still permitted to throw `SecurityException`. Since the call is intended to be optional, if the `SecurityManager` denies the call then the code should probably catch the exception and treat it as if the file does not exist.
A custom `SecurityManager` was being used, but from my reading of `AccessController` and `doPrivileged()` the `SecurityManager` is behaving correctly:
> If that domain does not have the specified permission, an exception is thrown, as usual.
Once we got to this point we worked around the problem by whitelisting `/proc/sys/net/core/somaxconn` in the `SecurityManager`. So there's not an active need for a fix, but others may appreciate one.
The backtrace we saw:
```
Caused by: io.netty.channel.ChannelException: Unable to create Channel from class class io.netty.channel.socket.nio.NioSocketChannel
at io.netty.channel.ReflectiveChannelFactory.newChannel(ReflectiveChannelFactory.java:40)
at io.netty.bootstrap.AbstractBootstrap.initAndRegister(AbstractBootstrap.java:316)
at io.netty.bootstrap.Bootstrap.doResolveAndConnect(Bootstrap.java:160)
at io.netty.bootstrap.Bootstrap.connect(Bootstrap.java:142)
at io.grpc.netty.NettyClientTransport.start(NettyClientTransport.java:131)
at io.grpc.internal.TransportSet$1.run(TransportSet.java:200)
at io.grpc.internal.TransportSet.scheduleConnection(TransportSet.java:241)
at io.grpc.internal.TransportSet.obtainActiveTransport(TransportSet.java:166)
at io.grpc.internal.ManagedChannelImpl$3.getTransport(ManagedChannelImpl.java:381)
at io.grpc.SimpleLoadBalancerFactory$SimpleLoadBalancer.pickTransport(SimpleLoadBalancerFactory.java:97)
at io.grpc.internal.ManagedChannelImpl$1.get(ManagedChannelImpl.java:135)
at io.grpc.internal.ClientCallImpl.start(ClientCallImpl.java:206)
at io.grpc.auth.ClientAuthInterceptor$1.checkedStart(ClientAuthInterceptor.java:101)
at io.grpc.ClientInterceptors$CheckedForwardingClientCall.start(ClientInterceptors.java:164)
at io.grpc.stub.ClientCalls.startCall(ClientCalls.java:245)
at io.grpc.stub.ClientCalls.asyncUnaryRequestCall(ClientCalls.java:225)
at io.grpc.stub.ClientCalls.futureUnaryCall(ClientCalls.java:186)
at io.grpc.stub.ClientCalls.blockingUnaryCall(ClientCalls.java:132)
<snip>
... 95 more
Caused by: java.lang.NoClassDefFoundError: Could not initialize class io.netty.channel.DefaultChannelId
at io.netty.channel.AbstractChannel.newId(AbstractChannel.java:109)
at io.netty.channel.AbstractChannel.<init>(AbstractChannel.java:81)
at io.netty.channel.nio.AbstractNioChannel.<init>(AbstractNioChannel.java:82)
at io.netty.channel.nio.AbstractNioByteChannel.<init>(AbstractNioByteChannel.java:52)
at io.netty.channel.socket.nio.NioSocketChannel.<init>(NioSocketChannel.java:97)
at io.netty.channel.socket.nio.NioSocketChannel.<init>(NioSocketChannel.java:87)
at io.netty.channel.socket.nio.NioSocketChannel.<init>(NioSocketChannel.java:80)
at io.netty.channel.socket.nio.NioSocketChannel.<init>(NioSocketChannel.java:73)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at java.lang.Class.newInstance(Class.java:443)
at io.netty.channel.ReflectiveChannelFactory.newChannel(ReflectiveChannelFactory.java:38)
... 116 more
```
Debugging found `Security policy violation: ("java.io.FilePermission" "/proc/sys/net/core/somaxconn" "read")` at:
```
<snip>SecurityManager.checkRead(...)
at java.io.File.exists(File.java:814)
at io.netty.util.NetUtil$1.run(NetUtil.java:248)
at io.netty.util.NetUtil$1.run(NetUtil.java:239)
at java.security.AccessController.doPrivileged(Native Method)
at io.netty.util.NetUtil.<clinit>(NetUtil.java:239)
at io.netty.util.internal.MacAddressUtil.bestAvailableMac(MacAddressUtil.java:53)
at io.netty.channel.DefaultChannelId.defaultMachineId(DefaultChannelId.java:124)
at io.netty.channel.DefaultChannelId.<clinit>(DefaultChannelId.java:101)
at io.netty.channel.AbstractChannel.newId(AbstractChannel.java:109)
at io.netty.channel.AbstractChannel.<init>(AbstractChannel.java:81)
at io.netty.channel.nio.AbstractNioChannel.<init>(AbstractNioChannel.java:82)
at io.netty.channel.nio.AbstractNioByteChannel.<init>(AbstractNioByteChannel.java:52)
at io.netty.channel.socket.nio.NioSocketChannel.<init>(NioSocketChannel.java:97)
at io.netty.channel.socket.nio.NioSocketChannel.<init>(NioSocketChannel.java:87)
at io.netty.channel.socket.nio.NioSocketChannel.<init>(NioSocketChannel.java:80)
at io.netty.channel.socket.nio.NioSocketChannel.<init>(NioSocketChannel.java:73)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at java.lang.Class.newInstance(Class.java:443)
at io.netty.channel.ReflectiveChannelFactory.newChannel(ReflectiveChannelFactory.java:38)
```
| [
"common/src/main/java/io/netty/util/NetUtil.java"
] | [
"common/src/main/java/io/netty/util/NetUtil.java"
] | [] | diff --git a/common/src/main/java/io/netty/util/NetUtil.java b/common/src/main/java/io/netty/util/NetUtil.java
index 5830435f7b7..fdadd53b0c7 100644
--- a/common/src/main/java/io/netty/util/NetUtil.java
+++ b/common/src/main/java/io/netty/util/NetUtil.java
@@ -245,28 +245,31 @@ public Integer run() {
// - Linux and Mac OS X: 128
int somaxconn = PlatformDependent.isWindows() ? 200 : 128;
File file = new File("/proc/sys/net/core/somaxconn");
- if (file.exists()) {
- BufferedReader in = null;
- try {
+ BufferedReader in = null;
+ try {
+ // file.exists() may throw a SecurityException if a SecurityManager is used, so execute it in the
+ // try / catch block.
+ // See https://github.com/netty/netty/issues/4936
+ if (file.exists()) {
in = new BufferedReader(new FileReader(file));
somaxconn = Integer.parseInt(in.readLine());
if (logger.isDebugEnabled()) {
logger.debug("{}: {}", file, somaxconn);
}
- } catch (Exception e) {
- logger.debug("Failed to get SOMAXCONN from: {}", file, e);
- } finally {
- if (in != null) {
- try {
- in.close();
- } catch (Exception e) {
- // Ignored.
- }
+ } else {
+ if (logger.isDebugEnabled()) {
+ logger.debug("{}: {} (non-existent)", file, somaxconn);
}
}
- } else {
- if (logger.isDebugEnabled()) {
- logger.debug("{}: {} (non-existent)", file, somaxconn);
+ } catch (Exception e) {
+ logger.debug("Failed to get SOMAXCONN from: {}", file, e);
+ } finally {
+ if (in != null) {
+ try {
+ in.close();
+ } catch (Exception e) {
+ // Ignored.
+ }
}
}
return somaxconn;
| null | train | train | 2016-03-14T08:57:46 | 2016-03-04T23:34:08Z | ejona86 | val |
netty/netty/4182_4986 | netty/netty | netty/netty/4182 | netty/netty/4986 | [
"timestamp(timedelta=9.0, similarity=0.9107598067211391)"
] | c3c1b4a6d2bc66b7ae01a1731656d4bd6dc915b1 | 8d53638864c7f5d156e34e73ea672ff503055ac0 | [
"@longkerdandy we love contributions... Maybe you could submit a PR with a fix ?\n",
"Fixed by https://github.com/netty/netty/pull/4986\n"
] | [
"Only read this flag if the version is 3.1.1\n"
] | 2016-03-16T05:30:22Z | [
"improvement",
"feature"
] | netty-codec MqttDecoder should validate CONNECT reserved flag in variable header | The MQTT 3.1.1 Protocol Specification says:
> The Server MUST validate that the reserved flag in the CONNECT Control Packet is set to zero and disconnect the Client if it is not zero.

MqttDecoder should validate this and throw an exception if the flag is not zero.
| [
"codec-mqtt/src/main/java/io/netty/handler/codec/mqtt/MqttDecoder.java"
] | [
"codec-mqtt/src/main/java/io/netty/handler/codec/mqtt/MqttDecoder.java"
] | [
"codec-mqtt/src/test/java/io/netty/handler/codec/mqtt/MqttCodecTest.java"
] | diff --git a/codec-mqtt/src/main/java/io/netty/handler/codec/mqtt/MqttDecoder.java b/codec-mqtt/src/main/java/io/netty/handler/codec/mqtt/MqttDecoder.java
index 70f0a41e7ba..7103bed6416 100644
--- a/codec-mqtt/src/main/java/io/netty/handler/codec/mqtt/MqttDecoder.java
+++ b/codec-mqtt/src/main/java/io/netty/handler/codec/mqtt/MqttDecoder.java
@@ -221,6 +221,15 @@ private static Result<MqttConnectVariableHeader> decodeConnectionVariableHeader(
final int willQos = (b1 & 0x18) >> 3;
final boolean willFlag = (b1 & 0x04) == 0x04;
final boolean cleanSession = (b1 & 0x02) == 0x02;
+ if (mqttVersion == MqttVersion.MQTT_3_1_1) {
+ final boolean zeroReservedFlag = (b1 & 0x01) == 0x0;
+ if (!zeroReservedFlag) {
+ // MQTT v3.1.1: The Server MUST validate that the reserved flag in the CONNECT Control Packet is
+ // set to zero and disconnect the Client if it is not zero.
+ // See http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.html#_Toc385349230
+ throw new DecoderException("non-zero reserved flag");
+ }
+ }
final MqttConnectVariableHeader mqttConnectVariableHeader = new MqttConnectVariableHeader(
mqttVersion.protocolName(),
| diff --git a/codec-mqtt/src/test/java/io/netty/handler/codec/mqtt/MqttCodecTest.java b/codec-mqtt/src/test/java/io/netty/handler/codec/mqtt/MqttCodecTest.java
index 705d071bfd8..5068a941390 100644
--- a/codec-mqtt/src/test/java/io/netty/handler/codec/mqtt/MqttCodecTest.java
+++ b/codec-mqtt/src/test/java/io/netty/handler/codec/mqtt/MqttCodecTest.java
@@ -21,6 +21,7 @@
import io.netty.buffer.UnpooledByteBufAllocator;
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
+import io.netty.handler.codec.DecoderException;
import io.netty.util.CharsetUtil;
import org.easymock.Mock;
import org.junit.Before;
@@ -96,6 +97,28 @@ public void testConnectMessageForMqtt311() throws Exception {
validateConnectPayload(message.payload(), decodedMessage.payload());
}
+ @Test
+ public void testConnectMessageWithNonZeroReservedFlagForMqtt311() throws Exception {
+ final MqttConnectMessage message = createConnectMessage(MqttVersion.MQTT_3_1_1);
+ ByteBuf byteBuf = MqttEncoder.doEncode(ALLOCATOR, message);
+ try {
+ // Set the reserved flag in the CONNECT Packet to 1
+ byteBuf.setByte(9, byteBuf.getByte(9) | 0x1);
+ final List<Object> out = new LinkedList<Object>();
+ mqttDecoder.decode(ctx, byteBuf, out);
+
+ assertEquals("Expected one object bout got " + out.size(), 1, out.size());
+
+ final MqttMessage decodedMessage = (MqttMessage) out.get(0);
+ assertTrue(decodedMessage.decoderResult().isFailure());
+ Throwable cause = decodedMessage.decoderResult().cause();
+ assertTrue(cause instanceof DecoderException);
+ assertEquals("non-zero reserved flag", cause.getMessage());
+ } finally {
+ byteBuf.release();
+ }
+ }
+
@Test
public void testConnAckMessage() throws Exception {
final MqttConnAckMessage message = createConnAckMessage();
| val | train | 2016-03-16T11:55:19 | 2015-09-02T07:36:13Z | longkerdandy | val |
netty/netty/4972_4989 | netty/netty | netty/netty/4972 | netty/netty/4989 | [
"timestamp(timedelta=19.0, similarity=0.9087098129957565)"
] | 83c349ffa94d3992c4ee511d3625afc0c97c12bb | 254e1006b586a3b70a66139b083beb71cfb3cf9f | [
"@ejona86 agree... let me take care of it.\n",
"Fixed by https://github.com/netty/netty/pull/4989\n"
] | [] | 2016-03-16T07:49:08Z | [
"defect"
] | Remove misleading argument from HttpServerUpgradeHandler.UpgradeCodec.upgradeTo | `upgradeTo()` is currently passed `upgradeResponse`, but it doesn't appear like it can do anything useful with it since the response has already been sent. In addition, the [`upgradeTo()` documentation](https://github.com/netty/netty/blob/43ebbc3fa065155fa67732b0cbd7c12843b0f3f7/codec-http/src/main/java/io/netty/handler/codec/http/HttpServerUpgradeHandler.java#L75) is sort of misleading as it seems to imply that it should be sent and may be modified before sending.
| [
"codec-http/src/main/java/io/netty/handler/codec/http/HttpServerUpgradeHandler.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ServerUpgradeCodec.java"
] | [
"codec-http/src/main/java/io/netty/handler/codec/http/HttpServerUpgradeHandler.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ServerUpgradeCodec.java"
] | [] | diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/HttpServerUpgradeHandler.java b/codec-http/src/main/java/io/netty/handler/codec/http/HttpServerUpgradeHandler.java
index 691d37e7675..1c9f8cda850 100644
--- a/codec-http/src/main/java/io/netty/handler/codec/http/HttpServerUpgradeHandler.java
+++ b/codec-http/src/main/java/io/netty/handler/codec/http/HttpServerUpgradeHandler.java
@@ -70,14 +70,9 @@ void prepareUpgradeResponse(ChannelHandlerContext ctx, FullHttpRequest upgradeRe
* adding all handlers required for the new protocol.
*
* @param ctx the context for the current handler.
- * @param upgradeRequest the request that triggered the upgrade to this protocol. The
- * upgraded protocol is responsible for sending the response.
- * @param upgradeResponse a 101 Switching Protocols response that is populated with the
- * {@link HttpHeaderNames#CONNECTION} and {@link HttpHeaderNames#UPGRADE} headers.
- * The protocol is required to send this before sending any other frames back to the client.
- * The headers may be augmented as necessary by the protocol before sending.
+ * @param upgradeRequest the request that triggered the upgrade to this protocol.
*/
- void upgradeTo(ChannelHandlerContext ctx, FullHttpRequest upgradeRequest, FullHttpResponse upgradeResponse);
+ void upgradeTo(ChannelHandlerContext ctx, FullHttpRequest upgradeRequest);
}
/**
@@ -320,7 +315,7 @@ public void operationComplete(ChannelFuture future) throws Exception {
if (future.isSuccess()) {
// Perform the upgrade to the new protocol.
sourceCodec.upgradeFrom(ctx);
- finalUpgradeCodec.upgradeTo(ctx, request, upgradeResponse);
+ finalUpgradeCodec.upgradeTo(ctx, request);
// Notify that the upgrade has occurred. Retain the event to offset
// the release() in the finally block.
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ServerUpgradeCodec.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ServerUpgradeCodec.java
index fa00076ad71..3382fc29385 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ServerUpgradeCodec.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ServerUpgradeCodec.java
@@ -98,8 +98,7 @@ public void prepareUpgradeResponse(ChannelHandlerContext ctx, FullHttpRequest up
}
@Override
- public void upgradeTo(final ChannelHandlerContext ctx, FullHttpRequest upgradeRequest,
- FullHttpResponse upgradeResponse) {
+ public void upgradeTo(final ChannelHandlerContext ctx, FullHttpRequest upgradeRequest) {
// Add the HTTP/2 connection handler to the pipeline immediately following the current handler.
ctx.pipeline().addAfter(ctx.name(), handlerName, connectionHandler);
}
| null | train | train | 2016-03-15T16:02:33 | 2016-03-13T02:36:15Z | ejona86 | val |
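The diff above shows why the `upgradeResponse` parameter was misleading: `upgradeTo` only runs inside the write future's `operationComplete`, i.e. after the 101 response has already been flushed, so mutating the response there can have no effect. A minimal ordering sketch (names like `writeAndFlush` and `wire` are stand-ins invented for this demo, not Netty APIs):

```java
import java.util.ArrayList;
import java.util.List;

public final class UpgradeOrderingDemo {
    // What actually went out "on the wire" (toy model of the outbound buffer).
    static final List<String> wire = new ArrayList<>();

    static void writeAndFlush(StringBuilder response, Runnable listener) {
        wire.add(response.toString()); // the response is serialized here...
        listener.run();                // ...and only then is the listener (upgradeTo) invoked
    }

    public static void main(String[] args) {
        StringBuilder upgradeResponse = new StringBuilder("HTTP/1.1 101 Switching Protocols");
        writeAndFlush(upgradeResponse, () -> {
            // By the time upgradeTo-style code runs, the response is already sent,
            // so appending a header here is silently lost:
            upgradeResponse.append("\r\nX-Too-Late: true");
        });
        if (wire.get(0).contains("X-Too-Late")) {
            throw new AssertionError("header should not have reached the wire");
        }
        System.out.println("sent: " + wire.get(0));
    }
}
```

Dropping the parameter, as the PR does, makes that ordering impossible to misuse.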
netty/netty/4971_4990 | netty/netty | netty/netty/4971 | netty/netty/4990 | [
"timestamp(timedelta=26.0, similarity=0.9355626188535049)"
] | ed9d6c79bca67c181b1325a5059af47d40952a01 | 02c5f00b56cdd6f5c01ca0a7fc15ad531eb6c518 | [
"Let me take care.\n",
"Fixed\n"
] | [
"not anything I changed so I would just keep it.\n",
"How about:\n\n```\nPerforms the preparatory steps required for performing a protocol update. This method returns a\nboolean value to proceed or abort the upgrade in progress. If {@code false} is returned, the\nupgrade is aborted and the {@code upgradeRequest} ... | 2016-03-16T07:50:36Z | [
"defect"
] | Change HttpServerUpgradeHandler.UpgradeCodec to allow aborting upgrade | I noticed that `Http2ServerUpgradeCodec.prepareUpgradeResponse()` [doesn't quite have the right behavior](https://github.com/netty/netty/blob/43ebbc3fa065155fa67732b0cbd7c12843b0f3f7/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ServerUpgradeCodec.java#L95), since failures in the UPGRADE protocol generally abort the upgrade instead of responding with an error. Primarily:
> A server MUST NOT upgrade the connection to HTTP/2 if this header field is not present or if more than one is present.
That said, the spec does not specify what should happen if the value of `HTTP2-Settings` is invalid (either invalid base64 or invalid SETTINGS payload).
To fix this behavior seems like it would be necessary to change the UpgradeCodec interface in codec-http. `prepareUpgradeResponse()` could return a `boolean` to indicate whether the upgrade is successful. If `false`, `HttpServerUpgradeHandler.upgrade()` would return false and the connection would continue using HTTP/1.
| [
"codec-http/src/main/java/io/netty/handler/codec/http/HttpServerUpgradeHandler.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ServerUpgradeCodec.java"
] | [
"codec-http/src/main/java/io/netty/handler/codec/http/HttpServerUpgradeHandler.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ServerUpgradeCodec.java"
] | [] | diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/HttpServerUpgradeHandler.java b/codec-http/src/main/java/io/netty/handler/codec/http/HttpServerUpgradeHandler.java
index 691d37e7675..47670bf14eb 100644
--- a/codec-http/src/main/java/io/netty/handler/codec/http/HttpServerUpgradeHandler.java
+++ b/codec-http/src/main/java/io/netty/handler/codec/http/HttpServerUpgradeHandler.java
@@ -60,10 +60,15 @@ public interface UpgradeCodec {
Collection<CharSequence> requiredUpgradeHeaders();
/**
- * Adds any headers to the 101 Switching protocols response that are appropriate for this protocol.
+ * Prepares the {@code upgradeHeaders} for a protocol update based upon the contents of {@code upgradeRequest}.
+ * This method returns a boolean value to proceed or abort the upgrade in progress. If {@code false} is
+ * returned, the upgrade is aborted and the {@code upgradeRequest} will be passed through the inbound pipeline
+ * as if no upgrade was performed. If {@code true} is returned, the upgrade will proceed to the next
+ * step which invokes {@link #upgradeTo}. When returning {@code true}, you can add headers to
+ * the {@code upgradeHeaders} so that they are added to the 101 Switching protocols response.
*/
- void prepareUpgradeResponse(ChannelHandlerContext ctx, FullHttpRequest upgradeRequest,
- FullHttpResponse upgradeResponse);
+ boolean prepareUpgradeResponse(ChannelHandlerContext ctx, FullHttpRequest upgradeRequest,
+ HttpHeaders upgradeHeaders);
/**
* Performs an HTTP protocol upgrade from the source codec. This method is responsible for
@@ -103,7 +108,7 @@ public static final class UpgradeEvent implements ReferenceCounted {
private final CharSequence protocol;
private final FullHttpRequest upgradeRequest;
- private UpgradeEvent(CharSequence protocol, FullHttpRequest upgradeRequest) {
+ UpgradeEvent(CharSequence protocol, FullHttpRequest upgradeRequest) {
this.protocol = protocol;
this.upgradeRequest = upgradeRequest;
}
@@ -304,13 +309,15 @@ private boolean upgrade(final ChannelHandlerContext ctx, final FullHttpRequest r
}
}
- // Create the user event to be fired once the upgrade completes.
- final UpgradeEvent event = new UpgradeEvent(upgradeProtocol, request);
-
// Prepare and send the upgrade response. Wait for this write to complete before upgrading,
// since we need the old codec in-place to properly encode the response.
final FullHttpResponse upgradeResponse = createUpgradeResponse(upgradeProtocol);
- upgradeCodec.prepareUpgradeResponse(ctx, request, upgradeResponse);
+ if (!upgradeCodec.prepareUpgradeResponse(ctx, request, upgradeResponse.headers())) {
+ return false;
+ }
+
+ // Create the user event to be fired once the upgrade completes.
+ final UpgradeEvent event = new UpgradeEvent(upgradeProtocol, request);
final UpgradeCodec finalUpgradeCodec = upgradeCodec;
ctx.writeAndFlush(upgradeResponse).addListener(new ChannelFutureListener() {
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ServerUpgradeCodec.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ServerUpgradeCodec.java
index fa00076ad71..d2d36f13473 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ServerUpgradeCodec.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ServerUpgradeCodec.java
@@ -20,8 +20,11 @@
import io.netty.handler.codec.base64.Base64;
import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.handler.codec.http.FullHttpResponse;
+import io.netty.handler.codec.http.HttpHeaders;
import io.netty.handler.codec.http.HttpServerUpgradeHandler;
import io.netty.util.CharsetUtil;
+import io.netty.util.internal.logging.InternalLogger;
+import io.netty.util.internal.logging.InternalLoggerFactory;
import java.nio.CharBuffer;
import java.util.Collection;
@@ -29,7 +32,6 @@
import java.util.List;
import static io.netty.handler.codec.base64.Base64Dialect.URL_SAFE;
-import static io.netty.handler.codec.http.HttpResponseStatus.BAD_REQUEST;
import static io.netty.handler.codec.http2.Http2CodecUtil.FRAME_HEADER_LENGTH;
import static io.netty.handler.codec.http2.Http2CodecUtil.HTTP_UPGRADE_SETTINGS_HEADER;
import static io.netty.handler.codec.http2.Http2CodecUtil.writeFrameHeader;
@@ -41,6 +43,7 @@
*/
public class Http2ServerUpgradeCodec implements HttpServerUpgradeHandler.UpgradeCodec {
+ private static final InternalLogger logger = InternalLoggerFactory.getInstance(Http2ServerUpgradeCodec.class);
private static final List<CharSequence> REQUIRED_UPGRADE_HEADERS =
Collections.singletonList(HTTP_UPGRADE_SETTINGS_HEADER);
@@ -77,8 +80,8 @@ public Collection<CharSequence> requiredUpgradeHeaders() {
}
@Override
- public void prepareUpgradeResponse(ChannelHandlerContext ctx, FullHttpRequest upgradeRequest,
- FullHttpResponse upgradeResponse) {
+ public boolean prepareUpgradeResponse(ChannelHandlerContext ctx, FullHttpRequest upgradeRequest,
+ HttpHeaders headers) {
try {
// Decode the HTTP2-Settings header and set the settings on the handler to make
// sure everything is fine with the request.
@@ -89,11 +92,11 @@ public void prepareUpgradeResponse(ChannelHandlerContext ctx, FullHttpRequest up
}
Http2Settings settings = decodeSettingsHeader(ctx, upgradeHeaders.get(0));
connectionHandler.onHttpServerUpgrade(settings);
- // Everything looks good, no need to modify the response.
- } catch (Throwable e) {
- // Send a failed response back to the client.
- upgradeResponse.setStatus(BAD_REQUEST);
- upgradeResponse.headers().clear();
+ // Everything looks good.
+ return true;
+ } catch (Throwable cause) {
+ logger.info("Error during upgrade to HTTP/2", cause);
+ return false;
}
}
| null | train | train | 2016-03-17T10:50:07 | 2016-03-13T02:36:04Z | ejona86 | val |
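The core rule the new `boolean prepareUpgradeResponse` contract enforces is the one quoted from the spec: a server must not upgrade if `HTTP2-Settings` is absent or appears more than once; returning `false` aborts the upgrade and the connection stays on HTTP/1.1. A hedged standalone sketch of just that header-count check (plain Java, not the actual `Http2ServerUpgradeCodec`; the example base64 value is illustrative only):

```java
import java.util.Arrays;
import java.util.List;

public final class Http2SettingsHeaderCheckDemo {
    /**
     * RFC 7540, section 3.2.1: a server MUST NOT upgrade if the HTTP2-Settings
     * header field is not present or if more than one is present. Returning
     * false here mirrors the new prepareUpgradeResponse contract: abort the
     * upgrade and keep talking HTTP/1.1.
     */
    public static boolean mayUpgrade(List<String> http2SettingsValues) {
        return http2SettingsValues != null && http2SettingsValues.size() == 1;
    }

    public static void main(String[] args) {
        if (!mayUpgrade(Arrays.asList("AAMAAABk"))) { // illustrative settings payload
            throw new AssertionError("single header should allow the upgrade");
        }
        if (mayUpgrade(Arrays.<String>asList()) || mayUpgrade(Arrays.asList("a", "b"))) {
            throw new AssertionError("zero or duplicate headers must abort the upgrade");
        }
        System.out.println("header-count check ok");
    }
}
```

In the real codec a decode failure of the settings value is also caught and logged, with the same `return false` abort path.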
netty/netty/4994_4995 | netty/netty | netty/netty/4994 | netty/netty/4995 | [
"timestamp(timedelta=18.0, similarity=0.8741306225759525)"
] | ed9d6c79bca67c181b1325a5059af47d40952a01 | d61e4b1e6b92c5fb9fb110a520c4ba0b0339985a | [
"Fixed by https://github.com/netty/netty/pull/4995\n"
] | [
"actually I think the remove should still happen after the `initChannel(...)`\n",
"ensure you close the server and client channel.\n",
"ensure you close the server and client channel.\n",
"Also you can share most of the test-code and just use a different ChannelInitializer for each of them.\n"
] | 2016-03-16T16:26:20Z | [
"defect"
] | ChannelInitializer prevents propagation of channelRegistered event | If I understand Netty's event handling correctly, in general it doesn't (shouldn't?) matter whether a channel handler was added to the pipeline via `addFirst` or `addLast`, the rules of event propagation are the same. This is not the case though when the pipeline is modified within `ChannelInitializer::initChannel`. `ChannelInitializer` is implemented as a handler, it's part of the pipeline when `initChannel` is invoked and it removes itself afterwards. With the current implementation of `ChannelInitializer::channelRegistered`, handlers added via `addFirst` do not receive the channelRegistered event.
Tested with current HEAD of Netty's v4.1 branch.
% java -version
java version "1.8.0_51"
Java(TM) SE Runtime Environment (build 1.8.0_51-b16)
Java HotSpot(TM) 64-Bit Server VM (build 25.51-b03, mixed mode)
% uname -a
Linux marvin 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt20-1+deb8u3 (2016-01-17) x86_64 GNU/Linux
| [
"transport/src/main/java/io/netty/channel/ChannelInitializer.java"
] | [
"transport/src/main/java/io/netty/channel/ChannelInitializer.java"
] | [
"transport/src/test/java/io/netty/channel/ChannelInitializerTest.java"
] | diff --git a/transport/src/main/java/io/netty/channel/ChannelInitializer.java b/transport/src/main/java/io/netty/channel/ChannelInitializer.java
index 5db2c291805..a23b0712271 100644
--- a/transport/src/main/java/io/netty/channel/ChannelInitializer.java
+++ b/transport/src/main/java/io/netty/channel/ChannelInitializer.java
@@ -67,7 +67,7 @@ public abstract class ChannelInitializer<C extends Channel> extends ChannelInbou
public final void channelRegistered(ChannelHandlerContext ctx) throws Exception {
initChannel((C) ctx.channel());
ctx.pipeline().remove(this);
- ctx.fireChannelRegistered();
+ ctx.pipeline().fireChannelRegistered();
}
/**
| diff --git a/transport/src/test/java/io/netty/channel/ChannelInitializerTest.java b/transport/src/test/java/io/netty/channel/ChannelInitializerTest.java
new file mode 100644
index 00000000000..03d6f014e82
--- /dev/null
+++ b/transport/src/test/java/io/netty/channel/ChannelInitializerTest.java
@@ -0,0 +1,107 @@
+/*
+ * Copyright 2016 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License,
+ * version 2.0 (the "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+package io.netty.channel;
+
+import io.netty.bootstrap.Bootstrap;
+import io.netty.bootstrap.ServerBootstrap;
+import io.netty.channel.local.LocalAddress;
+import io.netty.channel.local.LocalChannel;
+import io.netty.channel.local.LocalServerChannel;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicInteger;
+
+import static org.junit.Assert.assertEquals;
+
+public class ChannelInitializerTest {
+ private static final int TIMEOUT_MILLIS = 1000;
+ private static final LocalAddress SERVER_ADDRESS = new LocalAddress("addr");
+ private EventLoopGroup group;
+ private ServerBootstrap server;
+ private Bootstrap client;
+ private InspectableHandler testHandler;
+
+ @Before
+ public void setUp() {
+ group = new DefaultEventLoopGroup(1);
+ server = new ServerBootstrap()
+ .group(group)
+ .channel(LocalServerChannel.class)
+ .localAddress(SERVER_ADDRESS);
+ client = new Bootstrap()
+ .group(group)
+ .channel(LocalChannel.class)
+ .handler(new ChannelInboundHandlerAdapter());
+ testHandler = new InspectableHandler();
+ }
+
+ @After
+ public void tearDown() {
+ group.shutdownGracefully(0, TIMEOUT_MILLIS, TimeUnit.MILLISECONDS).syncUninterruptibly();
+ }
+
+ @Test(timeout = TIMEOUT_MILLIS)
+ public void firstHandlerInPipelineShouldReceiveChannelRegisteredEvent() {
+ testChannelRegisteredEventPropagation(new ChannelInitializer<LocalChannel>() {
+ @Override
+ public void initChannel(LocalChannel channel) {
+ channel.pipeline().addFirst(testHandler);
+ }
+ });
+ }
+
+ @Test(timeout = TIMEOUT_MILLIS)
+ public void lastHandlerInPipelineShouldReceiveChannelRegisteredEvent() {
+ testChannelRegisteredEventPropagation(new ChannelInitializer<LocalChannel>() {
+ @Override
+ public void initChannel(LocalChannel channel) {
+ channel.pipeline().addLast(testHandler);
+ }
+ });
+ }
+
+ private void testChannelRegisteredEventPropagation(ChannelInitializer<LocalChannel> init) {
+ Channel clientChannel = null, serverChannel = null;
+ try {
+ server.childHandler(init);
+ serverChannel = server.bind().syncUninterruptibly().channel();
+ clientChannel = client.connect(SERVER_ADDRESS).syncUninterruptibly().channel();
+ assertEquals(1, testHandler.channelRegisteredCount.get());
+ } finally {
+ closeChannel(clientChannel);
+ closeChannel(serverChannel);
+ }
+ }
+
+ private static void closeChannel(Channel c) {
+ if (c != null) {
+ c.close().syncUninterruptibly();
+ }
+ }
+
+ private static final class InspectableHandler extends ChannelDuplexHandler {
+ final AtomicInteger channelRegisteredCount = new AtomicInteger(0);
+
+ @Override
+ public void channelRegistered(ChannelHandlerContext ctx) {
+ channelRegisteredCount.incrementAndGet();
+ ctx.fireChannelRegistered();
+ }
+ }
+}
| train | train | 2016-03-17T10:50:07 | 2016-03-16T16:19:29Z | tbcs | val |
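The one-line fix above (`ctx.fireChannelRegistered()` → `ctx.pipeline().fireChannelRegistered()`) changes where the event restarts: from the node after the initializer versus from the pipeline head. A simplified toy model (a pipeline as a plain list; invented names, and it ignores that the real initializer also removes itself first) shows why handlers added via `addFirst` were skipped:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public final class FirePositionDemo {
    // Notify every handler from startIndex onward; returns who saw the event.
    static List<String> notifyFrom(List<String> pipeline, int startIndex) {
        List<String> notified = new ArrayList<>();
        for (int i = startIndex; i < pipeline.size(); i++) {
            notified.add(pipeline.get(i));
        }
        return notified;
    }

    public static void main(String[] args) {
        // addFirst() placed "first" before the initializer in the pipeline.
        List<String> pipeline = new ArrayList<>(Arrays.asList("first", "initializer", "last"));
        int initializerIndex = 1;

        // ctx.fireChannelRegistered(): continues from the node after the initializer.
        List<String> fromCtx = notifyFrom(pipeline, initializerIndex + 1);
        // ctx.pipeline().fireChannelRegistered(): restarts from the head.
        List<String> fromHead = notifyFrom(pipeline, 0);

        if (fromCtx.contains("first")) {
            throw new AssertionError("firing from ctx must skip earlier handlers");
        }
        if (!fromHead.contains("first")) {
            throw new AssertionError("firing from the head must reach addFirst handlers");
        }
        System.out.println("ctx -> " + fromCtx + ", head -> " + fromHead);
    }
}
```

Real Netty pipelines are doubly linked lists of contexts, but the skip-versus-restart behavior is the same, which is exactly what the added `ChannelInitializerTest` verifies for both `addFirst` and `addLast`.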
netty/netty/4418_5012 | netty/netty | netty/netty/4418 | netty/netty/5012 | [
"timestamp(timedelta=29.0, similarity=0.9550468039623317)"
] | 9ebb4b7164b39a6ef9719d6ac15624ba80597191 | e18c05cdb8f8df7fb7e5dd1c7b464a53e7fe1d1d | [
"tip for you: current slow-path calls 2x Thread.currentThread(). have fun\n",
"@normanmaurer - Any updates on this issue? Didn't have have a PR related to this?\n",
"Nope lost track on this one. Should pock it up again\n\n> Am 04.02.2016 um 12:57 schrieb Scott Mitchell notifications@github.com:\n> \n> @normanma... | [
"Modified to put the same data to jdkThreadLocals and fastThreadLocals.\n",
"`final` is the key of the improvement.\n",
"@windie thanks for highlight this!\n",
"+1\n",
"@trustin can you comment why you not did this before when implement this ? \n",
"@normanmaurer It's to prevent `slowThreadLocalMap` from ... | 2016-03-22T03:25:42Z | [
"improvement"
] | Improve performance of FastThreadLocal slow path | Issues #4402 and #4417 gave some indication that there might be room for performance improvement on the slow path of `FastThreadLocal`.
The performance on the slow path might be critical if a user of Netty plugs in an `Executor` that does not use `FastThreadLocalThreads`.
| [
"common/src/main/java/io/netty/util/internal/InternalThreadLocalMap.java",
"common/src/main/java/io/netty/util/internal/UnpaddedInternalThreadLocalMap.java",
"microbench/src/main/java/io/netty/microbench/concurrent/FastThreadLocalBenchmark.java",
"microbench/src/main/java/io/netty/microbench/util/AbstractMicr... | [
"common/src/main/java/io/netty/util/internal/InternalThreadLocalMap.java",
"common/src/main/java/io/netty/util/internal/UnpaddedInternalThreadLocalMap.java",
"microbench/src/main/java/io/netty/microbench/concurrent/FastThreadLocalFastPathBenchmark.java",
"microbench/src/main/java/io/netty/microbench/concurren... | [] | diff --git a/common/src/main/java/io/netty/util/internal/InternalThreadLocalMap.java b/common/src/main/java/io/netty/util/internal/InternalThreadLocalMap.java
index dbd4b3d6bdf..25bf5375563 100644
--- a/common/src/main/java/io/netty/util/internal/InternalThreadLocalMap.java
+++ b/common/src/main/java/io/netty/util/internal/InternalThreadLocalMap.java
@@ -41,18 +41,10 @@ public final class InternalThreadLocalMap extends UnpaddedInternalThreadLocalMap
public static InternalThreadLocalMap getIfSet() {
Thread thread = Thread.currentThread();
- InternalThreadLocalMap threadLocalMap;
if (thread instanceof FastThreadLocalThread) {
- threadLocalMap = ((FastThreadLocalThread) thread).threadLocalMap();
- } else {
- ThreadLocal<InternalThreadLocalMap> slowThreadLocalMap = UnpaddedInternalThreadLocalMap.slowThreadLocalMap;
- if (slowThreadLocalMap == null) {
- threadLocalMap = null;
- } else {
- threadLocalMap = slowThreadLocalMap.get();
- }
+ return ((FastThreadLocalThread) thread).threadLocalMap();
}
- return threadLocalMap;
+ return slowThreadLocalMap.get();
}
public static InternalThreadLocalMap get() {
@@ -74,11 +66,6 @@ private static InternalThreadLocalMap fastGet(FastThreadLocalThread thread) {
private static InternalThreadLocalMap slowGet() {
ThreadLocal<InternalThreadLocalMap> slowThreadLocalMap = UnpaddedInternalThreadLocalMap.slowThreadLocalMap;
- if (slowThreadLocalMap == null) {
- UnpaddedInternalThreadLocalMap.slowThreadLocalMap =
- slowThreadLocalMap = new ThreadLocal<InternalThreadLocalMap>();
- }
-
InternalThreadLocalMap ret = slowThreadLocalMap.get();
if (ret == null) {
ret = new InternalThreadLocalMap();
@@ -92,15 +79,12 @@ public static void remove() {
if (thread instanceof FastThreadLocalThread) {
((FastThreadLocalThread) thread).setThreadLocalMap(null);
} else {
- ThreadLocal<InternalThreadLocalMap> slowThreadLocalMap = UnpaddedInternalThreadLocalMap.slowThreadLocalMap;
- if (slowThreadLocalMap != null) {
- slowThreadLocalMap.remove();
- }
+ slowThreadLocalMap.remove();
}
}
public static void destroy() {
- slowThreadLocalMap = null;
+ slowThreadLocalMap.remove();
}
public static int nextVariableIndex() {
diff --git a/common/src/main/java/io/netty/util/internal/UnpaddedInternalThreadLocalMap.java b/common/src/main/java/io/netty/util/internal/UnpaddedInternalThreadLocalMap.java
index c75ac40ab53..2d0bb6cd156 100644
--- a/common/src/main/java/io/netty/util/internal/UnpaddedInternalThreadLocalMap.java
+++ b/common/src/main/java/io/netty/util/internal/UnpaddedInternalThreadLocalMap.java
@@ -32,7 +32,7 @@
*/
class UnpaddedInternalThreadLocalMap {
- static ThreadLocal<InternalThreadLocalMap> slowThreadLocalMap;
+ static final ThreadLocal<InternalThreadLocalMap> slowThreadLocalMap = new ThreadLocal<InternalThreadLocalMap>();
static final AtomicInteger nextIndex = new AtomicInteger();
/** Used by {@link FastThreadLocal} */
diff --git a/microbench/src/main/java/io/netty/microbench/concurrent/FastThreadLocalBenchmark.java b/microbench/src/main/java/io/netty/microbench/concurrent/FastThreadLocalFastPathBenchmark.java
similarity index 87%
rename from microbench/src/main/java/io/netty/microbench/concurrent/FastThreadLocalBenchmark.java
rename to microbench/src/main/java/io/netty/microbench/concurrent/FastThreadLocalFastPathBenchmark.java
index 2993a08be5f..c0073fb655c 100644
--- a/microbench/src/main/java/io/netty/microbench/concurrent/FastThreadLocalBenchmark.java
+++ b/microbench/src/main/java/io/netty/microbench/concurrent/FastThreadLocalFastPathBenchmark.java
@@ -24,11 +24,11 @@
import java.util.Random;
/**
- * This class benchmarks different allocators with different allocation sizes.
+ * This class benchmarks the fast path of FastThreadLocal and the JDK ThreadLocal.
*/
@Threads(4)
@Measurement(iterations = 10, batchSize = 100)
-public class FastThreadLocalBenchmark extends AbstractMicrobenchmark {
+public class FastThreadLocalFastPathBenchmark extends AbstractMicrobenchmark {
private static final Random rand = new Random();
@@ -39,19 +39,17 @@ public class FastThreadLocalBenchmark extends AbstractMicrobenchmark {
static {
for (int i = 0; i < jdkThreadLocals.length; i ++) {
+ final int num = rand.nextInt();
jdkThreadLocals[i] = new ThreadLocal<Integer>() {
@Override
protected Integer initialValue() {
- return rand.nextInt();
+ return num;
}
};
- }
-
- for (int i = 0; i < fastThreadLocals.length; i ++) {
fastThreadLocals[i] = new FastThreadLocal<Integer>() {
@Override
protected Integer initialValue() {
- return rand.nextInt();
+ return num;
}
};
}
diff --git a/microbench/src/main/java/io/netty/microbench/concurrent/FastThreadLocalSlowPathBenchmark.java b/microbench/src/main/java/io/netty/microbench/concurrent/FastThreadLocalSlowPathBenchmark.java
new file mode 100644
index 00000000000..20d414aa1f8
--- /dev/null
+++ b/microbench/src/main/java/io/netty/microbench/concurrent/FastThreadLocalSlowPathBenchmark.java
@@ -0,0 +1,79 @@
+/*
+ * Copyright 2016 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License,
+ * version 2.0 (the "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+package io.netty.microbench.concurrent;
+
+import io.netty.microbench.util.AbstractMicrobenchmark;
+import io.netty.util.concurrent.FastThreadLocal;
+import org.openjdk.jmh.annotations.Benchmark;
+import org.openjdk.jmh.annotations.Measurement;
+import org.openjdk.jmh.annotations.Threads;
+
+import java.util.Random;
+
+/**
+ * This class benchmarks the slow path of FastThreadLocal and the JDK ThreadLocal.
+ */
+@Threads(4)
+@Measurement(iterations = 10, batchSize = 100)
+public class FastThreadLocalSlowPathBenchmark extends AbstractMicrobenchmark {
+
+ private static final Random rand = new Random();
+
+ @SuppressWarnings("unchecked")
+ private static final ThreadLocal<Integer>[] jdkThreadLocals = new ThreadLocal[128];
+ @SuppressWarnings("unchecked")
+ private static final FastThreadLocal<Integer>[] fastThreadLocals = new FastThreadLocal[jdkThreadLocals.length];
+
+ static {
+ for (int i = 0; i < jdkThreadLocals.length; i ++) {
+ final int num = rand.nextInt();
+ jdkThreadLocals[i] = new ThreadLocal<Integer>() {
+ @Override
+ protected Integer initialValue() {
+ return num;
+ }
+ };
+ fastThreadLocals[i] = new FastThreadLocal<Integer>() {
+ @Override
+ protected Integer initialValue() {
+ return num;
+ }
+ };
+ }
+ }
+
+ public FastThreadLocalSlowPathBenchmark() {
+ super(false, true);
+ }
+
+ @Benchmark
+ public int jdkThreadLocalGet() {
+ int result = 0;
+ for (ThreadLocal<Integer> i: jdkThreadLocals) {
+ result += i.get();
+ }
+ return result;
+ }
+
+ @Benchmark
+ public int fastThreadLocal() {
+ int result = 0;
+ for (FastThreadLocal<Integer> i: fastThreadLocals) {
+ result += i.get();
+ }
+ return result;
+ }
+}
diff --git a/microbench/src/main/java/io/netty/microbench/util/AbstractMicrobenchmark.java b/microbench/src/main/java/io/netty/microbench/util/AbstractMicrobenchmark.java
index eea5b16e6f3..7c9dcf2d1ed 100644
--- a/microbench/src/main/java/io/netty/microbench/util/AbstractMicrobenchmark.java
+++ b/microbench/src/main/java/io/netty/microbench/util/AbstractMicrobenchmark.java
@@ -32,17 +32,6 @@
public class AbstractMicrobenchmark extends AbstractMicrobenchmarkBase {
protected static final int DEFAULT_FORKS = 2;
- protected static final String[] JVM_ARGS;
-
- static {
- final String[] customArgs = {
- "-Xms768m", "-Xmx768m", "-XX:MaxDirectMemorySize=768m", "-Djmh.executor=CUSTOM",
- "-Djmh.executor.class=io.netty.microbench.util.AbstractMicrobenchmark$HarnessExecutor" };
-
- JVM_ARGS = new String[BASE_JVM_ARGS.length + customArgs.length];
- System.arraycopy(BASE_JVM_ARGS, 0, JVM_ARGS, 0, BASE_JVM_ARGS.length);
- System.arraycopy(customArgs, 0, JVM_ARGS, BASE_JVM_ARGS.length, customArgs.length);
- }
public static final class HarnessExecutor extends ThreadPoolExecutor {
public HarnessExecutor(int maxThreads, String prefix) {
@@ -52,27 +41,36 @@ public HarnessExecutor(int maxThreads, String prefix) {
}
}
- private final boolean disableAssertions;
- private String[] jvmArgsWithNoAssertions;
+ private final String[] jvmArgs;
public AbstractMicrobenchmark() {
- this(false);
+ this(false, false);
}
public AbstractMicrobenchmark(boolean disableAssertions) {
- this.disableAssertions = disableAssertions;
+ this(disableAssertions, false);
}
- @Override
- protected String[] jvmArgs() {
- if (!disableAssertions) {
- return JVM_ARGS;
+ public AbstractMicrobenchmark(boolean disableAssertions, boolean disableHarnessExecutor) {
+ final String[] customArgs;
+ if (disableHarnessExecutor) {
+ customArgs = new String[]{"-Xms768m", "-Xmx768m", "-XX:MaxDirectMemorySize=768m"};
+ } else {
+ customArgs = new String[]{"-Xms768m", "-Xmx768m", "-XX:MaxDirectMemorySize=768m", "-Djmh.executor=CUSTOM",
+ "-Djmh.executor.class=io.netty.microbench.util.AbstractMicrobenchmark$HarnessExecutor"};
}
-
- if (jvmArgsWithNoAssertions == null) {
- jvmArgsWithNoAssertions = removeAssertions(JVM_ARGS);
+ String[] jvmArgs = new String[BASE_JVM_ARGS.length + customArgs.length];
+ System.arraycopy(BASE_JVM_ARGS, 0, jvmArgs, 0, BASE_JVM_ARGS.length);
+ System.arraycopy(customArgs, 0, jvmArgs, BASE_JVM_ARGS.length, customArgs.length);
+ if (disableAssertions) {
+ jvmArgs = removeAssertions(jvmArgs);
}
- return jvmArgsWithNoAssertions;
+ this.jvmArgs = jvmArgs;
+ }
+
+ @Override
+ protected String[] jvmArgs() {
+ return jvmArgs;
}
@Override
| null | train | train | 2016-03-22T21:12:10 | 2015-10-29T20:53:36Z | buchgr | val |
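The key change in the patch above is turning `slowThreadLocalMap` from a lazily assigned mutable static field into a `static final ThreadLocal` created eagerly, which removes the null check (and a second `Thread.currentThread()` call) from the slow path. A hedged stand-in for that pattern (illustrative only; Netty's real map is `InternalThreadLocalMap`, not an `int[]`):

```java
public final class SlowPathHolderDemo {
    // Before: a mutable static field forced a null-check branch on every access
    // (and could be published racily). After: a final, eagerly created
    // ThreadLocal whose reference the JIT can treat as a constant.
    static final ThreadLocal<int[]> SLOW_MAP = new ThreadLocal<int[]>();

    static int[] getOrCreate() {
        int[] map = SLOW_MAP.get();
        if (map == null) {
            map = new int[8];
            SLOW_MAP.set(map);
        }
        return map;
    }

    public static void main(String[] args) {
        int[] first = getOrCreate();
        if (first != getOrCreate()) {
            throw new AssertionError("same thread must observe the same map");
        }
        SLOW_MAP.remove(); // mirrors InternalThreadLocalMap.remove()/destroy()
        if (first == getOrCreate()) {
            throw new AssertionError("remove() must drop the per-thread map");
        }
        System.out.println("slow-path holder ok");
    }
}
```

As the review comments note, `final` is the key of the improvement; the added `FastThreadLocalSlowPathBenchmark` measures exactly this path by disabling the `FastThreadLocalThread`-based harness executor.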
netty/netty/5013_5020 | netty/netty | netty/netty/5013 | netty/netty/5020 | [
"timestamp(timedelta=26.0, similarity=0.9396069210120149)"
] | 9ebb4b7164b39a6ef9719d6ac15624ba80597191 | d1d2431e82a26184d893de72b91ceceb68d63415 | [
"Thanks for reporting. I will take care.\n"
] | [] | 2016-03-23T06:52:47Z | [
"cleanup"
] | Minor typo in DefaultStompFrame.toString() | Hi,
in current Netty 4.1.x, the DefaultStompFrame.toString() method writes a different class name (Default**Full**StompFrame) than the actual one... I think it's a minor typo; however, for new users of the Stomp codec it's confusing when they cannot locate the class at first.
https://github.com/netty/netty/blob/4.1/codec-stomp/src/main/java/io/netty/handler/codec/stomp/DefaultStompFrame.java#L97
Best regards,
Stefan
| [
"codec-stomp/src/main/java/io/netty/handler/codec/stomp/DefaultStompFrame.java"
] | [
"codec-stomp/src/main/java/io/netty/handler/codec/stomp/DefaultStompFrame.java"
] | [] | diff --git a/codec-stomp/src/main/java/io/netty/handler/codec/stomp/DefaultStompFrame.java b/codec-stomp/src/main/java/io/netty/handler/codec/stomp/DefaultStompFrame.java
index f4009608a32..5295cb64157 100644
--- a/codec-stomp/src/main/java/io/netty/handler/codec/stomp/DefaultStompFrame.java
+++ b/codec-stomp/src/main/java/io/netty/handler/codec/stomp/DefaultStompFrame.java
@@ -94,7 +94,7 @@ public boolean release(int decrement) {
@Override
public String toString() {
- return "DefaultFullStompFrame{" +
+ return "DefaultStompFrame{" +
"command=" + command +
", headers=" + headers +
", content=" + content.toString(CharsetUtil.UTF_8) +
| null | train | train | 2016-03-22T21:12:10 | 2016-03-22T09:58:40Z | ghost | val |
netty/netty/5029_5030 | netty/netty | netty/netty/5029 | netty/netty/5030 | [
"timestamp(timedelta=128.0, similarity=0.9100040757407813)"
] | 3d115349b51f86d7bd3506b0c079f8efe903a820 | 74e5d945a9a1db1476e1b4fd9b30a71219b3fd00 | [] | [] | 2016-03-23T19:29:16Z | [
"defect"
] | EpollChannelOption.TCP_QUICKACK is set as an Integer | Should be a Boolean.
Reported here:
https://github.com/netty/netty-tcnative/issues/128
| [
"transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollChannelOption.java"
] | [
"transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollChannelOption.java"
] | [] | diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollChannelOption.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollChannelOption.java
index 2598d53a1c0..261a6f7796e 100644
--- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollChannelOption.java
+++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollChannelOption.java
@@ -35,7 +35,7 @@ public final class EpollChannelOption<T> extends ChannelOption<T> {
public static final ChannelOption<Boolean> IP_FREEBIND = ChannelOption.valueOf("IP_FREEBIND");
public static final ChannelOption<Integer> TCP_FASTOPEN = valueOf(T, "TCP_FASTOPEN");
public static final ChannelOption<Integer> TCP_DEFER_ACCEPT = ChannelOption.valueOf(T, "TCP_DEFER_ACCEPT");
- public static final ChannelOption<Integer> TCP_QUICKACK = ChannelOption.valueOf(T, "TCP_QUICKACK");
+ public static final ChannelOption<Boolean> TCP_QUICKACK = ChannelOption.valueOf(T, "TCP_QUICKACK");
public static final ChannelOption<DomainSocketReadMode> DOMAIN_SOCKET_READ_MODE =
ChannelOption.valueOf(T, "DOMAIN_SOCKET_READ_MODE");
| null | train | train | 2016-03-23T18:06:38 | 2016-03-23T17:52:04Z | normanmaurer | val |
netty/netty/3095_5047 | netty/netty | netty/netty/3095 | netty/netty/5047 | [
"timestamp(timedelta=12.0, similarity=0.8956242991425641)"
] | 5d76daf33b645248a1cde150cbefbd3c24824f78 | cc84e1d3db3ac47d05163de87e4e3e3a4151d678 | [
"@trustin @normanmaurer - Not saying we have the bandwidth for this now, but WDYT?\n",
"@Scottmitch sounds good to me :+1: \n",
"@Scottmitch sounds good to me. I wonder it will change our internal logging API though.\n",
"Yeah we should be careful about this.\n-- \nNorman Maurer\n\nOn 3 Nov 2014 at 09:09:18, ... | [
"package private and final?\n",
"does `core` belong here, or should this just be optional and/or for testing?\n",
"It should be just for testing. Updated.\n",
" Make the \"logger\" logger private static fin... | 2016-03-26T06:26:14Z | [
"improvement"
] | Use Log4j2 for logging | Log4j2 came out a while ago and claims to have significant [performance benefits](http://logging.apache.org/log4j/2.x/manual/async.html#Performance) over SLF4j variants (logback). The new architecture claims to be much more asynchronous in nature and they are using the LMAX Disruptor to reduce contention in a multithreaded environment. They also have an [slf4j adapter](http://logging.apache.org/log4j/2.x/log4j-to-slf4j/index.html) which may ease the initial transition. It may be worth doing some investigation to see how much netty could benefit from this, what netty versions would be eligible for this change, and how big of an effort this would be.
| [
"common/pom.xml",
"pom.xml"
] | [
"common/pom.xml",
"common/src/main/java/io/netty/util/internal/logging/Log4J2Logger.java",
"common/src/main/java/io/netty/util/internal/logging/Log4J2LoggerFactory.java",
"pom.xml"
] | [
"common/src/test/java/io/netty/util/internal/logging/Log4J2LoggerFactoryTest.java",
"common/src/test/java/io/netty/util/internal/logging/Log4J2LoggerTest.java"
] | diff --git a/common/pom.xml b/common/pom.xml
index a9f9fda6186..876faf3e5b1 100644
--- a/common/pom.xml
+++ b/common/pom.xml
@@ -61,6 +61,16 @@
<artifactId>log4j</artifactId>
<optional>true</optional>
</dependency>
+ <dependency>
+ <groupId>org.apache.logging.log4j</groupId>
+ <artifactId>log4j-api</artifactId>
+ <optional>true</optional>
+ </dependency>
+ <dependency>
+ <groupId>org.apache.logging.log4j</groupId>
+ <artifactId>log4j-core</artifactId>
+ <scope>test</scope>
+ </dependency>
</dependencies>
<build>
diff --git a/common/src/main/java/io/netty/util/internal/logging/Log4J2Logger.java b/common/src/main/java/io/netty/util/internal/logging/Log4J2Logger.java
new file mode 100644
index 00000000000..64bdfed2e06
--- /dev/null
+++ b/common/src/main/java/io/netty/util/internal/logging/Log4J2Logger.java
@@ -0,0 +1,180 @@
+/*
+ * Copyright 2016 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License,
+ * version 2.0 (the "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+package io.netty.util.internal.logging;
+
+import org.apache.logging.log4j.Logger;
+
+final class Log4J2Logger extends AbstractInternalLogger {
+
+ private static final long serialVersionUID = 5485418394879791397L;
+
+ private final transient Logger logger;
+
+ Log4J2Logger(Logger logger) {
+ super(logger.getName());
+ this.logger = logger;
+ }
+
+ @Override
+ public boolean isTraceEnabled() {
+ return logger.isTraceEnabled();
+ }
+
+ @Override
+ public void trace(String msg) {
+ logger.trace(msg);
+ }
+
+ @Override
+ public void trace(String format, Object arg) {
+ logger.trace(format, arg);
+ }
+
+ @Override
+ public void trace(String format, Object argA, Object argB) {
+ logger.trace(format, argA, argB);
+ }
+
+ @Override
+ public void trace(String format, Object... arguments) {
+ logger.trace(format, arguments);
+ }
+
+ @Override
+ public void trace(String msg, Throwable t) {
+ logger.trace(msg, t);
+ }
+
+ @Override
+ public boolean isDebugEnabled() {
+ return logger.isDebugEnabled();
+ }
+
+ @Override
+ public void debug(String msg) {
+ logger.debug(msg);
+ }
+
+ @Override
+ public void debug(String format, Object arg) {
+ logger.debug(format, arg);
+ }
+
+ @Override
+ public void debug(String format, Object argA, Object argB) {
+ logger.debug(format, argA, argB);
+ }
+
+ @Override
+ public void debug(String format, Object... arguments) {
+ logger.debug(format, arguments);
+ }
+
+ @Override
+ public void debug(String msg, Throwable t) {
+ logger.debug(msg, t);
+ }
+
+ @Override
+ public boolean isInfoEnabled() {
+ return logger.isInfoEnabled();
+ }
+
+ @Override
+ public void info(String msg) {
+ logger.info(msg);
+ }
+
+ @Override
+ public void info(String format, Object arg) {
+ logger.info(format, arg);
+ }
+
+ @Override
+ public void info(String format, Object argA, Object argB) {
+ logger.info(format, argA, argB);
+ }
+
+ @Override
+ public void info(String format, Object... arguments) {
+ logger.info(format, arguments);
+ }
+
+ @Override
+ public void info(String msg, Throwable t) {
+ logger.info(msg, t);
+ }
+
+ @Override
+ public boolean isWarnEnabled() {
+ return logger.isWarnEnabled();
+ }
+
+ @Override
+ public void warn(String msg) {
+ logger.warn(msg);
+ }
+
+ @Override
+ public void warn(String format, Object arg) {
+ logger.warn(format, arg);
+ }
+
+ @Override
+ public void warn(String format, Object... arguments) {
+ logger.warn(format, arguments);
+ }
+
+ @Override
+ public void warn(String format, Object argA, Object argB) {
+ logger.warn(format, argA, argB);
+ }
+
+ @Override
+ public void warn(String msg, Throwable t) {
+ logger.warn(msg, t);
+ }
+
+ @Override
+ public boolean isErrorEnabled() {
+ return logger.isErrorEnabled();
+ }
+
+ @Override
+ public void error(String msg) {
+ logger.error(msg);
+ }
+
+ @Override
+ public void error(String format, Object arg) {
+ logger.error(format, arg);
+ }
+
+ @Override
+ public void error(String format, Object argA, Object argB) {
+ logger.error(format, argA, argB);
+ }
+
+ @Override
+ public void error(String format, Object... arguments) {
+ logger.error(format, arguments);
+ }
+
+ @Override
+ public void error(String msg, Throwable t) {
+ logger.error(msg, t);
+ }
+}
diff --git a/common/src/main/java/io/netty/util/internal/logging/Log4J2LoggerFactory.java b/common/src/main/java/io/netty/util/internal/logging/Log4J2LoggerFactory.java
new file mode 100644
index 00000000000..6dfda57af43
--- /dev/null
+++ b/common/src/main/java/io/netty/util/internal/logging/Log4J2LoggerFactory.java
@@ -0,0 +1,26 @@
+/*
+ * Copyright 2016 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License,
+ * version 2.0 (the "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+package io.netty.util.internal.logging;
+
+import org.apache.logging.log4j.LogManager;
+
+public final class Log4J2LoggerFactory extends InternalLoggerFactory {
+
+ @Override
+ public InternalLogger newInstance(String name) {
+ return new Log4J2Logger(LogManager.getLogger(name));
+ }
+}
diff --git a/pom.xml b/pom.xml
index c97a2da3379..b7e2c945520 100644
--- a/pom.xml
+++ b/pom.xml
@@ -382,6 +382,11 @@
<artifactId>commons-logging</artifactId>
<version>1.1.3</version>
</dependency>
+ <dependency>
+ <groupId>org.apache.logging.log4j</groupId>
+ <artifactId>log4j-api</artifactId>
+ <version>2.3</version>
+ </dependency>
<dependency>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
@@ -522,6 +527,14 @@
<version>1.5.7</version>
<scope>test</scope>
</dependency>
+
+ <!-- Test dependency for log4j2 tests -->
+ <dependency>
+ <groupId>org.apache.logging.log4j</groupId>
+ <artifactId>log4j-core</artifactId>
+ <version>2.3</version>
+ <scope>test</scope>
+ </dependency>
</dependencies>
</dependencyManagement>
| diff --git a/common/src/test/java/io/netty/util/internal/logging/Log4J2LoggerFactoryTest.java b/common/src/test/java/io/netty/util/internal/logging/Log4J2LoggerFactoryTest.java
new file mode 100644
index 00000000000..e0db431e9d7
--- /dev/null
+++ b/common/src/test/java/io/netty/util/internal/logging/Log4J2LoggerFactoryTest.java
@@ -0,0 +1,31 @@
+/*
+ * Copyright 2016 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License,
+ * version 2.0 (the "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+package io.netty.util.internal.logging;
+
+import org.junit.Test;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+
+public class Log4J2LoggerFactoryTest {
+
+ @Test
+ public void testCreation() {
+ InternalLogger logger = new Log4J2LoggerFactory().newInstance("foo");
+ assertTrue(logger instanceof Log4J2Logger);
+ assertEquals("foo", logger.name());
+ }
+}
diff --git a/common/src/test/java/io/netty/util/internal/logging/Log4J2LoggerTest.java b/common/src/test/java/io/netty/util/internal/logging/Log4J2LoggerTest.java
new file mode 100644
index 00000000000..96ee4346c8f
--- /dev/null
+++ b/common/src/test/java/io/netty/util/internal/logging/Log4J2LoggerTest.java
@@ -0,0 +1,224 @@
+/*
+ * Copyright 2016 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License,
+ * version 2.0 (the "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+package io.netty.util.internal.logging;
+
+import org.junit.Test;
+import org.apache.logging.log4j.Logger;
+
+import static org.easymock.EasyMock.createStrictMock;
+import static org.easymock.EasyMock.expect;
+import static org.easymock.EasyMock.replay;
+import static org.easymock.EasyMock.verify;
+import static org.junit.Assert.assertTrue;
+
+public class Log4J2LoggerTest {
+ private static final Exception e = new Exception();
+
+ @Test
+ public void testIsTraceEnabled() {
+ Logger mock = createStrictMock(Logger.class);
+
+ expect(mock.getName()).andReturn("foo");
+ expect(mock.isTraceEnabled()).andReturn(true);
+ replay(mock);
+
+ InternalLogger logger = new Log4J2Logger(mock);
+ assertTrue(logger.isTraceEnabled());
+ verify(mock);
+ }
+
+ @Test
+ public void testIsDebugEnabled() {
+ Logger mock = createStrictMock(Logger.class);
+
+ expect(mock.getName()).andReturn("foo");
+ expect(mock.isDebugEnabled()).andReturn(true);
+ replay(mock);
+
+ InternalLogger logger = new Log4J2Logger(mock);
+ assertTrue(logger.isDebugEnabled());
+ verify(mock);
+ }
+
+ @Test
+ public void testIsInfoEnabled() {
+ Logger mock = createStrictMock(Logger.class);
+
+ expect(mock.getName()).andReturn("foo");
+ expect(mock.isInfoEnabled()).andReturn(true);
+ replay(mock);
+
+ InternalLogger logger = new Log4J2Logger(mock);
+ assertTrue(logger.isInfoEnabled());
+ verify(mock);
+ }
+
+ @Test
+ public void testIsWarnEnabled() {
+ Logger mock = createStrictMock(Logger.class);
+
+ expect(mock.getName()).andReturn("foo");
+ expect(mock.isWarnEnabled()).andReturn(true);
+ replay(mock);
+
+ InternalLogger logger = new Log4J2Logger(mock);
+ assertTrue(logger.isWarnEnabled());
+ verify(mock);
+ }
+
+ @Test
+ public void testIsErrorEnabled() {
+ Logger mock = createStrictMock(Logger.class);
+
+ expect(mock.getName()).andReturn("foo");
+ expect(mock.isErrorEnabled()).andReturn(true);
+ replay(mock);
+
+ InternalLogger logger = new Log4J2Logger(mock);
+ assertTrue(logger.isErrorEnabled());
+ verify(mock);
+ }
+
+ @Test
+ public void testTrace() {
+ Logger mock = createStrictMock(Logger.class);
+
+ expect(mock.getName()).andReturn("foo");
+ mock.trace("a");
+ replay(mock);
+
+ InternalLogger logger = new Log4J2Logger(mock);
+ logger.trace("a");
+ verify(mock);
+ }
+
+ @Test
+ public void testTraceWithException() {
+ Logger mock = createStrictMock(Logger.class);
+
+ expect(mock.getName()).andReturn("foo");
+ mock.trace("a", e);
+ replay(mock);
+
+ InternalLogger logger = new Log4J2Logger(mock);
+ logger.trace("a", e);
+ verify(mock);
+ }
+
+ @Test
+ public void testDebug() {
+ Logger mock = createStrictMock(Logger.class);
+
+ expect(mock.getName()).andReturn("foo");
+ mock.debug("a");
+ replay(mock);
+
+ InternalLogger logger = new Log4J2Logger(mock);
+ logger.debug("a");
+ verify(mock);
+ }
+
+ @Test
+ public void testDebugWithException() {
+ Logger mock = createStrictMock(Logger.class);
+
+ expect(mock.getName()).andReturn("foo");
+ mock.debug("a", e);
+ replay(mock);
+
+ InternalLogger logger = new Log4J2Logger(mock);
+ logger.debug("a", e);
+ verify(mock);
+ }
+
+ @Test
+ public void testInfo() {
+ Logger mock = createStrictMock(Logger.class);
+
+ expect(mock.getName()).andReturn("foo");
+ mock.info("a");
+ replay(mock);
+
+ InternalLogger logger = new Log4J2Logger(mock);
+ logger.info("a");
+ verify(mock);
+ }
+
+ @Test
+ public void testInfoWithException() {
+ Logger mock = createStrictMock(Logger.class);
+
+ expect(mock.getName()).andReturn("foo");
+ mock.info("a", e);
+ replay(mock);
+
+ InternalLogger logger = new Log4J2Logger(mock);
+ logger.info("a", e);
+ verify(mock);
+ }
+
+ @Test
+ public void testWarn() {
+ Logger mock = createStrictMock(Logger.class);
+
+ expect(mock.getName()).andReturn("foo");
+ mock.warn("a");
+ replay(mock);
+
+ InternalLogger logger = new Log4J2Logger(mock);
+ logger.warn("a");
+ verify(mock);
+ }
+
+ @Test
+ public void testWarnWithException() {
+ Logger mock = createStrictMock(Logger.class);
+
+ expect(mock.getName()).andReturn("foo");
+ mock.warn("a", e);
+ replay(mock);
+
+ InternalLogger logger = new Log4J2Logger(mock);
+ logger.warn("a", e);
+ verify(mock);
+ }
+
+ @Test
+ public void testError() {
+ Logger mock = createStrictMock(Logger.class);
+
+ expect(mock.getName()).andReturn("foo");
+ mock.error("a");
+ replay(mock);
+
+ InternalLogger logger = new Log4J2Logger(mock);
+ logger.error("a");
+ verify(mock);
+ }
+
+ @Test
+ public void testErrorWithException() {
+ Logger mock = createStrictMock(Logger.class);
+
+ expect(mock.getName()).andReturn("foo");
+ mock.error("a", e);
+ replay(mock);
+
+ InternalLogger logger = new Log4J2Logger(mock);
+ logger.error("a", e);
+ verify(mock);
+ }
+}
| train | train | 2016-03-31T00:01:25 | 2014-11-03T01:34:39Z | Scottmitch | val |
netty/netty/5066_5077 | netty/netty | netty/netty/5066 | netty/netty/5077 | [
"timestamp(timedelta=473.0, similarity=0.9999999999999998)"
] | 516e4933c4044d6da3b66aebee567c7dc8712c9d | cd5633232f9f836292661441f3525e858f494414 | [
"thanks for reporting @rkapsi - let me take care\n",
"Fixed by https://github.com/netty/netty/pull/5077\n"
] | [
"LGTM but just to throw it out there... SslHandler could fire a SslHandshakeCompletionEvent with a reference to itself. It may or may not be \"cheaper\" than the `ctx.pipeline().get(SslHandler.class)` lookup.\n",
"IIUC this would require a change to `SslHandshakeCompletionEvent`. We can consider this as a followu... | 2016-04-04T20:40:06Z | [
"defect"
] | ApplicationProtocolNegotiationHandler doesn't work with SniHandler | Netty 4.1.0.Final-SNAPSHOT
The `ApplicationProtocolNegotiationHandler` expects the `SslHandler` to be present at pipeline construction time (see `#handlerAdded(...)`) and consequently fails with the `SniHandler`, because the `SniHandler` adds the `SslHandler` at a later point in time.
``` java
Caused by: java.lang.IllegalStateException: cannot find a SslHandler in the pipeline (required for application-level protocol negotiation)
at io.netty.handler.ssl.ApplicationProtocolNegotiationHandler.handlerAdded(ApplicationProtocolNegotiationHandler.java:85) ~[netty-all-4.1.0.Final-SNAPSHOT.jar:4.1.0.Final-SNAPSHOT]
at io.netty.channel.DefaultChannelPipeline.callHandlerAdded0(DefaultChannelPipeline.java:723) [netty-all-4.1.0.Final-SNAPSHOT.jar:4.1.0.Final-SNAPSHOT]
... 19 common frames omitted
```
IMHO that whole block of code can be moved into `userEventTriggered()`
``` java
@Override
public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
if (evt instanceof SslHandshakeCompletionEvent) {
if (sslHandler == null) {
final SslHandler sslHandler = ctx.pipeline().get(SslHandler.class);
if (sslHandler == null) {
throw new IllegalStateException("cannot find a SslHandler in the pipeline (required for application-level protocol negotiation)");
}
this.sslHandler = sslHandler;
}
ctx.pipeline().remove(this);
// ...
}
}
```
| [
"handler/src/main/java/io/netty/handler/ssl/ApplicationProtocolNegotiationHandler.java"
] | [
"handler/src/main/java/io/netty/handler/ssl/ApplicationProtocolNegotiationHandler.java"
] | [
"handler/src/test/java/io/netty/handler/ssl/SniHandlerTest.java"
] | diff --git a/handler/src/main/java/io/netty/handler/ssl/ApplicationProtocolNegotiationHandler.java b/handler/src/main/java/io/netty/handler/ssl/ApplicationProtocolNegotiationHandler.java
index 7810d5aa4aa..0e3ea0f39a3 100644
--- a/handler/src/main/java/io/netty/handler/ssl/ApplicationProtocolNegotiationHandler.java
+++ b/handler/src/main/java/io/netty/handler/ssl/ApplicationProtocolNegotiationHandler.java
@@ -65,7 +65,6 @@ public abstract class ApplicationProtocolNegotiationHandler extends ChannelInbou
InternalLoggerFactory.getInstance(ApplicationProtocolNegotiationHandler.class);
private final String fallbackProtocol;
- private SslHandler sslHandler;
/**
* Creates a new instance with the specified fallback protocol name.
@@ -77,18 +76,6 @@ protected ApplicationProtocolNegotiationHandler(String fallbackProtocol) {
this.fallbackProtocol = ObjectUtil.checkNotNull(fallbackProtocol, "fallbackProtocol");
}
- @Override
- public void handlerAdded(ChannelHandlerContext ctx) throws Exception {
- // FIXME: There is no way to tell if the SSL handler is placed before the negotiation handler.
- final SslHandler sslHandler = ctx.pipeline().get(SslHandler.class);
- if (sslHandler == null) {
- throw new IllegalStateException(
- "cannot find a SslHandler in the pipeline (required for application-level protocol negotiation)");
- }
-
- this.sslHandler = sslHandler;
- }
-
@Override
public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
if (evt instanceof SslHandshakeCompletionEvent) {
@@ -96,6 +83,11 @@ public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exc
SslHandshakeCompletionEvent handshakeEvent = (SslHandshakeCompletionEvent) evt;
if (handshakeEvent.isSuccess()) {
+ SslHandler sslHandler = ctx.pipeline().get(SslHandler.class);
+ if (sslHandler == null) {
+ throw new IllegalStateException("cannot find a SslHandler in the pipeline (required for " +
+ "application-level protocol negotiation)");
+ }
String protocol = sslHandler.applicationProtocol();
configurePipeline(ctx, protocol != null? protocol : fallbackProtocol);
} else {
| diff --git a/handler/src/test/java/io/netty/handler/ssl/SniHandlerTest.java b/handler/src/test/java/io/netty/handler/ssl/SniHandlerTest.java
index a6ae52345f7..0496e21e885 100644
--- a/handler/src/test/java/io/netty/handler/ssl/SniHandlerTest.java
+++ b/handler/src/test/java/io/netty/handler/ssl/SniHandlerTest.java
@@ -16,26 +16,61 @@
package io.netty.handler.ssl;
+import io.netty.bootstrap.Bootstrap;
+import io.netty.bootstrap.ServerBootstrap;
import io.netty.buffer.Unpooled;
+import io.netty.channel.Channel;
+import io.netty.channel.ChannelFuture;
+import io.netty.channel.ChannelHandlerContext;
+import io.netty.channel.ChannelInitializer;
+import io.netty.channel.ChannelPipeline;
+import io.netty.channel.EventLoopGroup;
import io.netty.channel.embedded.EmbeddedChannel;
+import io.netty.channel.nio.NioEventLoopGroup;
+import io.netty.channel.socket.nio.NioServerSocketChannel;
+import io.netty.channel.socket.nio.NioSocketChannel;
import io.netty.handler.codec.DecoderException;
+import io.netty.util.DomainMappingBuilder;
import io.netty.util.DomainNameMapping;
import io.netty.util.ReferenceCountUtil;
import org.junit.Test;
-import javax.xml.bind.DatatypeConverter;
import java.io.File;
+import java.net.InetSocketAddress;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.TimeUnit;
+
+import javax.xml.bind.DatatypeConverter;
-import static org.hamcrest.CoreMatchers.*;
-import static org.junit.Assert.*;
+import static org.hamcrest.CoreMatchers.is;
+import static org.hamcrest.CoreMatchers.nullValue;
+import static org.junit.Assert.assertThat;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
public class SniHandlerTest {
+ private static ApplicationProtocolConfig newApnConfig() {
+ return new ApplicationProtocolConfig(
+ ApplicationProtocolConfig.Protocol.ALPN,
+ // NO_ADVERTISE is currently the only mode supported by both OpenSsl and JDK providers.
+ ApplicationProtocolConfig.SelectorFailureBehavior.NO_ADVERTISE,
+ // ACCEPT is currently the only mode supported by both OpenSsl and JDK providers.
+ ApplicationProtocolConfig.SelectedListenerFailureBehavior.ACCEPT,
+ "myprotocol");
+ }
+
private static SslContext makeSslContext() throws Exception {
File keyFile = new File(SniHandlerTest.class.getResource("test_encrypted.pem").getFile());
File crtFile = new File(SniHandlerTest.class.getResource("test.crt").getFile());
- return SslContextBuilder.forServer(crtFile, keyFile, "12345").build();
+ return SslContextBuilder.forServer(crtFile, keyFile, "12345").applicationProtocolConfig(newApnConfig()).build();
+ }
+
+ private static SslContext makeSslClientContext() throws Exception {
+ File crtFile = new File(SniHandlerTest.class.getResource("test.crt").getFile());
+
+ return SslContextBuilder.forClient().trustManager(crtFile).applicationProtocolConfig(newApnConfig()).build();
}
@Test
@@ -123,4 +158,77 @@ public void testFallbackToDefaultContext() throws Exception {
assertThat(handler.hostname(), nullValue());
assertThat(handler.sslContext(), is(nettyContext));
}
+
+ @Test
+ public void testSniWithApnHandler() throws Exception {
+ SslContext nettyContext = makeSslContext();
+ SslContext leanContext = makeSslContext();
+ final SslContext clientContext = makeSslClientContext();
+ SslContext wrappedLeanContext = null;
+ final CountDownLatch apnDoneLatch = new CountDownLatch(1);
+
+ final DomainNameMapping<SslContext> mapping = new DomainMappingBuilder(nettyContext)
+ .add("*.netty.io", nettyContext)
+ .add("*.LEANCLOUD.CN", leanContext).build();
+ // hex dump of a client hello packet, which contains hostname "CHAT4。LEANCLOUD。CN"
+ final String tlsHandshakeMessageHex1 = "16030100";
+ // part 2
+ final String tlsHandshakeMessageHex = "bd010000b90303a74225676d1814ba57faff3b366" +
+ "3656ed05ee9dbb2a4dbb1bb1c32d2ea5fc39e0000000100008c0000001700150000164348" +
+ "415434E380824C45414E434C4F5544E38082434E000b000403000102000a00340032000e0" +
+ "00d0019000b000c00180009000a0016001700080006000700140015000400050012001300" +
+ "0100020003000f0010001100230000000d0020001e0601060206030501050205030401040" +
+ "20403030103020303020102020203000f00010133740000";
+
+ EventLoopGroup group = new NioEventLoopGroup(2);
+ Channel serverChannel = null;
+ Channel clientChannel = null;
+ try {
+ ServerBootstrap sb = new ServerBootstrap();
+ sb.group(group);
+ sb.channel(NioServerSocketChannel.class);
+ sb.childHandler(new ChannelInitializer<Channel>() {
+ @Override
+ protected void initChannel(Channel ch) throws Exception {
+ ChannelPipeline p = ch.pipeline();
+ p.addLast(new SniHandler(mapping));
+ p.addLast(new ApplicationProtocolNegotiationHandler("foo") {
+ @Override
+ protected void configurePipeline(ChannelHandlerContext ctx, String protocol) throws Exception {
+ apnDoneLatch.countDown();
+ }
+ });
+ }
+ });
+
+ Bootstrap cb = new Bootstrap();
+ cb.group(group);
+ cb.channel(NioSocketChannel.class);
+ cb.handler(new ChannelInitializer<Channel>() {
+ @Override
+ protected void initChannel(Channel ch) throws Exception {
+ ChannelPipeline p = ch.pipeline();
+ ch.write(Unpooled.wrappedBuffer(DatatypeConverter.parseHexBinary(tlsHandshakeMessageHex1)));
+ ch.writeAndFlush(Unpooled.wrappedBuffer(DatatypeConverter.parseHexBinary(tlsHandshakeMessageHex)));
+ ch.pipeline().addLast(clientContext.newHandler(ch.alloc()));
+ }
+ });
+
+ serverChannel = sb.bind(new InetSocketAddress(0)).sync().channel();
+
+ ChannelFuture ccf = cb.connect(serverChannel.localAddress());
+ assertTrue(ccf.awaitUninterruptibly().isSuccess());
+ clientChannel = ccf.channel();
+
+ assertTrue(apnDoneLatch.await(5, TimeUnit.SECONDS));
+ } finally {
+ if (serverChannel != null) {
+ serverChannel.close().sync();
+ }
+ if (clientChannel != null) {
+ clientChannel.close().sync();
+ }
+ group.shutdownGracefully(0, 0, TimeUnit.MICROSECONDS);
+ }
+ }
}
| val | train | 2016-04-02T07:39:47 | 2016-04-01T22:54:59Z | rkapsi | val |
netty/netty/5084_5086 | netty/netty | netty/netty/5084 | netty/netty/5086 | [
"timestamp(timedelta=36.0, similarity=0.8938965835686057)"
] | 66ce0140749a9b3be5617369cbaa8268f0bc962a | 526078be1bfb9b3e3bec7146fb362c3ee80f946d | [
"@zhangkun83 sounds good. Thanks!\n",
"@zhangkun83 please ensure you sign our icla: http://netty.io/s/icla and use our commit message template http://netty.io/wiki/writing-a-commit-message.html when doing the pr :)\n",
"@zhangkun83 good catch!\n",
"The fix in this PR is a mistake. For just one example, it mea... | [
"@zhangkun83 could you add a \"TODO:\" here? Something like:\n\n`// TODO: Once we can break the API we should add ThreadGroup to the arguments of this method.`\n",
"@zhangkun83 sorry for be a PITA but I just noticed we are missing a null check for threadGroup here.\n\nPlease do:\n`this.threadGroup = ObjectUtil.ch... | 2016-04-06T04:08:17Z | [
"defect"
] | DefaultThreadFactory should stick to the thread group where it was constructed | We (gRPC) encountered a bug that was triggered by https://github.com/grpc/grpc-java/commit/d927180a63ac2df4fae0082b51d51e3836771bb1. After that commit, event loop threads are created per task by `NioEventLoopGroup`, and inherit the thread group of the caller, which in our case is an application-provided request-scope thread. Things go south when the application tries to manipulate (e.g., interrupt and join) all threads of the request-scope thread group, which unexpectedly include the event loop threads.
I think `DefaultThreadFactory` should save the current thread group in its constructor, and apply it to all new threads. I will come up with a PR soon.
@buchgr
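The inheritance behaviour described above is plain `java.lang.Thread` semantics, not anything Netty-specific. A minimal, stdlib-only sketch (illustrative class and group names, not code from Netty or gRPC) of why a factory that creates threads lazily puts them into whatever group the *calling* thread belongs to:

```java
// Minimal stdlib-only demo of the mechanism behind this bug: a Thread
// constructed without an explicit ThreadGroup inherits the group of the
// thread that constructs it.
public class ThreadGroupInheritanceDemo {

    /** Creates a thread from inside {@code parentGroup} and reports which group it lands in. */
    static String groupOfChildCreatedFrom(ThreadGroup parentGroup) {
        final String[] childGroupName = new String[1];
        Thread creator = new Thread(parentGroup, () -> {
            // No group is passed here, so the child inherits `parentGroup`
            // from the current (creator) thread.
            Thread child = new Thread(() -> { });
            childGroupName[0] = child.getThreadGroup().getName();
        }, "creator");
        creator.start();
        try {
            creator.join();
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
        return childGroupName[0];
    }

    public static void main(String[] args) {
        // Mirrors the report: a thread created from a "request-scope" thread
        // ends up in the request-scope group.
        System.out.println(groupOfChildCreatedFrom(new ThreadGroup("request-scope")));
    }
}
```

The merged patch in this record sidesteps this default by capturing `Thread.currentThread().getThreadGroup()` in the factory constructor and passing it explicitly to `FastThreadLocalThread`.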
| [
"common/src/main/java/io/netty/util/concurrent/DefaultThreadFactory.java"
] | [
"common/src/main/java/io/netty/util/concurrent/DefaultThreadFactory.java"
] | [] | diff --git a/common/src/main/java/io/netty/util/concurrent/DefaultThreadFactory.java b/common/src/main/java/io/netty/util/concurrent/DefaultThreadFactory.java
index 84d72cdf242..48966839566 100644
--- a/common/src/main/java/io/netty/util/concurrent/DefaultThreadFactory.java
+++ b/common/src/main/java/io/netty/util/concurrent/DefaultThreadFactory.java
@@ -16,6 +16,7 @@
package io.netty.util.concurrent;
+import io.netty.util.internal.ObjectUtil;
import io.netty.util.internal.StringUtil;
import java.util.Locale;
@@ -33,6 +34,7 @@ public class DefaultThreadFactory implements ThreadFactory {
private final String prefix;
private final boolean daemon;
private final int priority;
+ private final ThreadGroup threadGroup;
public DefaultThreadFactory(Class<?> poolType) {
this(poolType, false, Thread.NORM_PRIORITY);
@@ -82,7 +84,7 @@ private static String toPoolName(Class<?> poolType) {
}
}
- public DefaultThreadFactory(String poolName, boolean daemon, int priority) {
+ public DefaultThreadFactory(String poolName, boolean daemon, int priority, ThreadGroup threadGroup) {
if (poolName == null) {
throw new NullPointerException("poolName");
}
@@ -94,6 +96,11 @@ public DefaultThreadFactory(String poolName, boolean daemon, int priority) {
prefix = poolName + '-' + poolId.incrementAndGet() + '-';
this.daemon = daemon;
this.priority = priority;
+ this.threadGroup = ObjectUtil.checkNotNull(threadGroup, "threadGroup is null");
+ }
+
+ public DefaultThreadFactory(String poolName, boolean daemon, int priority) {
+ this(poolName, daemon, priority, Thread.currentThread().getThreadGroup());
}
@Override
@@ -119,8 +126,9 @@ public Thread newThread(Runnable r) {
return t;
}
+ // TODO: Once we can break the API we should add ThreadGroup to the arguments of this method.
protected Thread newThread(Runnable r, String name) {
- return new FastThreadLocalThread(r, name);
+ return new FastThreadLocalThread(threadGroup, r, name);
}
private static final class DefaultRunnableDecorator implements Runnable {
| null | train | train | 2016-04-06T11:52:06 | 2016-04-05T20:12:08Z | zhangkun83 | val |
netty/netty/5069_5101 | netty/netty | netty/netty/5069 | netty/netty/5101 | [
"timestamp(timedelta=127889.0, similarity=0.8773950767919225)"
] | 562d8d220028fbb3d62028bc5879a121dff2fdbd | d1ba4f213e3e84e17fcd90fe8f46aae9967fb3ff | [
"If the EPOLL approach is acceptable this would be preferred as it would remove a volatile which is written to frequently in the event loop. The only hesitation is that [AbstractNioChannel](https://github.com/netty/netty/blob/4.1/transport/src/main/java/io/netty/channel/nio/AbstractNioChannel.java#L133) exposes `re... | [
"@Scottmitch why not just remove this and always call `setReadPendin(false)` ?\n",
"~~@normanmaurer - done~~\n",
"@normanmaurer - Of course this can be done but `clearReadPending` uses a cached Runnable `clearReadPendingRunnable` and `setReadPending` has been deprecated in favor of `clearReadPending`. If we don... | 2016-04-08T02:53:28Z | [
"improvement"
] | Make NIO and EPOLL autoReadClear consistent | [AbstractNioChannel readPending](https://github.com/netty/netty/blob/4.1/transport/src/main/java/io/netty/channel/nio/AbstractNioChannel.java#L64) is volatile because it is directly set when [autoRead is cleared](https://github.com/netty/netty/blob/4.1/transport/src/main/java/io/netty/channel/socket/nio/NioSocketChannel.java#L364). The EPOLL transport takes an alternative approach and avoids the need for `readPending` to be volatile. [EpollChannelConfig calls clearEpollIn()](https://github.com/netty/netty/blob/4.1/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollChannelConfig.java#L175) which will run immediately if on the event loop and if not execute a runnable to run on the event loop.
| [
"transport/src/main/java/io/netty/channel/nio/AbstractNioChannel.java"
] | [
"transport/src/main/java/io/netty/channel/nio/AbstractNioChannel.java"
] | [] | diff --git a/transport/src/main/java/io/netty/channel/nio/AbstractNioChannel.java b/transport/src/main/java/io/netty/channel/nio/AbstractNioChannel.java
index 981f7c8b15f..d69358167b1 100644
--- a/transport/src/main/java/io/netty/channel/nio/AbstractNioChannel.java
+++ b/transport/src/main/java/io/netty/channel/nio/AbstractNioChannel.java
@@ -65,7 +65,7 @@ public abstract class AbstractNioChannel extends AbstractChannel {
private final Runnable clearReadPendingRunnable = new Runnable() {
@Override
public void run() {
- readPending = false;
+ clearReadPending0();
}
};
@@ -149,17 +149,17 @@ protected void setReadPending(final boolean readPending) {
if (isRegistered()) {
EventLoop eventLoop = eventLoop();
if (eventLoop.inEventLoop()) {
- this.readPending = readPending;
+ setReadPending0(readPending);
} else {
eventLoop.execute(new OneTimeTask() {
@Override
public void run() {
- AbstractNioChannel.this.readPending = readPending;
+ setReadPending0(readPending);
}
});
}
} else {
- this.readPending = readPending;
+ setReadPending0(readPending);
}
}
@@ -170,16 +170,28 @@ protected final void clearReadPending() {
if (isRegistered()) {
EventLoop eventLoop = eventLoop();
if (eventLoop.inEventLoop()) {
- readPending = false;
+ clearReadPending0();
} else {
eventLoop.execute(clearReadPendingRunnable);
}
} else {
// Best effort if we are not registered yet clear readPending. This happens during channel initialization.
- readPending = false;
+ clearReadPending0();
}
}
+ private void setReadPending0(boolean readPending) {
+ this.readPending = readPending;
+ if (!readPending) {
+ ((AbstractNioUnsafe) unsafe()).removeReadOp();
+ }
+ }
+
+ private void clearReadPending0() {
+ readPending = false;
+ ((AbstractNioUnsafe) unsafe()).removeReadOp();
+ }
+
/**
* Return {@code true} if the input of this {@link Channel} is shutdown
*/
| null | train | train | 2016-04-07T18:42:17 | 2016-04-03T18:15:40Z | Scottmitch | val |
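The pattern the patch above applies — keeping `readPending` non-volatile by only ever writing it on the event-loop thread, and scheduling a task when called from elsewhere — looks roughly like this with a plain single-threaded executor (illustrative names, not Netty's API):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative sketch: a non-volatile flag confined to a single "event loop"
// thread. Off-loop callers never touch the field directly; they schedule the
// write onto the loop, which is what removes the need for volatile.
class ReadPendingState {
    private final ExecutorService eventLoop = Executors.newSingleThreadExecutor();
    private final Thread loopThread;
    private boolean readPending = true; // only read/written on loopThread

    ReadPendingState() throws Exception {
        loopThread = eventLoop.submit(Thread::currentThread).get();
    }

    boolean inEventLoop() {
        return Thread.currentThread() == loopThread;
    }

    void clearReadPending() {
        if (inEventLoop()) {
            readPending = false;                          // direct write on loop
        } else {
            eventLoop.execute(() -> readPending = false); // scheduled write
        }
    }

    boolean isReadPending() throws Exception {
        return eventLoop.submit(() -> readPending).get(); // read on the loop
    }

    void shutdown() {
        eventLoop.shutdown();
    }
}
```

Because the executor is single-threaded and FIFO, a scheduled clear is always observed by any later read submitted to the same loop.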
netty/netty/4929_5113 | netty/netty | netty/netty/4929 | netty/netty/5113 | [
"timestamp(timedelta=72.0, similarity=0.8471297436948142)"
] | a0b28d6c82abec5f953e6f5e8df829850756c7dd | be12a54835c1de8bae4460db27776a616b88ecaa | [
"@harbulot As a workaround, you can specify protocols, cipher suites and algorithm downstream, on each `SSLEngine` you get from `SslContext.newEngine`, before passing them to a `SslHandler`.\n\nFYI, that's how things are done in AsyncHttpClient.\n",
"> Another parameter that would be interesting to pass would be ... | [
"@trustin @Scottmitch asked if this is a breaking change but I think no as the constructor was package private. WDYT ?\n",
"Doesn't sound like a breaking change to me because there was no way for a user to extend this class.\n",
"- use -> Use\n- reference -> use\n",
"- use -> Use\n- reference -> use\n"
] | 2016-04-09T05:16:42Z | [
"improvement"
] | Adding ability to use more standard JSSE options when using JdkSslContext | (Please note the case difference between `SSLContext` and `SslContext` when reading this.)
I was trying to pass various SSL/TLS settings (checking the host name, setting a client certificate and a custom trust store) when using a `JdkSslClientContext` with Netty 4.0.29, when I realised it was actually quite difficult to pass an existing `javax.net.ssl.SSLContext` to the existing classes (`JdkSslContext` and its subclasses). (The code seems fairly similar in the latest 4.0 and 4.1 branches.)
It is currently possible to build a `JdkSslClientContext` using a `TrustManagerFactory` or a list of trusted certificates, a key file or a `KeyManagerFactory`, but there doesn't seem to be anywhere to set all this from an existing `SSLContext` instance (which is one of the main JSSE classes). I realise these can be very useful and convenient, but they make it difficult to use existing `SSLContext` classes (which seems more natural when you're used to the JSSE). (The package protection and `final` modifiers also make it quite tricky to subclass these classes to make such modifications.)
In addition, without any specific settings, it doesn't seem to use the "set" default `SSLContext`. Indeed, it initialises a new `TrustManagerFactory` with `(null, null, null)`, but that's not necessarily the same as `SSLContext.getDefault()` (which can be set with `SSLContext.setDefault(...)` since Java 6). From a JSSE point of view, this seems counter intuitive (since the default values would effectively have different settings from what you would get with `SSLServerSocketFactory.getDefault()`, for example).
Another parameter that would be interesting to pass would be an `SSLParameters` instance. It can provide many settings (otherwise repeated) such as the cipher suites. More importantly, it can provide hostname verification for free since Java 7.
Although Netty 4.x only relies on Java 6, being able to use an `SSLParameters` instance would let code that uses Netty and Java 7 or 8 pass this to enforce hostname verification (for HTTPS):
```
SSLParameters sslParameters = new SSLParameters();
sslParameters.setEndpointIdentificationAlgorithm("HTTPS");
```
Would you be interested in contributions and pull requests in that area? (That last point might be relevant w.r.t. issue #3534.)
| [
"handler/src/main/java/io/netty/handler/ssl/JdkSslClientContext.java",
"handler/src/main/java/io/netty/handler/ssl/JdkSslContext.java",
"handler/src/main/java/io/netty/handler/ssl/JdkSslServerContext.java",
"handler/src/main/java/io/netty/handler/ssl/OpenSslContext.java",
"handler/src/main/java/io/netty/han... | [
"handler/src/main/java/io/netty/handler/ssl/JdkSslClientContext.java",
"handler/src/main/java/io/netty/handler/ssl/JdkSslContext.java",
"handler/src/main/java/io/netty/handler/ssl/JdkSslServerContext.java",
"handler/src/main/java/io/netty/handler/ssl/OpenSslContext.java",
"handler/src/main/java/io/netty/han... | [] | diff --git a/handler/src/main/java/io/netty/handler/ssl/JdkSslClientContext.java b/handler/src/main/java/io/netty/handler/ssl/JdkSslClientContext.java
index ed9e9a0a891..90e6914439b 100644
--- a/handler/src/main/java/io/netty/handler/ssl/JdkSslClientContext.java
+++ b/handler/src/main/java/io/netty/handler/ssl/JdkSslClientContext.java
@@ -29,11 +29,13 @@
/**
* A client-side {@link SslContext} which uses JDK's SSL/TLS implementation.
+ *
+ * @deprecated Use {@link SslContextBuilder} to create {@link JdkSslContext} instances and only
+ * use {@link JdkSslContext} in your code.
*/
+@Deprecated
public final class JdkSslClientContext extends JdkSslContext {
- private final SSLContext ctx;
-
/**
* Creates a new instance.
*
@@ -245,26 +247,20 @@ public JdkSslClientContext(File trustCertCollectionFile, TrustManagerFactory tru
File keyCertChainFile, File keyFile, String keyPassword, KeyManagerFactory keyManagerFactory,
Iterable<String> ciphers, CipherSuiteFilter cipherFilter, JdkApplicationProtocolNegotiator apn,
long sessionCacheSize, long sessionTimeout) throws SSLException {
- super(ciphers, cipherFilter, apn, ClientAuth.NONE);
- try {
- ctx = newSSLContext(toX509Certificates(trustCertCollectionFile), trustManagerFactory,
- toX509Certificates(keyCertChainFile), toPrivateKey(keyFile, keyPassword),
- keyPassword, keyManagerFactory, sessionCacheSize, sessionTimeout);
- } catch (Exception e) {
- if (e instanceof SSLException) {
- throw (SSLException) e;
- }
- throw new SSLException("failed to initialize the client-side SSL context", e);
- }
+ super(newSSLContext(toX509CertificatesInternal(
+ trustCertCollectionFile), trustManagerFactory,
+ toX509CertificatesInternal(keyCertChainFile), toPrivateKeyInternal(keyFile, keyPassword),
+ keyPassword, keyManagerFactory, sessionCacheSize, sessionTimeout), true,
+ ciphers, cipherFilter, apn, ClientAuth.NONE);
}
JdkSslClientContext(X509Certificate[] trustCertCollection, TrustManagerFactory trustManagerFactory,
X509Certificate[] keyCertChain, PrivateKey key, String keyPassword,
KeyManagerFactory keyManagerFactory, Iterable<String> ciphers, CipherSuiteFilter cipherFilter,
ApplicationProtocolConfig apn, long sessionCacheSize, long sessionTimeout) throws SSLException {
- super(ciphers, cipherFilter, toNegotiator(apn, false), ClientAuth.NONE);
- ctx = newSSLContext(trustCertCollection, trustManagerFactory, keyCertChain, key, keyPassword,
- keyManagerFactory, sessionCacheSize, sessionTimeout);
+ super(newSSLContext(trustCertCollection, trustManagerFactory, keyCertChain, key, keyPassword,
+ keyManagerFactory, sessionCacheSize, sessionTimeout), true,
+ ciphers, cipherFilter, toNegotiator(apn, false), ClientAuth.NONE);
}
private static SSLContext newSSLContext(X509Certificate[] trustCertCollection,
@@ -298,14 +294,4 @@ private static SSLContext newSSLContext(X509Certificate[] trustCertCollection,
throw new SSLException("failed to initialize the client-side SSL context", e);
}
}
-
- @Override
- public boolean isClient() {
- return true;
- }
-
- @Override
- public SSLContext context() {
- return ctx;
- }
}
diff --git a/handler/src/main/java/io/netty/handler/ssl/JdkSslContext.java b/handler/src/main/java/io/netty/handler/ssl/JdkSslContext.java
index 4389f8aef0a..bf54d1484aa 100644
--- a/handler/src/main/java/io/netty/handler/ssl/JdkSslContext.java
+++ b/handler/src/main/java/io/netty/handler/ssl/JdkSslContext.java
@@ -51,7 +51,7 @@
/**
* An {@link SslContext} which uses JDK's SSL/TLS implementation.
*/
-public abstract class JdkSslContext extends SslContext {
+public class JdkSslContext extends SslContext {
private static final InternalLogger logger = InternalLoggerFactory.getInstance(JdkSslContext.class);
@@ -140,20 +140,60 @@ private static void addIfSupported(Set<String> supported, List<String> enabled,
private final List<String> unmodifiableCipherSuites;
private final JdkApplicationProtocolNegotiator apn;
private final ClientAuth clientAuth;
+ private final SSLContext sslContext;
+ private final boolean isClient;
- JdkSslContext(Iterable<String> ciphers, CipherSuiteFilter cipherFilter, JdkApplicationProtocolNegotiator apn,
- ClientAuth clientAuth) {
+ /**
+ * Creates a new {@link JdkSslContext} from a pre-configured {@link SSLContext}.
+ *
+ * @param sslContext the {@link SSLContext} to use.
+ * @param isClient {@code true} if this context should create {@link SSLEngine}s for client-side usage.
+ * @param clientAuth the {@link ClientAuth} to use. This will only be used when {@param isClient} is {@code false}.
+ */
+ public JdkSslContext(SSLContext sslContext, boolean isClient,
+ ClientAuth clientAuth) {
+ this(sslContext, isClient, null, IdentityCipherSuiteFilter.INSTANCE,
+ JdkDefaultApplicationProtocolNegotiator.INSTANCE, clientAuth);
+ }
+
+ /**
+ * Creates a new {@link JdkSslContext} from a pre-configured {@link SSLContext}.
+ *
+ * @param sslContext the {@link SSLContext} to use.
+ * @param isClient {@code true} if this context should create {@link SSLEngine}s for client-side usage.
+ * @param ciphers the ciphers to use or {@code null} if the standart should be used.
+ * @param cipherFilter the filter to use.
+ * @param apn the {@link ApplicationProtocolConfig} to use.
+ * @param clientAuth the {@link ClientAuth} to use. This will only be used when {@param isClient} is {@code false}.
+ */
+ public JdkSslContext(SSLContext sslContext, boolean isClient, Iterable<String> ciphers,
+ CipherSuiteFilter cipherFilter, ApplicationProtocolConfig apn,
+ ClientAuth clientAuth) {
+ this(sslContext, isClient, ciphers, cipherFilter, toNegotiator(apn, !isClient), clientAuth);
+ }
+
+ JdkSslContext(SSLContext sslContext, boolean isClient, Iterable<String> ciphers, CipherSuiteFilter cipherFilter,
+ JdkApplicationProtocolNegotiator apn, ClientAuth clientAuth) {
this.apn = checkNotNull(apn, "apn");
this.clientAuth = checkNotNull(clientAuth, "clientAuth");
cipherSuites = checkNotNull(cipherFilter, "cipherFilter").filterCipherSuites(
ciphers, DEFAULT_CIPHERS, SUPPORTED_CIPHERS);
unmodifiableCipherSuites = Collections.unmodifiableList(Arrays.asList(cipherSuites));
+ this.sslContext = checkNotNull(sslContext, "sslContext");
+ this.isClient = isClient;
}
/**
* Returns the JDK {@link SSLContext} object held by this context.
*/
- public abstract SSLContext context();
+ public final SSLContext context() {
+ return sslContext;
+ }
+
+ @Override
+ public final boolean isClient() {
+ return isClient;
+ }
/**
* Returns the JDK {@link SSLSessionContext} object held by this context.
@@ -210,7 +250,7 @@ private SSLEngine configureAndWrapEngine(SSLEngine engine) {
}
@Override
- public JdkApplicationProtocolNegotiator applicationProtocolNegotiator() {
+ public final JdkApplicationProtocolNegotiator applicationProtocolNegotiator() {
return apn;
}
diff --git a/handler/src/main/java/io/netty/handler/ssl/JdkSslServerContext.java b/handler/src/main/java/io/netty/handler/ssl/JdkSslServerContext.java
index c80665cb159..4434c3a41c4 100644
--- a/handler/src/main/java/io/netty/handler/ssl/JdkSslServerContext.java
+++ b/handler/src/main/java/io/netty/handler/ssl/JdkSslServerContext.java
@@ -30,11 +30,13 @@
/**
* A server-side {@link SslContext} which uses JDK's SSL/TLS implementation.
+ *
+ * @deprecated Use {@link SslContextBuilder} to create {@link JdkSslContext} instances and only
+ * use {@link JdkSslContext} in your code.
*/
+@Deprecated
public final class JdkSslServerContext extends JdkSslContext {
- private final SSLContext ctx;
-
/**
* Creates a new instance.
*
@@ -210,17 +212,10 @@ public JdkSslServerContext(File trustCertCollectionFile, TrustManagerFactory tru
File keyCertChainFile, File keyFile, String keyPassword, KeyManagerFactory keyManagerFactory,
Iterable<String> ciphers, CipherSuiteFilter cipherFilter, JdkApplicationProtocolNegotiator apn,
long sessionCacheSize, long sessionTimeout) throws SSLException {
- super(ciphers, cipherFilter, apn, ClientAuth.NONE);
- try {
- ctx = newSSLContext(toX509Certificates(trustCertCollectionFile), trustManagerFactory,
- toX509Certificates(keyCertChainFile), toPrivateKey(keyFile, keyPassword),
- keyPassword, keyManagerFactory, sessionCacheSize, sessionTimeout);
- } catch (Exception e) {
- if (e instanceof SSLException) {
- throw (SSLException) e;
- }
- throw new SSLException("failed to initialize the server-side SSL context", e);
- }
+ super(newSSLContext(toX509CertificatesInternal(trustCertCollectionFile), trustManagerFactory,
+ toX509CertificatesInternal(keyCertChainFile), toPrivateKeyInternal(keyFile, keyPassword),
+ keyPassword, keyManagerFactory, sessionCacheSize, sessionTimeout), false,
+ ciphers, cipherFilter, apn, ClientAuth.NONE);
}
JdkSslServerContext(X509Certificate[] trustCertCollection, TrustManagerFactory trustManagerFactory,
@@ -228,9 +223,9 @@ public JdkSslServerContext(File trustCertCollectionFile, TrustManagerFactory tru
KeyManagerFactory keyManagerFactory, Iterable<String> ciphers, CipherSuiteFilter cipherFilter,
ApplicationProtocolConfig apn, long sessionCacheSize, long sessionTimeout,
ClientAuth clientAuth) throws SSLException {
- super(ciphers, cipherFilter, toNegotiator(apn, true), clientAuth);
- ctx = newSSLContext(trustCertCollection, trustManagerFactory, keyCertChain, key,
- keyPassword, keyManagerFactory, sessionCacheSize, sessionTimeout);
+ super(newSSLContext(trustCertCollection, trustManagerFactory, keyCertChain, key,
+ keyPassword, keyManagerFactory, sessionCacheSize, sessionTimeout), false,
+ ciphers, cipherFilter, toNegotiator(apn, true), clientAuth);
}
private static SSLContext newSSLContext(X509Certificate[] trustCertCollection,
@@ -272,13 +267,4 @@ private static SSLContext newSSLContext(X509Certificate[] trustCertCollection,
}
}
- @Override
- public boolean isClient() {
- return false;
- }
-
- @Override
- public SSLContext context() {
- return ctx;
- }
}
diff --git a/handler/src/main/java/io/netty/handler/ssl/OpenSslContext.java b/handler/src/main/java/io/netty/handler/ssl/OpenSslContext.java
index ee5536f3904..a38e28192dc 100644
--- a/handler/src/main/java/io/netty/handler/ssl/OpenSslContext.java
+++ b/handler/src/main/java/io/netty/handler/ssl/OpenSslContext.java
@@ -570,22 +570,6 @@ private static long newBIO(ByteBuf buffer) throws Exception {
return bio;
}
- static PrivateKey toPrivateKeyInternal(File keyFile, String keyPassword) throws SSLException {
- try {
- return SslContext.toPrivateKey(keyFile, keyPassword);
- } catch (Exception e) {
- throw new SSLException(e);
- }
- }
-
- static X509Certificate[] toX509CertificatesInternal(File file) throws SSLException {
- try {
- return SslContext.toX509Certificates(file);
- } catch (CertificateException e) {
- throw new SSLException(e);
- }
- }
-
static void checkKeyManagerFactory(KeyManagerFactory keyManagerFactory) {
if (keyManagerFactory != null) {
throw new IllegalArgumentException(
diff --git a/handler/src/main/java/io/netty/handler/ssl/SslContext.java b/handler/src/main/java/io/netty/handler/ssl/SslContext.java
index 16b95d1eded..b6c092610a1 100644
--- a/handler/src/main/java/io/netty/handler/ssl/SslContext.java
+++ b/handler/src/main/java/io/netty/handler/ssl/SslContext.java
@@ -1018,4 +1018,20 @@ static TrustManagerFactory buildTrustManagerFactory(
return trustManagerFactory;
}
+
+ static PrivateKey toPrivateKeyInternal(File keyFile, String keyPassword) throws SSLException {
+ try {
+ return toPrivateKey(keyFile, keyPassword);
+ } catch (Exception e) {
+ throw new SSLException(e);
+ }
+ }
+
+ static X509Certificate[] toX509CertificatesInternal(File file) throws SSLException {
+ try {
+ return toX509Certificates(file);
+ } catch (CertificateException e) {
+ throw new SSLException(e);
+ }
+ }
}
| null | train | train | 2016-04-08T18:44:44 | 2016-03-03T14:27:23Z | harbulot | val |
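The JSSE-side configuration the issue above asks Netty to accommodate — starting from an application-provided `SSLContext` and enabling HTTPS endpoint identification through `SSLParameters` (Java 7+) — is plain JDK code with no Netty types involved (the factory class here is illustrative):

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLParameters;

// Plain-JSSE sketch: build a client engine from a pre-configured SSLContext
// and turn on hostname verification via the endpoint identification algorithm.
class HostnameVerifyingEngineFactory {
    static SSLEngine newClientEngine(SSLContext ctx, String host, int port) {
        SSLEngine engine = ctx.createSSLEngine(host, port);
        engine.setUseClientMode(true);
        SSLParameters params = engine.getSSLParameters();
        params.setEndpointIdentificationAlgorithm("HTTPS"); // verify peer host
        engine.setSSLParameters(params);
        return engine;
    }
}
```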
netty/netty/5122_5126 | netty/netty | netty/netty/5122 | netty/netty/5126 | [
"timestamp(timedelta=21.0, similarity=0.8675625741371196)"
] | 718bf2fa459fd6f79a3c06d835faa180ef9604a1 | bef805259105589628c1e79705d2fdededf7d257 | [
"This one may be a little trickier because the current implementation of [Native](https://github.com/netty/netty/blob/4.1/transport-native-epoll/src/main/java/io/netty/channel/epoll/Native.java#L60) assumes that it will throw if the library fails to load, rather than just printing out a debug message and proceeding... | [
"e -> ignore\n",
"done.\n",
"nit: fix indentation of these params\n",
"done\n"
] | 2016-04-12T14:17:42Z | [
"feature"
] | Allow Netty-epoll use from preloaded System library | In the same vein as #5043, it would be really useful if there was some way to use Netty's epoll without it doing the library initialization itself. The fix would be pretty much the same as here: 379ad2c02ed0c0ae9f94e4081e3f910ece6380b7
cc: @nmittler
| [
"transport-native-epoll/src/main/java/io/netty/channel/epoll/Native.java"
] | [
"transport-native-epoll/src/main/java/io/netty/channel/epoll/Native.java"
] | [] | diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/Native.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/Native.java
index e4ba4ce2a70..d9244314fb2 100644
--- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/Native.java
+++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/Native.java
@@ -53,14 +53,16 @@
*/
public final class Native {
static {
- String name = SystemPropertyUtil.get("os.name").toLowerCase(Locale.UK).trim();
- if (!name.startsWith("linux")) {
- throw new IllegalStateException("Only supported on Linux");
+ try {
+ // First, try calling a side-effect free JNI method to see if the library was already
+ // loaded by the application.
+ offsetofEpollData();
+ } catch (UnsatisfiedLinkError ignore) {
+ // The library was not previously loaded, load it now.
+ loadNativeLibrary();
}
- NativeLibraryLoader.load(SystemPropertyUtil.get("io.netty.packagePrefix", "").replace('.', '-') +
- "netty-transport-native-epoll",
- PlatformDependent.getClassLoader(Native.class));
}
+
// EventLoop operations and constants
public static final int EPOLLIN = epollin();
public static final int EPOLLOUT = epollout();
@@ -250,4 +252,13 @@ public static void setTcpMd5Sig(int fd, InetAddress address, byte[] key) throws
private Native() {
// utility
}
+
+ private static void loadNativeLibrary() {
+ String name = SystemPropertyUtil.get("os.name").toLowerCase(Locale.UK).trim();
+ if (!name.startsWith("linux")) {
+ throw new IllegalStateException("Only supported on Linux");
+ }
+ NativeLibraryLoader.load(SystemPropertyUtil.get("io.netty.packagePrefix", "").replace('.', '-') +
+ "netty-transport-native-epoll", PlatformDependent.getClassLoader(Native.class));
+ }
}
| null | train | train | 2016-04-12T16:27:02 | 2016-04-11T18:32:25Z | carl-mastrangelo | val |
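The probe-then-load idea from the patch above — attempt a harmless native call first and only load the bundled library when an `UnsatisfiedLinkError` signals it is not yet present — generalizes to this shape (library name and helper are illustrative; the real patch probes a side-effect-free JNI method rather than calling `loadLibrary` directly):

```java
// Illustrative sketch: UnsatisfiedLinkError is the JVM's signal that a native
// library (or symbol) is not available, so catching it lets a framework fall
// back to its own loader only when the application has not preloaded one.
class NativeLoadProbe {
    static boolean isLoadable(String libraryName) {
        try {
            System.loadLibrary(libraryName);
            return true;  // already on java.library.path / loadable
        } catch (UnsatisfiedLinkError ignore) {
            return false; // not loaded yet: invoke the explicit loader here
        }
    }
}
```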
netty/netty/1944_5138 | netty/netty | netty/netty/1944 | netty/netty/5138 | [
"timestamp(timedelta=13774.0, similarity=0.8475972724201017)"
] | cf07f984b16d95719a9ece8c39ed6c11d8c57829 | 0f47da0eed085b0d2ed7701dd7e9a8f4f6c1ec20 | [
"Another issue is the hierarchy. We have too many implementations to allow for optimal inlining. Seems like the JIT can handle classes with 2 impls the best (when it comes to more than one impl). See also: http://www.cliffc.org/blog/2007/11/06/clone-or-not-clone/\n",
"Let me close this...\n"
] | [
"`{@link CharSequence}`\n",
"`{@link CharSequence}`\n",
"cruft or continuation from the previous line?\n",
" Complete the task associated to this TODO comment.\n"
]
return this;
}
+ @Override
+ public CharSequence getCharSequence(int index, int length, Charset charset) {
+ // TODO: We could optimize this for UTF8 and US_ASCII
+ return toString(index, length, charset);
+ }
+
+ @Override
+ public CharSequence readCharSequence(int length, Charset charset) {
+ CharSequence sequence = getCharSequence(readerIndex, length, charset);
+ readerIndex += length;
+ return sequence;
+ }
+
@Override
public ByteBuf setByte(int index, int value) {
checkIndex(index);
@@ -649,6 +663,23 @@ public ByteBuf setZero(int index, int length) {
return this;
}
+ @Override
+ public int setCharSequence(int index, CharSequence sequence, Charset charset) {
+ if (charset.equals(CharsetUtil.UTF_8)) {
+ ensureWritable(ByteBufUtil.utf8MaxBytes(sequence));
+ return ByteBufUtil.writeUtf8(this, index, sequence, sequence.length());
+ }
+ if (charset.equals(CharsetUtil.US_ASCII)) {
+ int len = sequence.length();
+ ensureWritable(len);
+ return ByteBufUtil.writeAscii(this, index, sequence, len);
+ }
+ byte[] bytes = sequence.toString().getBytes(charset);
+ ensureWritable(bytes.length);
+ setBytes(index, bytes);
+ return bytes.length;
+ }
+
@Override
public byte readByte() {
checkReadableBytes0(1);
@@ -1111,6 +1142,13 @@ public ByteBuf writeZero(int length) {
return this;
}
+ @Override
+ public int writeCharSequence(CharSequence sequence, Charset charset) {
+ int written = setCharSequence(writerIndex, sequence, charset);
+ writerIndex += written;
+ return written;
+ }
+
@Override
public ByteBuf copy() {
return copy(readerIndex, readableBytes());
diff --git a/buffer/src/main/java/io/netty/buffer/AdvancedLeakAwareByteBuf.java b/buffer/src/main/java/io/netty/buffer/AdvancedLeakAwareByteBuf.java
index ea11606fbf9..f4858933202 100644
--- a/buffer/src/main/java/io/netty/buffer/AdvancedLeakAwareByteBuf.java
+++ b/buffer/src/main/java/io/netty/buffer/AdvancedLeakAwareByteBuf.java
@@ -244,6 +244,12 @@ public int getBytes(int index, GatheringByteChannel out, int length) throws IOEx
return super.getBytes(index, out, length);
}
+ @Override
+ public CharSequence getCharSequence(int index, int length, Charset charset) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.getCharSequence(index, length, charset);
+ }
+
@Override
public ByteBuf setBoolean(int index, boolean value) {
recordLeakNonRefCountingOperation(leak);
@@ -352,6 +358,12 @@ public ByteBuf setZero(int index, int length) {
return super.setZero(index, length);
}
+ @Override
+ public int setCharSequence(int index, CharSequence sequence, Charset charset) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.setCharSequence(index, sequence, charset);
+ }
+
@Override
public boolean readBoolean() {
recordLeakNonRefCountingOperation(leak);
@@ -484,6 +496,12 @@ public int readBytes(GatheringByteChannel out, int length) throws IOException {
return super.readBytes(out, length);
}
+ @Override
+ public CharSequence readCharSequence(int length, Charset charset) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.readCharSequence(length, charset);
+ }
+
@Override
public ByteBuf skipBytes(int length) {
recordLeakNonRefCountingOperation(leak);
@@ -844,6 +862,12 @@ public ByteBuf writeLongLE(long value) {
return super.writeLongLE(value);
}
+ @Override
+ public int writeCharSequence(CharSequence sequence, Charset charset) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.writeCharSequence(sequence, charset);
+ }
+
@Override
public int getBytes(int index, FileChannel out, long position, int length) throws IOException {
recordLeakNonRefCountingOperation(leak);
diff --git a/buffer/src/main/java/io/netty/buffer/AdvancedLeakAwareCompositeByteBuf.java b/buffer/src/main/java/io/netty/buffer/AdvancedLeakAwareCompositeByteBuf.java
index 70777ae70f3..87be9e6e679 100644
--- a/buffer/src/main/java/io/netty/buffer/AdvancedLeakAwareCompositeByteBuf.java
+++ b/buffer/src/main/java/io/netty/buffer/AdvancedLeakAwareCompositeByteBuf.java
@@ -226,6 +226,12 @@ public int getBytes(int index, GatheringByteChannel out, int length) throws IOEx
return super.getBytes(index, out, length);
}
+ @Override
+ public CharSequence getCharSequence(int index, int length, Charset charset) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.getCharSequence(index, length, charset);
+ }
+
@Override
public CompositeByteBuf setBoolean(int index, boolean value) {
recordLeakNonRefCountingOperation(leak);
@@ -466,6 +472,12 @@ public int readBytes(GatheringByteChannel out, int length) throws IOException {
return super.readBytes(out, length);
}
+ @Override
+ public CharSequence readCharSequence(int length, Charset charset) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.readCharSequence(length, charset);
+ }
+
@Override
public CompositeByteBuf skipBytes(int length) {
recordLeakNonRefCountingOperation(leak);
@@ -580,6 +592,12 @@ public CompositeByteBuf writeZero(int length) {
return super.writeZero(length);
}
+ @Override
+ public int writeCharSequence(CharSequence sequence, Charset charset) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.writeCharSequence(sequence, charset);
+ }
+
@Override
public int indexOf(int fromIndex, int toIndex, byte value) {
recordLeakNonRefCountingOperation(leak);
@@ -760,6 +778,12 @@ public ByteBuf setLongLE(int index, long value) {
return super.setLongLE(index, value);
}
+ @Override
+ public int setCharSequence(int index, CharSequence sequence, Charset charset) {
+ recordLeakNonRefCountingOperation(leak);
+ return super.setCharSequence(index, sequence, charset);
+ }
+
@Override
public short readShortLE() {
recordLeakNonRefCountingOperation(leak);
diff --git a/buffer/src/main/java/io/netty/buffer/ByteBuf.java b/buffer/src/main/java/io/netty/buffer/ByteBuf.java
index b38abda2a62..91d1acfae30 100644
--- a/buffer/src/main/java/io/netty/buffer/ByteBuf.java
+++ b/buffer/src/main/java/io/netty/buffer/ByteBuf.java
@@ -913,6 +913,17 @@ public abstract class ByteBuf implements ReferenceCounted, Comparable<ByteBuf> {
*/
public abstract int getBytes(int index, FileChannel out, long position, int length) throws IOException;
+ /**
+ * Gets a {@link CharSequence} with the given length at the given index.
+ *
+ * @param length the length to read
+ * @param charset that should be used
+ * @return the sequence
+ * @throws IndexOutOfBoundsException
+ * if {@code length} is greater than {@code this.readableBytes}
+ */
+ public abstract CharSequence getCharSequence(int index, int length, Charset charset);
+
/**
* Sets the specified boolean at the specified absolute {@code index} in this
* buffer.
@@ -1247,6 +1258,19 @@ public abstract class ByteBuf implements ReferenceCounted, Comparable<ByteBuf> {
*/
public abstract ByteBuf setZero(int index, int length);
+ /**
+ * Writes the specified {@link CharSequence} at the current {@code writerIndex} and increases
+ * the {@code writerIndex} by the written bytes.
+ *
+ * @param index on which the sequence should be written
+ * @param sequence to write
+ * @param charset that should be used.
+ * @return the written number of bytes.
+ * @throws IndexOutOfBoundsException
+ * if {@code this.writableBytes} is not large enough to write the whole sequence
+ */
+ public abstract int setCharSequence(int index, CharSequence sequence, Charset charset);
+
/**
* Gets a boolean at the current {@code readerIndex} and increases
* the {@code readerIndex} by {@code 1} in this buffer.
@@ -1579,6 +1603,18 @@ public abstract class ByteBuf implements ReferenceCounted, Comparable<ByteBuf> {
*/
public abstract int readBytes(GatheringByteChannel out, int length) throws IOException;
+ /**
+ * Gets a {@link CharSequence} with the given length at the current {@code readerIndex}
+ * and increases the {@code readerIndex} by the given length.
+ *
+ * @param length the length to read
+ * @param charset that should be used
+ * @return the sequence
+ * @throws IndexOutOfBoundsException
+ * if {@code length} is greater than {@code this.readableBytes}
+ */
+ public abstract CharSequence readCharSequence(int length, Charset charset);
+
/**
* Transfers this buffer's data starting at the current {@code readerIndex}
* to the specified channel starting at the given file position.
@@ -1885,6 +1921,19 @@ public abstract class ByteBuf implements ReferenceCounted, Comparable<ByteBuf> {
*/
public abstract ByteBuf writeZero(int length);
+ /**
+ * Writes the specified {@link CharSequence} at the current {@code writerIndex} and increases
+ * the {@code writerIndex} by the written bytes.
+ * in this buffer.
+ *
+ * @param sequence to write
+ * @param charset that should be used
+ * @return the written number of bytes
+ * @throws IndexOutOfBoundsException
+ * if {@code this.writableBytes} is not large enough to write the whole sequence
+ */
+ public abstract int writeCharSequence(CharSequence sequence, Charset charset);
+
/**
* Locates the first occurrence of the specified {@code value} in this
* buffer. The search takes place from the specified {@code fromIndex}
diff --git a/buffer/src/main/java/io/netty/buffer/ByteBufUtil.java b/buffer/src/main/java/io/netty/buffer/ByteBufUtil.java
index 9588758ca23..9ad62240ee4 100644
--- a/buffer/src/main/java/io/netty/buffer/ByteBufUtil.java
+++ b/buffer/src/main/java/io/netty/buffer/ByteBufUtil.java
@@ -382,7 +382,10 @@ public static int writeUtf8(ByteBuf buf, CharSequence seq) {
for (;;) {
if (buf instanceof AbstractByteBuf) {
- return writeUtf8((AbstractByteBuf) buf, seq, len);
+ AbstractByteBuf byteBuf = (AbstractByteBuf) buf;
+ int written = writeUtf8(byteBuf, byteBuf.writerIndex, seq, len);
+ byteBuf.writerIndex += written;
+ return written;
} else if (buf instanceof WrappedByteBuf) {
// Unwrap as the wrapped buffer may be an AbstractByteBuf and so we can use fast-path.
buf = buf.unwrap();
@@ -395,9 +398,8 @@ public static int writeUtf8(ByteBuf buf, CharSequence seq) {
}
// Fast-Path implementation
- private static int writeUtf8(AbstractByteBuf buffer, CharSequence seq, int len) {
- int oldWriterIndex = buffer.writerIndex;
- int writerIndex = oldWriterIndex;
+ static int writeUtf8(AbstractByteBuf buffer, int writerIndex, CharSequence seq, int len) {
+ int oldWriterIndex = writerIndex;
// We can use the _set methods as these not need to do any index checks and reference checks.
// This is possible as we called ensureWritable(...) before.
@@ -440,8 +442,6 @@ private static int writeUtf8(AbstractByteBuf buffer, CharSequence seq, int len)
buffer._setByte(writerIndex++, (byte) (0x80 | (c & 0x3f)));
}
}
- // update the writerIndex without any extra checks for performance reasons
- buffer.writerIndex = writerIndex;
return writerIndex - oldWriterIndex;
}
@@ -483,8 +483,10 @@ public static int writeAscii(ByteBuf buf, CharSequence seq) {
} else {
for (;;) {
if (buf instanceof AbstractByteBuf) {
- writeAscii((AbstractByteBuf) buf, seq, len);
- break;
+ AbstractByteBuf byteBuf = (AbstractByteBuf) buf;
+ int written = writeAscii(byteBuf, byteBuf.writerIndex, seq, len);
+ byteBuf.writerIndex += written;
+ return written;
} else if (buf instanceof WrappedByteBuf) {
// Unwrap as the wrapped buffer may be an AbstractByteBuf and so we can use fast-path.
buf = buf.unwrap();
@@ -497,16 +499,14 @@ public static int writeAscii(ByteBuf buf, CharSequence seq) {
}
// Fast-Path implementation
- private static void writeAscii(AbstractByteBuf buffer, CharSequence seq, int len) {
- int writerIndex = buffer.writerIndex;
+ static int writeAscii(AbstractByteBuf buffer, int writerIndex, CharSequence seq, int len) {
// We can use the _set methods as these not need to do any index checks and reference checks.
// This is possible as we called ensureWritable(...) before.
for (int i = 0; i < len; i++) {
buffer._setByte(writerIndex++, (byte) seq.charAt(i));
}
- // update the writerIndex without any extra checks for performance reasons
- buffer.writerIndex = writerIndex;
+ return len;
}
/**
diff --git a/buffer/src/main/java/io/netty/buffer/EmptyByteBuf.java b/buffer/src/main/java/io/netty/buffer/EmptyByteBuf.java
index 52cfedddac3..674e9e56ae0 100644
--- a/buffer/src/main/java/io/netty/buffer/EmptyByteBuf.java
+++ b/buffer/src/main/java/io/netty/buffer/EmptyByteBuf.java
@@ -391,6 +391,12 @@ public int getBytes(int index, FileChannel out, long position, int length) {
return 0;
}
+ @Override
+ public CharSequence getCharSequence(int index, int length, Charset charset) {
+ checkIndex(index, length);
+ return null;
+ }
+
@Override
public ByteBuf setBoolean(int index, boolean value) {
throw new IndexOutOfBoundsException();
@@ -509,6 +515,11 @@ public ByteBuf setZero(int index, int length) {
return checkIndex(index, length);
}
+ @Override
+ public int setCharSequence(int index, CharSequence sequence, Charset charset) {
+ throw new IndexOutOfBoundsException();
+ }
+
@Override
public boolean readBoolean() {
throw new IndexOutOfBoundsException();
@@ -666,6 +677,12 @@ public int readBytes(FileChannel out, long position, int length) {
return 0;
}
+ @Override
+ public CharSequence readCharSequence(int length, Charset charset) {
+ checkLength(length);
+ return null;
+ }
+
@Override
public ByteBuf skipBytes(int length) {
return checkLength(length);
@@ -789,6 +806,11 @@ public ByteBuf writeZero(int length) {
return checkLength(length);
}
+ @Override
+ public int writeCharSequence(CharSequence sequence, Charset charset) {
+ throw new IndexOutOfBoundsException();
+ }
+
@Override
public int indexOf(int fromIndex, int toIndex, byte value) {
checkIndex(fromIndex);
diff --git a/buffer/src/main/java/io/netty/buffer/SlicedByteBuf.java b/buffer/src/main/java/io/netty/buffer/SlicedByteBuf.java
index a10ae864a21..64c48e2123b 100644
--- a/buffer/src/main/java/io/netty/buffer/SlicedByteBuf.java
+++ b/buffer/src/main/java/io/netty/buffer/SlicedByteBuf.java
@@ -16,6 +16,7 @@
package io.netty.buffer;
import io.netty.util.ByteProcessor;
+import io.netty.util.CharsetUtil;
import java.io.IOException;
import java.io.InputStream;
@@ -25,6 +26,7 @@
import java.nio.channels.FileChannel;
import java.nio.channels.GatheringByteChannel;
import java.nio.channels.ScatteringByteChannel;
+import java.nio.charset.Charset;
/**
* A derived buffer which exposes its parent's sub-region only. It is
@@ -264,6 +266,12 @@ public ByteBuf setByte(int index, int value) {
return this;
}
+ @Override
+ public CharSequence getCharSequence(int index, int length, Charset charset) {
+ checkIndex0(index, length);
+ return buffer.getCharSequence(idx(index), length, charset);
+ }
+
@Override
protected void _setByte(int index, int value) {
unwrap().setByte(idx(index), value);
@@ -386,6 +394,23 @@ public ByteBuf setBytes(int index, ByteBuffer src) {
return this;
}
+ @Override
+ public int setCharSequence(int index, CharSequence sequence, Charset charset) {
+ if (charset.equals(CharsetUtil.UTF_8)) {
+ checkIndex0(index, ByteBufUtil.utf8MaxBytes(sequence));
+ return ByteBufUtil.writeUtf8(this, idx(index), sequence, sequence.length());
+ }
+ if (charset.equals(CharsetUtil.US_ASCII)) {
+ int len = sequence.length();
+ checkIndex0(index, len);
+ return ByteBufUtil.writeAscii(this, idx(index), sequence, len);
+ }
+ byte[] bytes = sequence.toString().getBytes(charset);
+ checkIndex0(index, bytes.length);
+ buffer.setBytes(idx(index), bytes);
+ return bytes.length;
+ }
+
@Override
public ByteBuf getBytes(int index, OutputStream out, int length) throws IOException {
checkIndex0(index, length);
diff --git a/buffer/src/main/java/io/netty/buffer/SwappedByteBuf.java b/buffer/src/main/java/io/netty/buffer/SwappedByteBuf.java
index 72fec39fca4..3e9b075d21c 100644
--- a/buffer/src/main/java/io/netty/buffer/SwappedByteBuf.java
+++ b/buffer/src/main/java/io/netty/buffer/SwappedByteBuf.java
@@ -376,6 +376,11 @@ public int getBytes(int index, FileChannel out, long position, int length) throw
return buf.getBytes(index, out, position, length);
}
+ @Override
+ public CharSequence getCharSequence(int index, int length, Charset charset) {
+ return buf.getCharSequence(index, length, charset);
+ }
+
@Override
public ByteBuf setBoolean(int index, boolean value) {
buf.setBoolean(index, value);
@@ -511,6 +516,11 @@ public ByteBuf setZero(int index, int length) {
return this;
}
+ @Override
+ public int setCharSequence(int index, CharSequence sequence, Charset charset) {
+ return buf.setCharSequence(index, sequence, charset);
+ }
+
@Override
public boolean readBoolean() {
return buf.readBoolean();
@@ -673,6 +683,11 @@ public int readBytes(FileChannel out, long position, int length) throws IOExcept
return buf.readBytes(out, position, length);
}
+ @Override
+ public CharSequence readCharSequence(int length, Charset charset) {
+ return buf.readCharSequence(length, charset);
+ }
+
@Override
public ByteBuf skipBytes(int length) {
buf.skipBytes(length);
@@ -814,6 +829,11 @@ public ByteBuf writeZero(int length) {
return this;
}
+ @Override
+ public int writeCharSequence(CharSequence sequence, Charset charset) {
+ return buf.writeCharSequence(sequence, charset);
+ }
+
@Override
public int indexOf(int fromIndex, int toIndex, byte value) {
return buf.indexOf(fromIndex, toIndex, value);
diff --git a/buffer/src/main/java/io/netty/buffer/WrappedByteBuf.java b/buffer/src/main/java/io/netty/buffer/WrappedByteBuf.java
index 954d5e6ff63..0d0bec85d7c 100644
--- a/buffer/src/main/java/io/netty/buffer/WrappedByteBuf.java
+++ b/buffer/src/main/java/io/netty/buffer/WrappedByteBuf.java
@@ -366,6 +366,11 @@ public int getBytes(int index, FileChannel out, long position, int length) throw
return buf.getBytes(index, out, position, length);
}
+ @Override
+ public CharSequence getCharSequence(int index, int length, Charset charset) {
+ return buf.getCharSequence(index, length, charset);
+ }
+
@Override
public ByteBuf setBoolean(int index, boolean value) {
buf.setBoolean(index, value);
@@ -501,6 +506,11 @@ public ByteBuf setZero(int index, int length) {
return this;
}
+ @Override
+ public int setCharSequence(int index, CharSequence sequence, Charset charset) {
+ return buf.setCharSequence(index, sequence, charset);
+ }
+
@Override
public boolean readBoolean() {
return buf.readBoolean();
@@ -663,6 +673,11 @@ public int readBytes(FileChannel out, long position, int length) throws IOExcept
return buf.readBytes(out, position, length);
}
+ @Override
+ public CharSequence readCharSequence(int length, Charset charset) {
+ return buf.readCharSequence(length, charset);
+ }
+
@Override
public ByteBuf skipBytes(int length) {
buf.skipBytes(length);
@@ -804,6 +819,11 @@ public ByteBuf writeZero(int length) {
return this;
}
+ @Override
+ public int writeCharSequence(CharSequence sequence, Charset charset) {
+ return buf.writeCharSequence(sequence, charset);
+ }
+
@Override
public int indexOf(int fromIndex, int toIndex, byte value) {
return buf.indexOf(fromIndex, toIndex, value);
diff --git a/codec/src/main/java/io/netty/handler/codec/ReplayingDecoderByteBuf.java b/codec/src/main/java/io/netty/handler/codec/ReplayingDecoderByteBuf.java
index e533fcc2b9f..a9c69a924bc 100644
--- a/codec/src/main/java/io/netty/handler/codec/ReplayingDecoderByteBuf.java
+++ b/codec/src/main/java/io/netty/handler/codec/ReplayingDecoderByteBuf.java
@@ -359,6 +359,12 @@ public double getDouble(int index) {
return buffer.getDouble(index);
}
+ @Override
+ public CharSequence getCharSequence(int index, int length, Charset charset) {
+ checkIndex(index, length);
+ return buffer.getCharSequence(index, length, charset);
+ }
+
@Override
public int hashCode() {
reject();
@@ -712,6 +718,12 @@ public double readDouble() {
return buffer.readDouble();
}
+ @Override
+ public CharSequence readCharSequence(int length, Charset charset) {
+ checkReadableBytes(length);
+ return buffer.readCharSequence(length, charset);
+ }
+
@Override
public ByteBuf resetReaderIndex() {
buffer.resetReaderIndex();
@@ -1114,6 +1126,18 @@ public ByteBuf writeDouble(double value) {
return this;
}
+ @Override
+ public int setCharSequence(int index, CharSequence sequence, Charset charset) {
+ reject();
+ return -1;
+ }
+
+ @Override
+ public int writeCharSequence(CharSequence sequence, Charset charset) {
+ reject();
+ return -1;
+ }
+
private void checkIndex(int index, int length) {
if (index + length > buffer.writerIndex()) {
throw REPLAY;
| null | train | train | 2016-05-01T20:30:13 | 2013-10-22T12:29:55Z | normanmaurer | val |
netty/netty/5161_5166 | netty/netty | netty/netty/5161 | netty/netty/5166 | [
"timestamp(timedelta=25.0, similarity=0.911192041254265)"
] | 38d05abd84c2971c68230289aad14066218a6141 | 13e69dd23c6c8817f8124d399e26745212b696aa | [
"thanks for reporting @alugowski \n"
] | [] | 2016-04-20T17:51:27Z | [
"feature"
] | EpollEventLoopGroup missing Executor-based constructor | EpollEventLoop is sold as a drop-in replacement for NioEventLoop (at least on Linux), but EpollEventLoopGroup only has the ThreadFactory-based constructor, not the Executor-based constructor that NioEventLoopGroup also includes.
| [
"transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollEventLoopGroup.java"
] | [
"transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollEventLoopGroup.java"
] | [] | diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollEventLoopGroup.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollEventLoopGroup.java
index c151806c66b..6a7b3162bbf 100644
--- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollEventLoopGroup.java
+++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollEventLoopGroup.java
@@ -15,8 +15,8 @@
*/
package io.netty.channel.epoll;
-import io.netty.channel.EventLoop;
import io.netty.channel.DefaultSelectStrategyFactory;
+import io.netty.channel.EventLoop;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.MultithreadEventLoopGroup;
import io.netty.channel.SelectStrategyFactory;
@@ -50,7 +50,7 @@ public EpollEventLoopGroup(int nThreads) {
*/
@SuppressWarnings("deprecation")
public EpollEventLoopGroup(int nThreads, SelectStrategyFactory selectStrategyFactory) {
- this(nThreads, null, selectStrategyFactory);
+ this(nThreads, (ThreadFactory) null, selectStrategyFactory);
}
/**
@@ -61,6 +61,10 @@ public EpollEventLoopGroup(int nThreads, ThreadFactory threadFactory) {
this(nThreads, threadFactory, 0);
}
+ public EpollEventLoopGroup(int nThreads, Executor executor) {
+ this(nThreads, executor, DefaultSelectStrategyFactory.INSTANCE);
+ }
+
/**
* Create a new instance using the specified number of threads and the given {@link ThreadFactory}.
*/
@@ -93,6 +97,10 @@ public EpollEventLoopGroup(int nThreads, ThreadFactory threadFactory, int maxEve
super(nThreads, threadFactory, maxEventsAtOnce, selectStrategyFactory);
}
+ public EpollEventLoopGroup(int nThreads, Executor executor, SelectStrategyFactory selectStrategyFactory) {
+ super(nThreads, executor, 0, selectStrategyFactory);
+ }
+
/**
* Sets the percentage of the desired amount of time spent for I/O in the child event loops. The default value is
* {@code 50}, which means the event loop will try to spend the same amount of time for I/O as for non-I/O tasks.
| null | val | train | 2016-04-14T11:58:47 | 2016-04-18T23:03:20Z | alugowski | val |
netty/netty/5182_5189 | netty/netty | netty/netty/5182 | netty/netty/5189 | [
"timestamp(timedelta=16.0, similarity=0.8945548512951417)"
] | 9ce84dcb21168c45fe6714d3673c9253f35e5cd2 | f167b4ae22b1e6f81008a56425eeae3f4cca33b8 | [
"Should be pretty easy to contribute a fix :)\n",
"Included the unit test in the pr and added the fix for the issue\nhttps://github.com/netty/netty/pull/5189\nI will apply for clearance to participate in the project to be able to sign the CLA\n",
"Fixed by #5189\n"
] | [] | 2016-04-29T19:09:20Z | [
"defect"
] | HostsFileParser tries to resolve hostnames as case-sensitive | Netty version: 4.1.0.CR7
Context:
I am implementing a mail client in vert.x, which uses netty as network library, when trying to connect to LOCALHOST in upper case, the dns client gives an UnknownHostException.
The regular behaviour of /etc/hosts resolution should be to match hostnames case-insensitively
Steps to reproduce:
run the following unit test:
https://gist.github.com/alexlehm/e921fca3c368617be29cc7432b376927
the test using "localhost" resolves, while the test using "LOCALHOST" fails
java -version
java version "1.8.0_45"
Java(TM) SE Runtime Environment (build 1.8.0_45-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)
Operating system: Windows 7 64-bit
I would guess the error is that the entries Map has to use hostname as case folded key here https://github.com/netty/netty/blob/4.1/resolver/src/main/java/io/netty/resolver/HostsFileParser.java#L157 and here https://github.com/netty/netty/blob/4.1/resolver/src/main/java/io/netty/resolver/DefaultHostsFileEntriesResolver.java#L30
| [
"resolver/src/main/java/io/netty/resolver/DefaultHostsFileEntriesResolver.java",
"resolver/src/main/java/io/netty/resolver/HostsFileParser.java"
] | [
"resolver/src/main/java/io/netty/resolver/DefaultHostsFileEntriesResolver.java",
"resolver/src/main/java/io/netty/resolver/HostsFileParser.java"
] | [
"resolver/src/test/java/io/netty/resolver/DefaultHostsFileEntriesResolverTest.java",
"resolver/src/test/java/io/netty/resolver/HostsFileParserTest.java"
] | diff --git a/resolver/src/main/java/io/netty/resolver/DefaultHostsFileEntriesResolver.java b/resolver/src/main/java/io/netty/resolver/DefaultHostsFileEntriesResolver.java
index 3d19042e5cc..dbf5fb9ca8e 100644
--- a/resolver/src/main/java/io/netty/resolver/DefaultHostsFileEntriesResolver.java
+++ b/resolver/src/main/java/io/netty/resolver/DefaultHostsFileEntriesResolver.java
@@ -16,6 +16,7 @@
package io.netty.resolver;
import java.net.InetAddress;
+import java.util.Locale;
import java.util.Map;
/**
@@ -27,6 +28,6 @@ public final class DefaultHostsFileEntriesResolver implements HostsFileEntriesRe
@Override
public InetAddress address(String inetHost) {
- return entries.get(inetHost);
+ return entries.get(inetHost.toLowerCase(Locale.ENGLISH));
}
}
diff --git a/resolver/src/main/java/io/netty/resolver/HostsFileParser.java b/resolver/src/main/java/io/netty/resolver/HostsFileParser.java
index 0e8b9163c71..a17a0f7068a 100644
--- a/resolver/src/main/java/io/netty/resolver/HostsFileParser.java
+++ b/resolver/src/main/java/io/netty/resolver/HostsFileParser.java
@@ -29,6 +29,7 @@
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
+import java.util.Locale;
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Pattern;
@@ -151,10 +152,11 @@ public static Map<String, InetAddress> parse(Reader reader) throws IOException {
// loop over hostname and aliases
for (int i = 1; i < lineParts.size(); i ++) {
String hostname = lineParts.get(i);
- if (!entries.containsKey(hostname)) {
+ String hostnameLower = hostname.toLowerCase(Locale.ENGLISH);
+ if (!entries.containsKey(hostnameLower)) {
// trying to map a host to multiple IPs is wrong
// only the first entry is honored
- entries.put(hostname, InetAddress.getByAddress(hostname, ipBytes));
+ entries.put(hostnameLower, InetAddress.getByAddress(hostname, ipBytes));
}
}
}
| diff --git a/resolver/src/test/java/io/netty/resolver/DefaultHostsFileEntriesResolverTest.java b/resolver/src/test/java/io/netty/resolver/DefaultHostsFileEntriesResolverTest.java
new file mode 100644
index 00000000000..7fac11b2d5a
--- /dev/null
+++ b/resolver/src/test/java/io/netty/resolver/DefaultHostsFileEntriesResolverTest.java
@@ -0,0 +1,34 @@
+/*
+ * Copyright 2016 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License,
+ * version 2.0 (the "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+/**
+ * show issue https://github.com/netty/netty/issues/5182
+ * HostsFileParser tries to resolve hostnames as case-sensitive
+ */
+package io.netty.resolver;
+
+import org.junit.Test;
+
+import static org.junit.Assert.assertNotNull;
+
+public class DefaultHostsFileEntriesResolverTest {
+
+ @Test
+ public void testLocalhost() {
+ DefaultHostsFileEntriesResolver resolver = new DefaultHostsFileEntriesResolver();
+ assertNotNull("localhost doesn't resolve", resolver.address("localhost"));
+ assertNotNull("LOCALHOST doesn't resolve", resolver.address("LOCALHOST"));
+ }
+}
diff --git a/resolver/src/test/java/io/netty/resolver/HostsFileParserTest.java b/resolver/src/test/java/io/netty/resolver/HostsFileParserTest.java
index d3261338957..bc059107b90 100644
--- a/resolver/src/test/java/io/netty/resolver/HostsFileParserTest.java
+++ b/resolver/src/test/java/io/netty/resolver/HostsFileParserTest.java
@@ -38,16 +38,20 @@ public void testParse() throws IOException {
.append("192.168.0.2 host3 #comment").append("\n") // comment after hostname
.append("192.168.0.3 host4 host5 host6").append("\n") // multiple aliases
.append("192.168.0.4 host4").append("\n") // host mapped to a second address, must be ignored
+ .append("192.168.0.5 HOST7").append("\n") // uppercase host, should match lowercase host
+ .append("192.168.0.6 host7").append("\n") // should be ignored since we have the uppercase host already
.toString();
Map<String, InetAddress> entries = HostsFileParser.parse(new BufferedReader(new StringReader(hostsString)));
- assertEquals("Expected 6 entries", 6, entries.size());
+ assertEquals("Expected 7 entries", 7, entries.size());
assertEquals("127.0.0.1", entries.get("host1").getHostAddress());
assertEquals("192.168.0.1", entries.get("host2").getHostAddress());
assertEquals("192.168.0.2", entries.get("host3").getHostAddress());
assertEquals("192.168.0.3", entries.get("host4").getHostAddress());
assertEquals("192.168.0.3", entries.get("host5").getHostAddress());
assertEquals("192.168.0.3", entries.get("host6").getHostAddress());
+ assertNotNull("uppercase host doesn't resolve", entries.get("host7"));
+ assertEquals("192.168.0.5", entries.get("host7").getHostAddress());
}
}
| train | train | 2016-05-09T22:24:46 | 2016-04-27T22:48:35Z | alexlehm | val |
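The gold patch in the last record (netty/netty/5182_5189) fixes case-sensitive hosts-file lookups by normalizing hostname keys with `toLowerCase(Locale.ENGLISH)` on both the write path (`HostsFileParser.parse`) and the read path (`DefaultHostsFileEntriesResolver.address`). The following is a minimal standalone sketch of that lookup strategy; the class name is illustrative and not part of Netty's API:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

/** Illustrative sketch of case-insensitive hosts-file lookups (not Netty's actual class). */
public final class CaseInsensitiveHostsResolver {
    private final Map<String, InetAddress> entries = new HashMap<>();

    /** Store under a lower-cased key; keep the original spelling in the InetAddress hostname. */
    public void put(String hostname, byte[] ipBytes) {
        String key = hostname.toLowerCase(Locale.ENGLISH);
        // First entry wins, mirroring the hosts-file semantics in the patch:
        // a later mapping of the same host (in any case) is ignored.
        if (!entries.containsKey(key)) {
            try {
                entries.put(key, InetAddress.getByAddress(hostname, ipBytes));
            } catch (UnknownHostException e) {
                // Only thrown for an illegal address length (not 4 or 16 bytes).
                throw new IllegalArgumentException(e);
            }
        }
    }

    /** Look up with the same normalization, so "LOCALHOST" and "localhost" both resolve. */
    public InetAddress address(String hostname) {
        return entries.get(hostname.toLowerCase(Locale.ENGLISH));
    }

    public static void main(String[] args) {
        CaseInsensitiveHostsResolver resolver = new CaseInsensitiveHostsResolver();
        resolver.put("localhost", new byte[] {127, 0, 0, 1});
        System.out.println(resolver.address("LOCALHOST").getHostAddress()); // 127.0.0.1
    }
}
```

Because the key is normalized once at insert time, lookups stay O(1) and the resolver does not need a case-insensitive map implementation.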