instance_id stringlengths 17 39 | repo stringclasses 8 values | issue_id stringlengths 14 34 | pr_id stringlengths 14 34 | linking_methods listlengths 1 3 | base_commit stringlengths 40 40 | merge_commit stringlengths 0 40 ⌀ | hints_text listlengths 0 106 | resolved_comments listlengths 0 119 | created_at timestamp[ns, tz=UTC] | labeled_as listlengths 0 7 | problem_title stringlengths 7 174 | problem_statement stringlengths 0 55.4k | gold_files listlengths 0 10 | gold_files_postpatch listlengths 1 10 | test_files listlengths 0 60 | gold_patch stringlengths 220 5.83M | test_patch stringlengths 386 194k ⌀ | split_random stringclasses 3 values | split_time stringclasses 3 values | issue_start_time timestamp[ns] | issue_created_at timestamp[ns, tz=UTC] | issue_by_user stringlengths 3 21 | split_repo stringclasses 3 values |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
netty/netty/2275_2276 | netty/netty | netty/netty/2275 | netty/netty/2276 | [
"timestamp(timedelta=83.0, similarity=0.9042778390261851)"
] | d3ffa1b02b2ba112546070c16d8786f514d6be51 | 61847d860afd98a840c340a421d1b007d2835b0b | [
"@ngocdaothanh would you mind submitting a PR ?\n",
"Sure! I wanted to confirm if my guess was right.\n",
"I've sent pull requests for branch 4.0, 4.1, and master.\n"
] | [] | 2014-02-28T21:55:52Z | [] | %s WebSocket version %s server handshake | 1.
In https://github.com/netty/netty/blob/master/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker.java
``` java
logger.debug("%s WebSocket version %s server handshake", channel, version());
```
I guess it should be:
``` java
logger.debug("{} WebSocket version {} server handshake", channel, version());
```
2.
In https://github.com/netty/netty/blob/master/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker07.java
``` java
logger.debug("WebSocket version 07 server handshake key: {}, response: %s.", key, accept);
```
I guess it should be:
``` java
logger.debug("WebSocket version 07 server handshake key: {}, response: {}.", key, accept);
```
| [
"codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker.java",
"codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker07.java"
] | [
"codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker.java",
"codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker07.java"
] | [] | diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker.java b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker.java
index f94b4f79f3e..c2ca118d351 100644
--- a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker.java
+++ b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker.java
@@ -156,7 +156,7 @@ public final ChannelFuture handshake(Channel channel, FullHttpRequest req,
HttpHeaders responseHeaders, final ChannelPromise promise) {
if (logger.isDebugEnabled()) {
- logger.debug("%s WebSocket version %s server handshake", channel, version());
+ logger.debug("{} WebSocket version {} server handshake", channel, version());
}
FullHttpResponse response = newHandshakeResponse(req, responseHeaders);
ChannelPipeline p = channel.pipeline();
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker07.java b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker07.java
index 329f823bd4e..52f461eaccc 100644
--- a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker07.java
+++ b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker07.java
@@ -111,7 +111,7 @@ protected FullHttpResponse newHandshakeResponse(FullHttpRequest req, HttpHeaders
String accept = WebSocketUtil.base64(sha1);
if (logger.isDebugEnabled()) {
- logger.debug("WebSocket version 07 server handshake key: {}, response: %s.", key, accept);
+ logger.debug("WebSocket version 07 server handshake key: {}, response: {}.", key, accept);
}
res.headers().add(Names.UPGRADE, WEBSOCKET.toLowerCase());
| null | train | train | 2014-02-27T13:28:37 | 2014-02-28T11:14:03Z | ngocdaothanh | val |
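The row above fixes printf-style `%s` specifiers that were passed to an SLF4J-style logger, which only substitutes `{}` placeholders. A minimal standalone sketch (not Netty code; the naive `format` helper below is invented purely for illustration) shows why the original format strings logged their specifiers verbatim:

```java
// Naive re-implementation of SLF4J-style "{}" substitution, for illustration only.
public class PlaceholderDemo {
    static String format(String pattern, Object... args) {
        StringBuilder sb = new StringBuilder();
        int argIndex = 0;
        int i = 0;
        while (i < pattern.length()) {
            int j = pattern.indexOf("{}", i);
            if (j < 0 || argIndex >= args.length) {
                sb.append(pattern.substring(i));
                break;
            }
            sb.append(pattern, i, j).append(args[argIndex++]);
            i = j + 2;
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // "%s" is not a placeholder for this kind of logger -- the bug in the issue above.
        System.out.println(format("%s WebSocket version %s server handshake", "ch", 13));
        // "{}" is substituted as intended by the fix.
        System.out.println(format("{} WebSocket version {} server handshake", "ch", 13));
    }
}
```

The first call prints the specifiers untouched, which is exactly the symptom the issue reports.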
netty/netty/2275_2277 | netty/netty | netty/netty/2275 | netty/netty/2277 | [
"timestamp(timedelta=81.0, similarity=0.9042778390261851)"
] | bdedde1294590039e5ab57196053103b2024ca9f | c75f77e528881a3937e7ed5e33f1a51bdc5e0e33 | [
"@ngocdaothanh would you mind submitting a PR ?\n",
"Sure! I wanted to confirm if my guess was right.\n",
"I've sent pull requests for branch 4.0, 4.1, and master.\n"
] | [] | 2014-02-28T21:56:26Z | [] | %s WebSocket version %s server handshake | 1.
In https://github.com/netty/netty/blob/master/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker.java
``` java
logger.debug("%s WebSocket version %s server handshake", channel, version());
```
I guess it should be:
``` java
logger.debug("{} WebSocket version {} server handshake", channel, version());
```
2.
In https://github.com/netty/netty/blob/master/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker07.java
``` java
logger.debug("WebSocket version 07 server handshake key: {}, response: %s.", key, accept);
```
I guess it should be:
``` java
logger.debug("WebSocket version 07 server handshake key: {}, response: {}.", key, accept);
```
| [
"codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker.java",
"codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker07.java"
] | [
"codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker.java",
"codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker07.java"
] | [] | diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker.java b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker.java
index f94b4f79f3e..c2ca118d351 100644
--- a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker.java
+++ b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker.java
@@ -156,7 +156,7 @@ public final ChannelFuture handshake(Channel channel, FullHttpRequest req,
HttpHeaders responseHeaders, final ChannelPromise promise) {
if (logger.isDebugEnabled()) {
- logger.debug("%s WebSocket version %s server handshake", channel, version());
+ logger.debug("{} WebSocket version {} server handshake", channel, version());
}
FullHttpResponse response = newHandshakeResponse(req, responseHeaders);
ChannelPipeline p = channel.pipeline();
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker07.java b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker07.java
index 329f823bd4e..52f461eaccc 100644
--- a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker07.java
+++ b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker07.java
@@ -111,7 +111,7 @@ protected FullHttpResponse newHandshakeResponse(FullHttpRequest req, HttpHeaders
String accept = WebSocketUtil.base64(sha1);
if (logger.isDebugEnabled()) {
- logger.debug("WebSocket version 07 server handshake key: {}, response: %s.", key, accept);
+ logger.debug("WebSocket version 07 server handshake key: {}, response: {}.", key, accept);
}
res.headers().add(Names.UPGRADE, WEBSOCKET.toLowerCase());
| null | test | train | 2014-02-27T11:44:06 | 2014-02-28T11:14:03Z | ngocdaothanh | val |
netty/netty/2275_2278 | netty/netty | netty/netty/2275 | netty/netty/2278 | [
"timestamp(timedelta=44.0, similarity=0.9042778390261851)"
] | 6efac6179e1e13e6caba2cec6109ce27862efc9a | 83c8389d787e7478d725afa8791976e5fa2c4e42 | [
"@ngocdaothanh would you mind submitting a PR ?\n",
"Sure! I wanted to confirm if my guess was right.\n",
"I've sent pull requests for branch 4.0, 4.1, and master.\n"
] | [] | 2014-02-28T21:57:11Z | [] | %s WebSocket version %s server handshake | 1.
In https://github.com/netty/netty/blob/master/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker.java
``` java
logger.debug("%s WebSocket version %s server handshake", channel, version());
```
I guess it should be:
``` java
logger.debug("{} WebSocket version {} server handshake", channel, version());
```
2.
In https://github.com/netty/netty/blob/master/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker07.java
``` java
logger.debug("WebSocket version 07 server handshake key: {}, response: %s.", key, accept);
```
I guess it should be:
``` java
logger.debug("WebSocket version 07 server handshake key: {}, response: {}.", key, accept);
```
| [
"codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker.java",
"codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker07.java"
] | [
"codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker.java",
"codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker07.java"
] | [] | diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker.java b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker.java
index f94b4f79f3e..c2ca118d351 100644
--- a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker.java
+++ b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker.java
@@ -156,7 +156,7 @@ public final ChannelFuture handshake(Channel channel, FullHttpRequest req,
HttpHeaders responseHeaders, final ChannelPromise promise) {
if (logger.isDebugEnabled()) {
- logger.debug("%s WebSocket version %s server handshake", channel, version());
+ logger.debug("{} WebSocket version {} server handshake", channel, version());
}
FullHttpResponse response = newHandshakeResponse(req, responseHeaders);
ChannelPipeline p = channel.pipeline();
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker07.java b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker07.java
index c2ede4a11f9..ba751f09f3e 100644
--- a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker07.java
+++ b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketServerHandshaker07.java
@@ -113,7 +113,7 @@ protected FullHttpResponse newHandshakeResponse(FullHttpRequest req, HttpHeaders
String accept = WebSocketUtil.base64(sha1);
if (logger.isDebugEnabled()) {
- logger.debug("WebSocket version 07 server handshake key: {}, response: %s.", key, accept);
+ logger.debug("WebSocket version 07 server handshake key: {}, response: {}.", key, accept);
}
res.headers().add(Names.UPGRADE, WEBSOCKET);
| null | train | train | 2014-02-27T13:56:15 | 2014-02-28T11:14:03Z | ngocdaothanh | val |
netty/netty/2282_2299 | netty/netty | netty/netty/2282 | netty/netty/2299 | [
"timestamp(timedelta=51.0, similarity=0.8715944770161461)"
] | 3c7ef6ffce51021090ac3ab5685a1968d147136a | e2ddc61b36fa65feddb6adb2c67c20bfb9c79299 | [
"I was interested in the performance of this, so I ran a little JMH benchmark (disclaimer: I don't know the first thing about benchmarking) on my laptop (too lazy to connect to Azure :)). Well, this is what I got https://gist.github.com/bucjac/9401713 and here is the code https://gist.github.com/bucjac/9401700.\n\n... | [] | 2014-03-10T19:03:41Z | [
"improvement"
] | MultithreadEventExecutorGroup.next() should use bitwise operations when possible | At the moment MultithreadEventExecutorGroup.next() uses a modulo operation to calculate the index of the next EventExecutor to use. This is slow compared to bitwise operations, which can be used if the array length is a power of two.
So if the array length is a power of two we should use bitwise operations; otherwise fall back to modulo for best performance.
| [
"common/src/main/java/io/netty/util/concurrent/MultithreadEventExecutorGroup.java"
] | [
"common/src/main/java/io/netty/util/concurrent/MultithreadEventExecutorGroup.java"
] | [] | diff --git a/common/src/main/java/io/netty/util/concurrent/MultithreadEventExecutorGroup.java b/common/src/main/java/io/netty/util/concurrent/MultithreadEventExecutorGroup.java
index 38efd4912ed..449598442a3 100644
--- a/common/src/main/java/io/netty/util/concurrent/MultithreadEventExecutorGroup.java
+++ b/common/src/main/java/io/netty/util/concurrent/MultithreadEventExecutorGroup.java
@@ -33,6 +33,7 @@ public abstract class MultithreadEventExecutorGroup extends AbstractEventExecuto
private final AtomicInteger childIndex = new AtomicInteger();
private final AtomicInteger terminatedChildren = new AtomicInteger();
private final Promise<?> terminationFuture = new DefaultPromise(GlobalEventExecutor.INSTANCE);
+ private final EventExecutorChooser chooser;
/**
* Create a new instance.
@@ -51,6 +52,12 @@ protected MultithreadEventExecutorGroup(int nThreads, ThreadFactory threadFactor
}
children = new SingleThreadEventExecutor[nThreads];
+ if (isPowerOfTwo(children.length)) {
+ chooser = new PowerOfTwoEventExecutorChooser();
+ } else {
+ chooser = new GenericEventExecutorChooser();
+ }
+
for (int i = 0; i < nThreads; i ++) {
boolean success = false;
try {
@@ -100,7 +107,7 @@ protected ThreadFactory newDefaultThreadFactory() {
@Override
public EventExecutor next() {
- return children[Math.abs(childIndex.getAndIncrement() % children.length)];
+ return chooser.next();
}
@Override
@@ -201,4 +208,26 @@ public boolean awaitTermination(long timeout, TimeUnit unit)
}
return isTerminated();
}
+
+ private static boolean isPowerOfTwo(int val) {
+ return (val & -val) == val;
+ }
+
+ private interface EventExecutorChooser {
+ EventExecutor next();
+ }
+
+ private final class PowerOfTwoEventExecutorChooser implements EventExecutorChooser {
+ @Override
+ public EventExecutor next() {
+ return children[childIndex.getAndIncrement() & children.length - 1];
+ }
+ }
+
+ private final class GenericEventExecutorChooser implements EventExecutorChooser {
+ @Override
+ public EventExecutor next() {
+ return children[Math.abs(childIndex.getAndIncrement() % children.length)];
+ }
+ }
}
| null | test | train | 2014-03-10T06:49:00 | 2014-03-03T11:52:31Z | normanmaurer | val |
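The patch above picks between two chooser strategies at construction time. An illustrative standalone sketch (not Netty code) of the underlying index arithmetic:

```java
// When the executor array length is a power of two, "counter & (length - 1)"
// replaces the slower "Math.abs(counter % length)" used in the generic case.
public class ChooserDemo {
    // Same trick as the patch: a power of two has exactly one bit set.
    static boolean isPowerOfTwo(int val) {
        return (val & -val) == val;
    }

    static int nextIndex(int counter, int length) {
        return isPowerOfTwo(length)
                ? counter & length - 1
                : Math.abs(counter % length);
    }

    public static void main(String[] args) {
        // For a power-of-two length the mask agrees with modulo on non-negative
        // counters, and keeps wrapping correctly once the counter goes negative.
        for (int c = 0; c < 32; c++) {
            if (nextIndex(c, 8) != c % 8) {
                throw new AssertionError("mismatch at " + c);
            }
        }
        System.out.println(nextIndex(-2, 8)); // masking a negative counter still lands in [0, 8)
        System.out.println(nextIndex(10, 6)); // non-power-of-two length falls back to modulo
    }
}
```

The mask form also sidesteps the negative remainders a plain `%` produces after the `AtomicInteger` counter overflows.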
netty/netty/2307_2310 | netty/netty | netty/netty/2307 | netty/netty/2310 | [
"timestamp(timedelta=166.0, similarity=0.8448387943046178)"
] | d4d2085377b4aa214619188335b657571c04b5f5 | 59b725b01c9145f07e78d7c280eac7ea381ca9c3 | [
"Let me check... maybe I can do something smart with an `AtomicReference`\n",
"@trustin I wonder why we even lazy start the Thread and add the task to the priority queue. Why not just do this in the constructor right away ?\n",
"A user could create relatively large executor group and then only register smaller ... | [
"`case ST_NOT_STARTED:` could also just fall-through.\n",
"Maybe like this?\n\n``` java\nfor (;;) {\n int oldState = state.get();\n if (oldState >= ST_SHUTTING_DOWN || state.compareAndSet(oldState, ST_SHUTTING_DOWN)) {\n break;\n }\n}\n```\n",
"Could we also use `PlatformDependent.newAtomicInteg... | 2014-03-13T07:06:26Z | [
"improvement"
] | Blocking in SingleThreadEventExecutor#startThread | Currently `io.netty.util.concurrent.SingleThreadEventExecutor#startThread` is called every time `SingleThreadEventExecutor#execute` is called.
This can cause unnecessary blocking because startThread contains a synchronized block that is entered on every call (the performance penalty is small anyway, about 0.2% for my application).
| [
"common/src/main/java/io/netty/util/concurrent/SingleThreadEventExecutor.java"
] | [
"common/src/main/java/io/netty/util/concurrent/SingleThreadEventExecutor.java"
] | [] | diff --git a/common/src/main/java/io/netty/util/concurrent/SingleThreadEventExecutor.java b/common/src/main/java/io/netty/util/concurrent/SingleThreadEventExecutor.java
index 37bf9808151..209e3beba03 100644
--- a/common/src/main/java/io/netty/util/concurrent/SingleThreadEventExecutor.java
+++ b/common/src/main/java/io/netty/util/concurrent/SingleThreadEventExecutor.java
@@ -33,6 +33,7 @@
import java.util.concurrent.Semaphore;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicInteger;
/**
* Abstract base class for {@link EventExecutor}'s that execute all its submitted tasks in a single thread.
@@ -61,13 +62,12 @@ public void run() {
final Queue<ScheduledFutureTask<?>> delayedTaskQueue = new PriorityQueue<ScheduledFutureTask<?>>();
private final Thread thread;
- private final Object stateLock = new Object();
private final Semaphore threadLock = new Semaphore(0);
private final Set<Runnable> shutdownHooks = new LinkedHashSet<Runnable>();
private final boolean addTaskWakesUp;
private long lastExecutionTime;
- private volatile int state = ST_NOT_STARTED;
+ private final AtomicInteger state = new AtomicInteger(ST_NOT_STARTED);
private volatile long gracefulShutdownQuietPeriod;
private volatile long gracefulShutdownTimeout;
private long gracefulShutdownStartTime;
@@ -103,8 +103,9 @@ public void run() {
} catch (Throwable t) {
logger.warn("Unexpected exception from an event executor: ", t);
} finally {
- if (state < ST_SHUTTING_DOWN) {
- state = ST_SHUTTING_DOWN;
+ // TODO: Maybe compareAndSet ?
+ if (state.get() < ST_SHUTTING_DOWN) {
+ state.set(ST_SHUTTING_DOWN);
}
// Check if confirmShutdown() was called at the end of the loop.
@@ -126,9 +127,7 @@ public void run() {
try {
cleanup();
} finally {
- synchronized (stateLock) {
- state = ST_TERMINATED;
- }
+ state.set(ST_TERMINATED);
threadLock.release();
if (!taskQueue.isEmpty()) {
logger.warn(
@@ -414,7 +413,7 @@ protected void cleanup() {
}
protected void wakeup(boolean inEventLoop) {
- if (!inEventLoop || state == ST_SHUTTING_DOWN) {
+ if (!inEventLoop || state.get() == ST_SHUTTING_DOWN) {
taskQueue.add(WAKEUP_TASK);
}
}
@@ -498,32 +497,39 @@ public Future<?> shutdownGracefully(long quietPeriod, long timeout, TimeUnit uni
}
boolean inEventLoop = inEventLoop();
- boolean wakeup = true;
-
- synchronized (stateLock) {
+ boolean wakeup;
+ int oldState;
+ for (;;) {
if (isShuttingDown()) {
return terminationFuture();
}
-
- gracefulShutdownQuietPeriod = unit.toNanos(quietPeriod);
- gracefulShutdownTimeout = unit.toNanos(timeout);
-
+ int newState;
+ wakeup = true;
+ oldState = state.get();
if (inEventLoop) {
- assert state == ST_STARTED;
- state = ST_SHUTTING_DOWN;
+ newState = ST_SHUTTING_DOWN;
} else {
- switch (state) {
+ switch (oldState) {
case ST_NOT_STARTED:
- state = ST_SHUTTING_DOWN;
- thread.start();
+ newState = ST_SHUTTING_DOWN;
break;
case ST_STARTED:
- state = ST_SHUTTING_DOWN;
+ newState = ST_SHUTTING_DOWN;
break;
default:
+ newState = oldState;
wakeup = false;
}
}
+ if (state.compareAndSet(oldState, newState)) {
+ break;
+ }
+ }
+ gracefulShutdownQuietPeriod = unit.toNanos(quietPeriod);
+ gracefulShutdownTimeout = unit.toNanos(timeout);
+
+ if (oldState == ST_NOT_STARTED) {
+ thread.start();
}
if (wakeup) {
@@ -546,30 +552,38 @@ public void shutdown() {
}
boolean inEventLoop = inEventLoop();
- boolean wakeup = true;
-
- synchronized (stateLock) {
- if (isShutdown()) {
+ boolean wakeup;
+ int oldState;
+ for (;;) {
+ if (isShuttingDown()) {
return;
}
-
+ int newState;
+ wakeup = true;
+ oldState = state.get();
if (inEventLoop) {
- assert state == ST_STARTED || state == ST_SHUTTING_DOWN;
- state = ST_SHUTDOWN;
+ newState = ST_SHUTDOWN;
} else {
- switch (state) {
- case ST_NOT_STARTED:
- state = ST_SHUTDOWN;
- thread.start();
- break;
- case ST_STARTED:
- case ST_SHUTTING_DOWN:
- state = ST_SHUTDOWN;
- break;
- default:
- wakeup = false;
+ switch (oldState) {
+ case ST_NOT_STARTED:
+ newState = ST_SHUTDOWN;
+ break;
+ case ST_STARTED:
+ case ST_SHUTTING_DOWN:
+ newState = ST_SHUTDOWN;
+ break;
+ default:
+ newState = oldState;
+ wakeup = false;
}
}
+ if (state.compareAndSet(oldState, newState)) {
+ break;
+ }
+ }
+
+ if (oldState == ST_NOT_STARTED) {
+ thread.start();
}
if (wakeup) {
@@ -579,17 +593,17 @@ public void shutdown() {
@Override
public boolean isShuttingDown() {
- return state >= ST_SHUTTING_DOWN;
+ return state.get() >= ST_SHUTTING_DOWN;
}
@Override
public boolean isShutdown() {
- return state >= ST_SHUTDOWN;
+ return state.get() >= ST_SHUTDOWN;
}
@Override
public boolean isTerminated() {
- return state == ST_TERMINATED;
+ return state.get() == ST_TERMINATED;
}
/**
@@ -808,9 +822,8 @@ public void run() {
}
private void startThread() {
- synchronized (stateLock) {
- if (state == ST_NOT_STARTED) {
- state = ST_STARTED;
+ if (state.get() == ST_NOT_STARTED) {
+ if (state.compareAndSet(ST_NOT_STARTED, ST_STARTED)) {
delayedTaskQueue.add(new ScheduledFutureTask<Void>(
this, delayedTaskQueue, Executors.<Void>callable(new PurgeTask(), null),
ScheduledFutureTask.deadlineNanos(SCHEDULE_PURGE_INTERVAL), -SCHEDULE_PURGE_INTERVAL));
| null | val | train | 2014-03-13T06:58:36 | 2014-03-12T19:01:13Z | valodzka | val |
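The patch above replaces `synchronized (stateLock)` with an `AtomicInteger` and compare-and-set retry loops. A minimal sketch (not Netty code) of that lock-free transition pattern, with state constants named after the ones in the diff:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasStateDemo {
    static final int ST_NOT_STARTED = 1;
    static final int ST_STARTED = 2;
    static final int ST_SHUTTING_DOWN = 3;

    final AtomicInteger state = new AtomicInteger(ST_NOT_STARTED);

    // Returns the state observed just before the transition succeeded, or -1 if
    // shutdown was already in progress. The caller inspects the old state to
    // decide on follow-up work (e.g. oldState == ST_NOT_STARTED => start the thread).
    int shutdownGracefully() {
        for (;;) {
            int oldState = state.get();
            if (oldState >= ST_SHUTTING_DOWN) {
                return -1; // another thread already initiated shutdown
            }
            if (state.compareAndSet(oldState, ST_SHUTTING_DOWN)) {
                return oldState; // exactly one caller wins the race
            }
            // CAS lost: the state changed concurrently, re-read and retry.
        }
    }

    public static void main(String[] args) {
        CasStateDemo demo = new CasStateDemo();
        System.out.println(demo.shutdownGracefully()); // 1 (was ST_NOT_STARTED)
        System.out.println(demo.shutdownGracefully()); // -1 (already shutting down)
    }
}
```

Uncontended callers now pay only an atomic read instead of acquiring a monitor, which is the blocking cost the issue describes.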
netty/netty/2311_2335 | netty/netty | netty/netty/2311 | netty/netty/2335 | [
"timestamp(timedelta=25226.0, similarity=0.9999999999999998)"
] | 32ccdcdb189996211bff873b8cf2cd42f66f5a5e | 7d09f077b394c7196676e7b5c9a1d4ab9d402b1f | [
"I think that it may be closed too\n"
] | [] | 2014-03-23T10:05:46Z | [
"feature"
] | Allow specifying `SelectorProvider` when constructing an NIO channel | When constructing an `NioEventLoop(Group)` a user can specify his/her preferred `SelectorProvider`, but we do not allow doing that when constructing an NIO channel. For example, you cannot specify your preferred `SelectorProvider` when constructing an `NioSocketChannel`, which makes using an alternative `SelectorProvider` practically useless.
| [
"transport/src/main/java/io/netty/channel/socket/nio/NioDatagramChannel.java",
"transport/src/main/java/io/netty/channel/socket/nio/NioServerSocketChannel.java",
"transport/src/main/java/io/netty/channel/socket/nio/NioSocketChannel.java"
] | [
"transport/src/main/java/io/netty/channel/socket/nio/NioDatagramChannel.java",
"transport/src/main/java/io/netty/channel/socket/nio/NioServerSocketChannel.java",
"transport/src/main/java/io/netty/channel/socket/nio/NioSocketChannel.java"
] | [] | diff --git a/transport/src/main/java/io/netty/channel/socket/nio/NioDatagramChannel.java b/transport/src/main/java/io/netty/channel/socket/nio/NioDatagramChannel.java
index 3feef9283e1..aa71a8b7467 100644
--- a/transport/src/main/java/io/netty/channel/socket/nio/NioDatagramChannel.java
+++ b/transport/src/main/java/io/netty/channel/socket/nio/NioDatagramChannel.java
@@ -61,7 +61,7 @@ public final class NioDatagramChannel
extends AbstractNioMessageChannel implements io.netty.channel.socket.DatagramChannel {
private static final ChannelMetadata METADATA = new ChannelMetadata(true);
- private static final SelectorProvider SELECTOR_PROVIDER = SelectorProvider.provider();
+ private static final SelectorProvider DEFAULT_SELECTOR_PROVIDER = SelectorProvider.provider();
private final DatagramChannelConfig config;
private final Map<InetAddress, List<MembershipKey>> memberships =
@@ -69,7 +69,7 @@ public final class NioDatagramChannel
private RecvByteBufAllocator.Handle allocHandle;
- private static DatagramChannel newSocket() {
+ private static DatagramChannel newSocket(SelectorProvider provider) {
try {
/**
* Use the {@link SelectorProvider} to open {@link SocketChannel} and so remove condition in
@@ -77,21 +77,21 @@ private static DatagramChannel newSocket() {
*
* See <a href="See https://github.com/netty/netty/issues/2308">#2308</a>.
*/
- return SELECTOR_PROVIDER.openDatagramChannel();
+ return provider.openDatagramChannel();
} catch (IOException e) {
throw new ChannelException("Failed to open a socket.", e);
}
}
- private static DatagramChannel newSocket(InternetProtocolFamily ipFamily) {
+ private static DatagramChannel newSocket(SelectorProvider provider, InternetProtocolFamily ipFamily) {
if (ipFamily == null) {
- return newSocket();
+ return newSocket(provider);
}
checkJavaVersion();
try {
- return SELECTOR_PROVIDER.openDatagramChannel(ProtocolFamilyConverter.convert(ipFamily));
+ return provider.openDatagramChannel(ProtocolFamilyConverter.convert(ipFamily));
} catch (IOException e) {
throw new ChannelException("Failed to open a socket.", e);
}
@@ -107,7 +107,15 @@ private static void checkJavaVersion() {
* Create a new instance which will use the Operation Systems default {@link InternetProtocolFamily}.
*/
public NioDatagramChannel() {
- this(newSocket());
+ this(newSocket(DEFAULT_SELECTOR_PROVIDER));
+ }
+
+ /**
+ * Create a new instance using the given {@link SelectorProvider}
+ * which will use the Operation Systems default {@link InternetProtocolFamily}.
+ */
+ public NioDatagramChannel(SelectorProvider provider) {
+ this(newSocket(provider));
}
/**
@@ -115,7 +123,16 @@ public NioDatagramChannel() {
* on the Operation Systems default which will be chosen.
*/
public NioDatagramChannel(InternetProtocolFamily ipFamily) {
- this(newSocket(ipFamily));
+ this(newSocket(DEFAULT_SELECTOR_PROVIDER, ipFamily));
+ }
+
+ /**
+ * Create a new instance using the given {@link SelectorProvider} and {@link InternetProtocolFamily}.
+ * If {@link InternetProtocolFamily} is {@code null} it will depend on the Operation Systems default
+ * which will be chosen.
+ */
+ public NioDatagramChannel(SelectorProvider provider, InternetProtocolFamily ipFamily) {
+ this(newSocket(provider, ipFamily));
}
/**
diff --git a/transport/src/main/java/io/netty/channel/socket/nio/NioServerSocketChannel.java b/transport/src/main/java/io/netty/channel/socket/nio/NioServerSocketChannel.java
index 32382018715..d6751727c94 100644
--- a/transport/src/main/java/io/netty/channel/socket/nio/NioServerSocketChannel.java
+++ b/transport/src/main/java/io/netty/channel/socket/nio/NioServerSocketChannel.java
@@ -41,11 +41,11 @@ public class NioServerSocketChannel extends AbstractNioMessageChannel
implements io.netty.channel.socket.ServerSocketChannel {
private static final ChannelMetadata METADATA = new ChannelMetadata(false);
- private static final SelectorProvider SELECTOR_PROVIDER = SelectorProvider.provider();
+ private static final SelectorProvider DEFAULT_SELECTOR_PROVIDER = SelectorProvider.provider();
private static final InternalLogger logger = InternalLoggerFactory.getInstance(NioServerSocketChannel.class);
- private static ServerSocketChannel newSocket() {
+ private static ServerSocketChannel newSocket(SelectorProvider provider) {
try {
/**
* Use the {@link SelectorProvider} to open {@link SocketChannel} and so remove condition in
@@ -53,7 +53,7 @@ private static ServerSocketChannel newSocket() {
*
* See <a href="See https://github.com/netty/netty/issues/2308">#2308</a>.
*/
- return SELECTOR_PROVIDER.openServerSocketChannel();
+ return provider.openServerSocketChannel();
} catch (IOException e) {
throw new ChannelException(
"Failed to open a server socket.", e);
@@ -66,7 +66,14 @@ private static ServerSocketChannel newSocket() {
* Create a new instance
*/
public NioServerSocketChannel() {
- this(newSocket());
+ this(newSocket(DEFAULT_SELECTOR_PROVIDER));
+ }
+
+ /**
+ * Create a new instance using the given {@link SelectorProvider}.
+ */
+ public NioServerSocketChannel(SelectorProvider provider) {
+ this(newSocket(provider));
}
/**
diff --git a/transport/src/main/java/io/netty/channel/socket/nio/NioSocketChannel.java b/transport/src/main/java/io/netty/channel/socket/nio/NioSocketChannel.java
index d0e046c10a4..ea45f24d09d 100644
--- a/transport/src/main/java/io/netty/channel/socket/nio/NioSocketChannel.java
+++ b/transport/src/main/java/io/netty/channel/socket/nio/NioSocketChannel.java
@@ -44,9 +44,9 @@
public class NioSocketChannel extends AbstractNioByteChannel implements io.netty.channel.socket.SocketChannel {
private static final ChannelMetadata METADATA = new ChannelMetadata(false);
- private static final SelectorProvider SELECTOR_PROVIDER = SelectorProvider.provider();
+ private static final SelectorProvider DEFAULT_SELECTOR_PROVIDER = SelectorProvider.provider();
- private static SocketChannel newSocket() {
+ private static SocketChannel newSocket(SelectorProvider provider) {
try {
/**
* Use the {@link SelectorProvider} to open {@link SocketChannel} and so remove condition in
@@ -54,7 +54,7 @@ private static SocketChannel newSocket() {
*
* See <a href="See https://github.com/netty/netty/issues/2308">#2308</a>.
*/
- return SELECTOR_PROVIDER.openSocketChannel();
+ return provider.openSocketChannel();
} catch (IOException e) {
throw new ChannelException("Failed to open a socket.", e);
}
@@ -66,7 +66,14 @@ private static SocketChannel newSocket() {
* Create a new instance
*/
public NioSocketChannel() {
- this(newSocket());
+ this(newSocket(DEFAULT_SELECTOR_PROVIDER));
+ }
+
+ /**
+ * Create a new instance using the given {@link SelectorProvider}.
+ */
+ public NioSocketChannel(SelectorProvider provider) {
+ this(newSocket(provider));
}
/**
| null | val | train | 2014-03-22T22:02:19 | 2014-03-13T08:04:18Z | trustin | val |
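The patch above threads a `SelectorProvider` parameter through the channel factory methods instead of capturing the platform default once. A standalone JDK-only sketch of the same pattern (the `UncheckedIOException` wrapper here is an assumption standing in for Netty's `ChannelException`):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.spi.SelectorProvider;

public class ProviderDemo {
    // Mirrors the patched newSocket(SelectorProvider): the channel is opened via
    // the caller-supplied provider rather than a hard-coded SelectorProvider.provider().
    static ServerSocketChannel newSocket(SelectorProvider provider) {
        try {
            return provider.openServerSocketChannel();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) throws IOException {
        // Passing the platform default reproduces the old behaviour; an
        // alternative SelectorProvider implementation could be passed instead.
        ServerSocketChannel ch = newSocket(SelectorProvider.provider());
        System.out.println(ch.isOpen());
        ch.close();
    }
}
```

This is what makes an alternative `SelectorProvider` usable end to end: the same instance can back both the event loop's `Selector` and the channels registered on it.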
netty/netty/2339_2340 | netty/netty | netty/netty/2339 | netty/netty/2340 | [
"timestamp(timedelta=27.0, similarity=0.9606678489503532)"
] | 76d091a8846bb5d9480c7195a3178fccbc697119 | 26d94b32712a7940a5c7b3840e71ba5fcd974cb8 | [
"We love pull-requests ;)\n\n> Am 24.03.2014 um 13:26 schrieb Alexey notifications@github.com:\n> \n> Hi\n> \n> https://github.com/netty/netty/blob/4.0/codec/src/main/java/io/netty/handler/codec/protobuf/ProtobufVarint32LengthFieldPrepender.java\n> \n> Now we calculate headerLen for start, and some later create Cod... | [] | 2014-03-24T15:39:41Z | [
"improvement"
] | reduce memory usage in ProtobufVarint32LengthFieldPrepender.java | Hi
https://github.com/netty/netty/blob/4.0/codec/src/main/java/io/netty/handler/codec/protobuf/ProtobufVarint32LengthFieldPrepender.java
We calculate headerLen at the start, and somewhat later create the CodedOutputStream with this code:
```
CodedOutputStream headerOut =
CodedOutputStream.newInstance(new ByteBufOutputStream(out));
```
The default factory method allocates a 4096-byte buffer, but headerLen is never more than 5.
It would be more correct to set the buffer size when creating the instance:
```
CodedOutputStream headerOut =
CodedOutputStream.newInstance(new ByteBufOutputStream(out), headerLen);
```
This decreases the allocation in the header prepender from 4096 bytes to 1-5 bytes for every protobuf message.
| [
"codec/src/main/java/io/netty/handler/codec/protobuf/ProtobufVarint32LengthFieldPrepender.java"
] | [
"codec/src/main/java/io/netty/handler/codec/protobuf/ProtobufVarint32LengthFieldPrepender.java"
] | [] | diff --git a/codec/src/main/java/io/netty/handler/codec/protobuf/ProtobufVarint32LengthFieldPrepender.java b/codec/src/main/java/io/netty/handler/codec/protobuf/ProtobufVarint32LengthFieldPrepender.java
index b9231781d80..9efd00ff269 100644
--- a/codec/src/main/java/io/netty/handler/codec/protobuf/ProtobufVarint32LengthFieldPrepender.java
+++ b/codec/src/main/java/io/netty/handler/codec/protobuf/ProtobufVarint32LengthFieldPrepender.java
@@ -47,7 +47,7 @@ protected void encode(
out.ensureWritable(headerLen + bodyLen);
CodedOutputStream headerOut =
- CodedOutputStream.newInstance(new ByteBufOutputStream(out));
+ CodedOutputStream.newInstance(new ByteBufOutputStream(out), headerLen);
headerOut.writeRawVarint32(bodyLen);
headerOut.flush();
| null | val | train | 2014-03-22T15:01:49 | 2014-03-24T12:26:00Z | xhumanoid | val |
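The saving above rests on the fact that a varint32-encoded length never exceeds 5 bytes. A dependency-free sketch of that size computation (re-implemented here so the example needs no protobuf jar; protobuf's `CodedOutputStream` performs the same calculation to produce `headerLen`):

```java
public class VarintDemo {
    // A varint32 stores 7 payload bits per byte, so an int needs at most
    // ceil(32 / 7) = 5 bytes -- far smaller than a 4096-byte stream buffer.
    static int computeRawVarint32Size(int value) {
        if ((value & (0xffffffff <<  7)) == 0) return 1;
        if ((value & (0xffffffff << 14)) == 0) return 2;
        if ((value & (0xffffffff << 21)) == 0) return 3;
        if ((value & (0xffffffff << 28)) == 0) return 4;
        return 5;
    }

    public static void main(String[] args) {
        System.out.println(computeRawVarint32Size(127)); // 1
        System.out.println(computeRawVarint32Size(300)); // 2
        System.out.println(computeRawVarint32Size(-1));  // 5 (negative ints use all 5 bytes)
    }
}
```

Since `headerLen` is bounded by 5, passing it as the stream's buffer size trims the per-message allocation exactly as the issue suggests.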
netty/netty/2346_2347 | netty/netty | netty/netty/2346 | netty/netty/2347 | [
"timestamp(timedelta=54.0, similarity=0.8885609524883314)"
] | 5bec0c352a35acc320563a14774ce9da6d78d25a | 15d9a461dc16f35dc173762a231088997aee5360 | [] | [
"Maybe we should not throw an exception but delegate to not break existing apps?\n",
"Good point, let me fix that. Thanks\n\nOn Sunday, 30 March 2014, Norman Maurer notifications@github.com wrote:\n\n> In\n> codec-http/src/main/java/io/netty/handler/codec/http/cors/CorsConfig.java:\n> \n> > @@ -228,15 +250,28 \... | 2014-03-30T07:50:18Z | [] | CORS should support a whitelist of origins | Currently the CORS support only handles a single origin or a wildcard origin. This task should enhance Netty's CORS support to allow multiple origins to be specified.
| [
"codec-http/src/main/java/io/netty/handler/codec/http/cors/CorsConfig.java",
"codec-http/src/main/java/io/netty/handler/codec/http/cors/CorsHandler.java",
"example/src/main/java/io/netty/example/http/cors/HttpServerInitializer.java"
] | [
"codec-http/src/main/java/io/netty/handler/codec/http/cors/CorsConfig.java",
"codec-http/src/main/java/io/netty/handler/codec/http/cors/CorsHandler.java",
"example/src/main/java/io/netty/example/http/cors/HttpServerInitializer.java"
] | [
"codec-http/src/test/java/io/netty/handler/codec/http/cors/CorsConfigTest.java",
"codec-http/src/test/java/io/netty/handler/codec/http/cors/CorsHandlerTest.java"
] | diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/cors/CorsConfig.java b/codec-http/src/main/java/io/netty/handler/codec/http/cors/CorsConfig.java
index 5cd70046262..b7b2b0dcc0f 100644
--- a/codec-http/src/main/java/io/netty/handler/codec/http/cors/CorsConfig.java
+++ b/codec-http/src/main/java/io/netty/handler/codec/http/cors/CorsConfig.java
@@ -26,6 +26,7 @@
import java.util.Date;
import java.util.HashMap;
import java.util.HashSet;
+import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Map.Entry;
import java.util.Set;
@@ -36,7 +37,8 @@
*/
public final class CorsConfig {
- private final String origin;
+ private final Set<String> origins;
+ private final boolean anyOrigin;
private final boolean enabled;
private final Set<String> exposeHeaders;
private final boolean allowCredentials;
@@ -47,7 +49,8 @@ public final class CorsConfig {
private final Map<CharSequence, Callable<?>> preflightHeaders;
private CorsConfig(final Builder builder) {
- origin = builder.origin;
+ origins = new LinkedHashSet<String>(builder.origins);
+ anyOrigin = builder.anyOrigin;
enabled = builder.enabled;
exposeHeaders = builder.exposeHeaders;
allowCredentials = builder.allowCredentials;
@@ -67,13 +70,31 @@ public boolean isCorsSupportEnabled() {
return enabled;
}
+ /**
+ * Determines whether a wildcard origin, '*', is supported.
+ *
+ * @return {@code boolean} true if any origin is allowed.
+ */
+ public boolean isAnyOriginSupported() {
+ return anyOrigin;
+ }
+
/**
* Returns the allowed origin. This can either be a wildcard or an origin value.
*
* @return the value that will be used for the CORS response header 'Access-Control-Allow-Origin'
*/
public String origin() {
- return origin;
+ return origins.isEmpty() ? "*" : origins.iterator().next();
+ }
+
+ /**
+ * Returns the set of allowed origins.
+ *
+ * @return {@code Set} the allowed origins.
+ */
+ public Set<String> origins() {
+ return origins;
}
/**
@@ -204,7 +225,8 @@ private static <T> T getValue(final Callable<T> callable) {
@Override
public String toString() {
return StringUtil.simpleClassName(this) + "[enabled=" + enabled +
- ", origin=" + origin +
+ ", origins=" + origins +
+ ", anyOrigin=" + anyOrigin +
", exposedHeaders=" + exposeHeaders +
", isCredentialsAllowed=" + allowCredentials +
", maxAge=" + maxAge +
@@ -218,8 +240,8 @@ public String toString() {
*
* @return Builder to support method chaining.
*/
- public static Builder anyOrigin() {
- return new Builder("*");
+ public static Builder withAnyOrigin() {
+ return new Builder();
}
/**
@@ -228,15 +250,28 @@ public static Builder anyOrigin() {
* @return {@link Builder} to support method chaining.
*/
public static Builder withOrigin(final String origin) {
+ if (origin.equals("*")) {
+ return new Builder();
+ }
return new Builder(origin);
}
+ /**
+ * Creates a {@link Builder} instance with the specified origins.
+ *
+ * @return {@link Builder} to support method chaining.
+ */
+ public static Builder withOrigins(final String... origins) {
+ return new Builder(origins);
+ }
+
/**
* Builder used to configure and build a CorsConfig instance.
*/
public static class Builder {
- private final String origin;
+ private final Set<String> origins;
+ private final boolean anyOrigin;
private boolean allowNullOrigin;
private boolean enabled = true;
private boolean allowCredentials;
@@ -250,10 +285,21 @@ public static class Builder {
/**
* Creates a new Builder instance with the origin passed in.
*
- * @param origin the origin to be used for this builder.
+ * @param origins the origin to be used for this builder.
+ */
+ public Builder(final String... origins) {
+ this.origins = new LinkedHashSet<String>(Arrays.asList(origins));
+ anyOrigin = false;
+ }
+
+ /**
+ * Creates a new Builder instance allowing any origin, "*" which is the
+ * wildcard origin.
+ *
*/
- public Builder(final String origin) {
- this.origin = origin;
+ public Builder() {
+ anyOrigin = true;
+ origins = Collections.emptySet();
}
/**
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/cors/CorsHandler.java b/codec-http/src/main/java/io/netty/handler/codec/http/cors/CorsHandler.java
index d684b3d043c..727637a37af 100644
--- a/codec-http/src/main/java/io/netty/handler/codec/http/cors/CorsHandler.java
+++ b/codec-http/src/main/java/io/netty/handler/codec/http/cors/CorsHandler.java
@@ -85,15 +85,26 @@ private boolean setOrigin(final HttpResponse response) {
final String origin = request.headers().get(ORIGIN);
if (origin != null) {
if ("null".equals(origin) && config.isNullOriginAllowed()) {
- response.headers().set(ACCESS_CONTROL_ALLOW_ORIGIN, "*");
- } else {
- response.headers().set(ACCESS_CONTROL_ALLOW_ORIGIN, config.origin());
+ setAnyOrigin(response);
+ return true;
}
- return true;
+ if (config.isAnyOriginSupported()) {
+ setAnyOrigin(response);
+ return true;
+ }
+ if (config.origins().contains(origin)) {
+ response.headers().set(ACCESS_CONTROL_ALLOW_ORIGIN, origin);
+ return true;
+ }
+ logger.debug("Request origin [" + origin + "] was not among the configured origins " + config.origins());
}
return false;
}
+ private static void setAnyOrigin(final HttpResponse response) {
+ response.headers().set(ACCESS_CONTROL_ALLOW_ORIGIN, "*");
+ }
+
private void setAllowCredentials(final HttpResponse response) {
if (config.isCredentialsAllowed()) {
response.headers().set(ACCESS_CONTROL_ALLOW_CREDENTIALS, "true");
diff --git a/example/src/main/java/io/netty/example/http/cors/HttpServerInitializer.java b/example/src/main/java/io/netty/example/http/cors/HttpServerInitializer.java
index 2297299b1d8..4e53a30917f 100644
--- a/example/src/main/java/io/netty/example/http/cors/HttpServerInitializer.java
+++ b/example/src/main/java/io/netty/example/http/cors/HttpServerInitializer.java
@@ -74,7 +74,7 @@ public class HttpServerInitializer extends ChannelInitializer<SocketChannel> {
public void initChannel(SocketChannel ch) throws Exception {
ChannelPipeline pipeline = ch.pipeline();
- CorsConfig corsConfig = CorsConfig.anyOrigin().build();
+ CorsConfig corsConfig = CorsConfig.withAnyOrigin().build();
pipeline.addLast("encoder", new HttpResponseEncoder());
pipeline.addLast("decoder", new HttpRequestDecoder());
pipeline.addLast("aggregator", new HttpObjectAggregator(65536));
| diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/cors/CorsConfigTest.java b/codec-http/src/test/java/io/netty/handler/codec/http/cors/CorsConfigTest.java
index e0dfed89622..3f091825abb 100644
--- a/codec-http/src/test/java/io/netty/handler/codec/http/cors/CorsConfigTest.java
+++ b/codec-http/src/test/java/io/netty/handler/codec/http/cors/CorsConfigTest.java
@@ -28,74 +28,93 @@ public class CorsConfigTest {
@Test
public void disabled() {
- final CorsConfig cors = withOrigin("*").disable().build();
+ final CorsConfig cors = withAnyOrigin().disable().build();
assertThat(cors.isCorsSupportEnabled(), is(false));
}
+ @Test
+ public void anyOrigin() {
+ final CorsConfig cors = withAnyOrigin().build();
+ assertThat(cors.isAnyOriginSupported(), is(true));
+ assertThat(cors.origin(), is("*"));
+ assertThat(cors.origins().isEmpty(), is(true));
+ }
+
@Test
public void wildcardOrigin() {
- final CorsConfig cors = anyOrigin().build();
- assertThat(cors.origin(), is(equalTo("*")));
+ final CorsConfig cors = withOrigin("*").build();
+ assertThat(cors.isAnyOriginSupported(), is(true));
+ assertThat(cors.origin(), equalTo("*"));
+ assertThat(cors.origins().isEmpty(), is(true));
}
@Test
public void origin() {
final CorsConfig cors = withOrigin("http://localhost:7888").build();
assertThat(cors.origin(), is(equalTo("http://localhost:7888")));
+ assertThat(cors.isAnyOriginSupported(), is(false));
+ }
+
+ @Test
+ public void origins() {
+ final String[] origins = {"http://localhost:7888", "https://localhost:7888"};
+ final CorsConfig cors = withOrigins(origins).build();
+ assertThat(cors.origins(), hasItems(origins));
+ assertThat(cors.isAnyOriginSupported(), is(false));
}
@Test
public void exposeHeaders() {
- final CorsConfig cors = withOrigin("*").exposeHeaders("custom-header1", "custom-header2").build();
+ final CorsConfig cors = withAnyOrigin().exposeHeaders("custom-header1", "custom-header2").build();
assertThat(cors.exposedHeaders(), hasItems("custom-header1", "custom-header2"));
}
@Test
public void allowCredentials() {
- final CorsConfig cors = withOrigin("*").allowCredentials().build();
+ final CorsConfig cors = withAnyOrigin().allowCredentials().build();
assertThat(cors.isCredentialsAllowed(), is(true));
}
@Test
public void maxAge() {
- final CorsConfig cors = withOrigin("*").maxAge(3000).build();
+ final CorsConfig cors = withAnyOrigin().maxAge(3000).build();
assertThat(cors.maxAge(), is(3000L));
}
@Test
public void requestMethods() {
- final CorsConfig cors = withOrigin("*").allowedRequestMethods(HttpMethod.POST, HttpMethod.GET).build();
+ final CorsConfig cors = withAnyOrigin().allowedRequestMethods(HttpMethod.POST, HttpMethod.GET).build();
assertThat(cors.allowedRequestMethods(), hasItems(HttpMethod.POST, HttpMethod.GET));
}
@Test
public void requestHeaders() {
- final CorsConfig cors = withOrigin("*").allowedRequestHeaders("preflight-header1", "preflight-header2").build();
+ final CorsConfig cors = withAnyOrigin().allowedRequestHeaders("preflight-header1", "preflight-header2").build();
assertThat(cors.allowedRequestHeaders(), hasItems("preflight-header1", "preflight-header2"));
}
@Test
public void preflightResponseHeadersSingleValue() {
- final CorsConfig cors = withOrigin("*").preflightResponseHeader("SingleValue", "value").build();
+ final CorsConfig cors = withAnyOrigin().preflightResponseHeader("SingleValue", "value").build();
assertThat(cors.preflightResponseHeaders().get("SingleValue"), equalTo("value"));
}
@Test
public void preflightResponseHeadersMultipleValues() {
- final CorsConfig cors = withOrigin("*").preflightResponseHeader("MultipleValues", "value1", "value2").build();
+ final CorsConfig cors = withAnyOrigin().preflightResponseHeader("MultipleValues", "value1", "value2").build();
assertThat(cors.preflightResponseHeaders().getAll("MultipleValues"), hasItems("value1", "value2"));
}
@Test
public void defaultPreflightResponseHeaders() {
- final CorsConfig cors = withOrigin("*").build();
+ final CorsConfig cors = withAnyOrigin().build();
assertThat(cors.preflightResponseHeaders().get(Names.DATE), is(notNullValue()));
assertThat(cors.preflightResponseHeaders().get(Names.CONTENT_LENGTH), is("0"));
}
@Test
public void emptyPreflightResponseHeaders() {
- final CorsConfig cors = withOrigin("*").noPreflightResponseHeaders().build();
+ final CorsConfig cors = withAnyOrigin().noPreflightResponseHeaders().build();
assertThat(cors.preflightResponseHeaders(), equalTo(HttpHeaders.EMPTY_HEADERS));
}
diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/cors/CorsHandlerTest.java b/codec-http/src/test/java/io/netty/handler/codec/http/cors/CorsHandlerTest.java
index bda47734418..9e35c11e5bc 100644
--- a/codec-http/src/test/java/io/netty/handler/codec/http/cors/CorsHandlerTest.java
+++ b/codec-http/src/test/java/io/netty/handler/codec/http/cors/CorsHandlerTest.java
@@ -39,13 +39,13 @@ public class CorsHandlerTest {
@Test
public void nonCorsRequest() {
- final HttpResponse response = simpleRequest(CorsConfig.anyOrigin().build(), null);
+ final HttpResponse response = simpleRequest(CorsConfig.withAnyOrigin().build(), null);
assertThat(response.headers().contains(ACCESS_CONTROL_ALLOW_ORIGIN), is(false));
}
@Test
public void simpleRequestWithAnyOrigin() {
- final HttpResponse response = simpleRequest(CorsConfig.anyOrigin().build(), "http://localhost:7777");
+ final HttpResponse response = simpleRequest(CorsConfig.withAnyOrigin().build(), "http://localhost:7777");
assertThat(response.headers().get(ACCESS_CONTROL_ALLOW_ORIGIN), is("*"));
}
@@ -56,6 +56,24 @@ public void simpleRequestWithOrigin() {
assertThat(response.headers().get(ACCESS_CONTROL_ALLOW_ORIGIN), is(origin));
}
+ @Test
+ public void simpleRequestWithOrigins() {
+ final String origin1 = "http://localhost:8888";
+ final String origin2 = "https://localhost:8888";
+ final String[] origins = {origin1, origin2};
+ final HttpResponse response1 = simpleRequest(CorsConfig.withOrigins(origins).build(), origin1);
+ assertThat(response1.headers().get(ACCESS_CONTROL_ALLOW_ORIGIN), is(origin1));
+ final HttpResponse response2 = simpleRequest(CorsConfig.withOrigins(origins).build(), origin2);
+ assertThat(response2.headers().get(ACCESS_CONTROL_ALLOW_ORIGIN), is(origin2));
+ }
+
+ @Test
+ public void simpleRequestWithNoMatchingOrigin() {
+ final String origin = "http://localhost:8888";
+ final HttpResponse response = simpleRequest(CorsConfig.withOrigins("https://localhost:8888").build(), origin);
+ assertThat(response.headers().get(ACCESS_CONTROL_ALLOW_ORIGIN), is(nullValue()));
+ }
+
@Test
public void preflightDeleteRequestWithCustomHeaders() {
final CorsConfig config = CorsConfig.withOrigin("http://localhost:8888")
@@ -152,7 +170,7 @@ public void preflightRequestDoNotAllowCredentials() {
@Test
public void simpleRequestCustomHeaders() {
- final CorsConfig config = CorsConfig.anyOrigin().exposeHeaders("custom1", "custom2").build();
+ final CorsConfig config = CorsConfig.withAnyOrigin().exposeHeaders("custom1", "custom2").build();
final HttpResponse response = simpleRequest(config, "http://localhost:7777", "");
assertThat(response.headers().get(ACCESS_CONTROL_ALLOW_ORIGIN), equalTo("*"));
assertThat(response.headers().getAll(ACCESS_CONTROL_EXPOSE_HEADERS), hasItems("custom1", "custom1"));
@@ -160,21 +178,21 @@ public void simpleRequestCustomHeaders() {
@Test
public void simpleRequestAllowCredentials() {
- final CorsConfig config = CorsConfig.anyOrigin().allowCredentials().build();
+ final CorsConfig config = CorsConfig.withAnyOrigin().allowCredentials().build();
final HttpResponse response = simpleRequest(config, "http://localhost:7777", "");
assertThat(response.headers().get(ACCESS_CONTROL_ALLOW_CREDENTIALS), equalTo("true"));
}
@Test
public void simpleRequestDoNotAllowCredentials() {
- final CorsConfig config = CorsConfig.anyOrigin().build();
+ final CorsConfig config = CorsConfig.withAnyOrigin().build();
final HttpResponse response = simpleRequest(config, "http://localhost:7777", "");
assertThat(response.headers().contains(ACCESS_CONTROL_ALLOW_CREDENTIALS), is(false));
}
@Test
public void simpleRequestExposeHeaders() {
- final CorsConfig config = CorsConfig.anyOrigin().exposeHeaders("one", "two").build();
+ final CorsConfig config = CorsConfig.withAnyOrigin().exposeHeaders("one", "two").build();
final HttpResponse response = simpleRequest(config, "http://localhost:7777", "");
assertThat(response.headers().getAll(ACCESS_CONTROL_EXPOSE_HEADERS), hasItems("one", "two"));
}
| test | train | 2014-03-29T20:19:06 | 2014-03-30T05:56:33Z | danbev | val |
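The core of the change above — answering "which Access-Control-Allow-Origin value, if any, should this request origin receive?" against a configured whitelist — can be sketched outside Netty like this (class and method names are illustrative, not Netty's API; the real handler additionally deals with the "null" origin and response headers):

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Set;

// Minimal sketch of the origin-whitelist decision the patched CorsHandler makes.
public final class OriginCheck {
    private final boolean anyOrigin;
    private final Set<String> origins;

    OriginCheck(String... allowed) {
        // A single "*" means any origin, mirroring withOrigin("*") in the patch.
        anyOrigin = allowed.length == 1 && "*".equals(allowed[0]);
        origins = new LinkedHashSet<>(Arrays.asList(allowed));
    }

    // Returns the value for Access-Control-Allow-Origin, or null to reject.
    String allowOriginHeader(String requestOrigin) {
        if (requestOrigin == null) return null;   // not a CORS request
        if (anyOrigin) return "*";                 // wildcard configuration
        return origins.contains(requestOrigin) ? requestOrigin : null;
    }

    public static void main(String[] args) {
        OriginCheck c = new OriginCheck("http://localhost:7888", "https://localhost:7888");
        if (!"http://localhost:7888".equals(c.allowOriginHeader("http://localhost:7888")))
            throw new AssertionError();
        if (c.allowOriginHeader("http://evil.example") != null) throw new AssertionError();
        if (!"*".equals(new OriginCheck("*").allowOriginHeader("http://any.example")))
            throw new AssertionError();
        System.out.println("ok");
    }
}
```

Note that a matching whitelisted origin is echoed back verbatim rather than replaced with "*", exactly as the `simpleRequestWithOrigins` test expects.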
netty/netty/2377_2391 | netty/netty | netty/netty/2377 | netty/netty/2391 | [
"timestamp(timedelta=96.0, similarity=0.882593498387067)"
] | 15d11289b0406ae2d12b5e7337701cc676791a59 | 54aead2c158ec7be704b16baaf7f86368a5d4b40 | [
"Assigned to me... all the JNI fun for me :)\n"
] | [
"Use `{@code ...}` to signify parameter names, everywhere possible.\n",
"@trustin I think I could even remove the javadocs as these are on the interfaces anyway ?\n",
"`s/ / /` - Please fix anything like this everywhere.\n"
] | 2014-04-15T07:28:50Z | [
"feature"
] | Implement epoll datagram channels | In combination with #2376, it will give us a huge performance gain.
| [
"transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c",
"transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.h",
"transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollChannel.java",
"transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollServerSocketChann... | [
"transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c",
"transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.h",
"transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollChannel.java",
"transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollDatagramChannel.j... | [
"transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollDatagramUnicastTest.java",
"transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollSocketTestPermutation.java"
] | diff --git a/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c b/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c
index 6d7c407f74e..baedc825fac 100644
--- a/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c
+++ b/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c
@@ -48,11 +48,14 @@ jfieldID readerIndexFieldId = NULL;
jfieldID writerIndexFieldId = NULL;
jfieldID memoryAddressFieldId = NULL;
jmethodID inetSocketAddrMethodId = NULL;
+jmethodID datagramSocketAddrMethodId = NULL;
jclass runtimeExceptionClass = NULL;
jclass ioExceptionClass = NULL;
jclass closedChannelExceptionClass = NULL;
jmethodID closedChannelExceptionMethodId = NULL;
jclass inetSocketAddressClass = NULL;
+jclass datagramSocketAddressClass = NULL;
+
static int socketType;
// util methods
@@ -141,6 +144,23 @@ jobject createInetSocketAddress(JNIEnv * env, struct sockaddr_storage addr) {
return socketAddr;
}
+jobject createDatagramSocketAddress(JNIEnv * env, struct sockaddr_storage addr, int len) {
+ char ipstr[INET6_ADDRSTRLEN];
+ int port;
+ if (addr.ss_family == AF_INET) {
+ struct sockaddr_in *s = (struct sockaddr_in *)&addr;
+ port = ntohs(s->sin_port);
+ inet_ntop(AF_INET, &s->sin_addr, ipstr, sizeof ipstr);
+ } else {
+ struct sockaddr_in6 *s = (struct sockaddr_in6 *)&addr;
+ port = ntohs(s->sin6_port);
+ inet_ntop(AF_INET6, &s->sin6_addr, ipstr, sizeof ipstr);
+ }
+ jstring ipString = (*env)->NewStringUTF(env, ipstr);
+ jobject socketAddr = (*env)->NewObject(env, datagramSocketAddressClass, datagramSocketAddrMethodId, ipString, port, len);
+ return socketAddr;
+}
+
void init_sockaddr(JNIEnv * env, jbyteArray address, jint scopeId, jint jport, struct sockaddr_storage * addr) {
uint16_t port = htons((uint16_t) jport);
jbyte* addressBytes = (*env)->GetByteArrayElements(env, address, 0);
@@ -175,6 +195,16 @@ static int socket_type() {
return AF_INET6;
}
}
+
+void init_in_addr(JNIEnv * env, jbyteArray address, struct in_addr * addr) {
+ jbyte* addressBytes = (*env)->GetByteArrayElements(env, address, 0);
+ if (socketType == AF_INET6) {
+ memcpy(addr, addressBytes, 16);
+ } else {
+ memcpy(addr, addressBytes + 12, 4);
+ }
+ (*env)->ReleaseByteArrayElements(env, address, addressBytes, JNI_ABORT);
+}
// util methods end
jint JNI_OnLoad(JavaVM* vm, void* reserved) {
@@ -235,6 +265,18 @@ jint JNI_OnLoad(JavaVM* vm, void* reserved) {
return JNI_ERR;
}
+ jclass localDatagramSocketAddressClass = (*env)->FindClass(env, "io/netty/channel/epoll/EpollDatagramChannel$DatagramSocketAddress");
+ if (localDatagramSocketAddressClass == NULL) {
+ // pending exception...
+ return JNI_ERR;
+ }
+ datagramSocketAddressClass = (jclass) (*env)->NewGlobalRef(env, localDatagramSocketAddressClass);
+ if (datagramSocketAddressClass == NULL) {
+ // out-of-memory!
+ throwOutOfMemoryError(env, "Error allocating memory");
+ return JNI_ERR;
+ }
+
void *mem = malloc(1);
if (mem == NULL) {
throwOutOfMemoryError(env, "Error allocating native buffer");
@@ -327,6 +369,12 @@ jint JNI_OnLoad(JavaVM* vm, void* reserved) {
}
socketType = socket_type();
+ datagramSocketAddrMethodId = (*env)->GetMethodID(env, datagramSocketAddressClass, "<init>", "(Ljava/lang/String;II)V");
+ if (datagramSocketAddrMethodId == NULL) {
+ throwRuntimeException(env, "Unable to obtain constructor of DatagramSocketAddress");
+ return JNI_ERR;
+ }
+
jclass addressEntryClass = (*env)->FindClass(env, "io/netty/channel/epoll/EpollChannelOutboundBuffer$AddressEntry");
if (addressEntryClass == NULL) {
// pending exception...
@@ -370,6 +418,9 @@ void JNI_OnUnload(JavaVM *vm, void *reserved) {
if (inetSocketAddressClass != NULL) {
(*env)->DeleteGlobalRef(env, inetSocketAddressClass);
}
+ if (datagramSocketAddressClass != NULL) {
+ (*env)->DeleteGlobalRef(env, datagramSocketAddressClass);
+ }
}
}
@@ -500,7 +551,6 @@ JNIEXPORT void JNICALL Java_io_netty_channel_epoll_Native_epollCtlDel(JNIEnv * e
}
}
-
jint write0(JNIEnv * env, jclass clazz, jint fd, void *buffer, jint pos, jint limit) {
ssize_t res;
int err;
@@ -544,6 +594,86 @@ JNIEXPORT jint JNICALL Java_io_netty_channel_epoll_Native_writeAddress(JNIEnv *
return write0(env, clazz, fd, (void *) address, pos, limit);
}
+jint sendTo0(JNIEnv * env, jint fd, void* buffer, jint pos, jint limit ,jbyteArray address, jint scopeId, jint port) {
+ struct sockaddr_storage addr;
+ init_sockaddr(env, address, scopeId, port, &addr);
+
+ ssize_t res;
+ int err;
+ do {
+ res = sendto(fd, buffer + pos, (size_t) (limit - pos), 0, (struct sockaddr *)&addr, sizeof(struct sockaddr_storage));
+ // keep on writing if it was interrupted
+ } while(res == -1 && ((err = errno) == EINTR));
+
+ if (res < 0) {
+ // network stack saturated... try again later
+ if (err == EAGAIN || err == EWOULDBLOCK) {
+ return 0;
+ }
+ if (err == EBADF) {
+ throwClosedChannelException(env);
+ return -1;
+ }
+ throwIOException(env, exceptionMessage("Error while sendto(...): ", err));
+ return -1;
+ }
+ return (jint) res;
+}
+
+JNIEXPORT jint JNICALL Java_io_netty_channel_epoll_Native_sendTo(JNIEnv * env, jclass clazz, jint fd, jobject jbuffer, jint pos, jint limit, jbyteArray address, jint scopeId, jint port) {
+ void *buffer = (*env)->GetDirectBufferAddress(env, jbuffer);
+ if (buffer == NULL) {
+ throwRuntimeException(env, "Unable to access address of buffer");
+ return -1;
+ }
+ return sendTo0(env, fd, buffer, pos, limit, address, scopeId, port);
+}
+
+JNIEXPORT jint JNICALL Java_io_netty_channel_epoll_Native_sendToAddress(JNIEnv * env, jclass clazz, jint fd, jlong memoryAddress, jint pos, jint limit ,jbyteArray address, jint scopeId, jint port) {
+ return sendTo0(env, fd, (void*) memoryAddress, pos, limit, address, scopeId, port);
+}
+
+jobject recvFrom0(JNIEnv * env, jint fd, void* buffer, jint pos, jint limit) {
+ struct sockaddr_storage addr;
+ socklen_t addrlen = sizeof(addr);
+ ssize_t res;
+ int err;
+
+ do {
+ res = recvfrom(fd, buffer + pos, (size_t) (limit - pos), 0, (struct sockaddr *)&addr, &addrlen);
+ // Keep on reading if we was interrupted
+ } while (res == -1 && ((err = errno) == EINTR));
+
+ if (res < 0) {
+ if (err == EAGAIN || err == EWOULDBLOCK) {
+ // Nothing left to read
+ return NULL;
+ }
+ if (err == EBADF) {
+ throwClosedChannelException(env);
+ return NULL;
+ }
+ throwIOException(env, exceptionMessage("Error while recvFrom(...): ", err));
+ return NULL;
+ }
+
+ return createDatagramSocketAddress(env, addr, res);
+}
+
+JNIEXPORT jobject JNICALL Java_io_netty_channel_epoll_Native_recvFrom(JNIEnv * env, jclass clazz, jint fd, jobject jbuffer, jint pos, jint limit) {
+ void *buffer = (*env)->GetDirectBufferAddress(env, jbuffer);
+ if (buffer == NULL) {
+ throwRuntimeException(env, "Unable to access address of buffer");
+ return NULL;
+ }
+
+ return recvFrom0(env, fd, buffer, pos, limit);
+}
+
+JNIEXPORT jobject JNICALL Java_io_netty_channel_epoll_Native_recvFromAddress(JNIEnv * env, jclass clazz, jint fd, jlong address, jint pos, jint limit) {
+ return recvFrom0(env, fd, (void*) address, pos, limit);
+}
+
void incrementPosition(JNIEnv * env, jobject bufObj, int written) {
// Get the current position using the (*env)->GetIntField if possible and fallback
// to slower (*env)->CallIntMethod(...) if needed
@@ -714,9 +844,9 @@ JNIEXPORT void JNICALL Java_io_netty_channel_epoll_Native_shutdown(JNIEnv * env,
}
}
-JNIEXPORT jint JNICALL Java_io_netty_channel_epoll_Native_socket(JNIEnv * env, jclass clazz) {
+jint socket0(JNIEnv * env, jclass clazz, int type) {
// TODO: Maybe also respect -Djava.net.preferIPv4Stack=true
- int fd = socket(socketType, SOCK_STREAM | SOCK_NONBLOCK, 0);
+ int fd = socket(socketType, type | SOCK_NONBLOCK, 0);
if (fd == -1) {
int err = errno;
throwIOException(env, exceptionMessage("Error creating socket: ", err));
@@ -733,6 +863,14 @@ JNIEXPORT jint JNICALL Java_io_netty_channel_epoll_Native_socket(JNIEnv * env, j
return fd;
}
+JNIEXPORT jint JNICALL Java_io_netty_channel_epoll_Native_socketDgram(JNIEnv * env, jclass clazz) {
+ return socket0(env, clazz, SOCK_DGRAM);
+}
+
+JNIEXPORT jint JNICALL Java_io_netty_channel_epoll_Native_socketStream(JNIEnv * env, jclass clazz) {
+ return socket0(env, clazz, SOCK_STREAM);
+}
+
JNIEXPORT void JNICALL Java_io_netty_channel_epoll_Native_bind(JNIEnv * env, jclass clazz, jint fd, jbyteArray address, jint scopeId, jint port) {
struct sockaddr_storage addr;
init_sockaddr(env, address, scopeId, port, &addr);
@@ -932,6 +1070,10 @@ JNIEXPORT void JNICALL Java_io_netty_channel_epoll_Native_setTrafficClass(JNIEnv
setOption(env, fd, SOL_SOCKET, SO_LINGER, &solinger, sizeof(solinger));
}
+JNIEXPORT void JNICALL Java_io_netty_channel_epoll_Native_setBroadcast(JNIEnv * env, jclass clazz, jint fd, jint optval) {
+ setOption(env, fd, SOL_SOCKET, SO_BROADCAST, &optval, sizeof(optval));
+}
+
JNIEXPORT jint JNICALL Java_io_netty_channel_epoll_Native_isReuseAddresss(JNIEnv *env, jclass clazz, jint fd) {
int optval;
if (getOption(env, fd, SOL_SOCKET, SO_REUSEADDR, &optval, sizeof(optval)) == -1) {
@@ -991,3 +1133,12 @@ JNIEXPORT jint JNICALL Java_io_netty_channel_epoll_Native_getTrafficClass(JNIEnv
}
return optval;
}
+
+JNIEXPORT jint JNICALL Java_io_netty_channel_epoll_Native_isBroadcast(JNIEnv *env, jclass clazz, jint fd) {
+ int optval;
+ if (getOption(env, fd, SOL_SOCKET, SO_BROADCAST, &optval, sizeof(optval)) == -1) {
+ return -1;
+ }
+ return optval;
+}
+
diff --git a/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.h b/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.h
index 6b6943da458..75c5f833c04 100644
--- a/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.h
+++ b/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.h
@@ -33,12 +33,18 @@ jint Java_io_netty_channel_epoll_Native_write(JNIEnv * env, jclass clazz, jint f
jint Java_io_netty_channel_epoll_Native_writeAddress(JNIEnv * env, jclass clazz, jint fd, jlong address, jint pos, jint limit);
jlong Java_io_netty_channel_epoll_Native_writev(JNIEnv * env, jclass clazz, jint fd, jobjectArray buffers, jint offset, jint length);
jlong Java_io_netty_channel_epoll_Native_writevAddresses(JNIEnv * env, jclass clazz, jint fd, jobjectArray addresses, jint offset, jint length);
+jint Java_io_netty_channel_epoll_Native_sendTo(JNIEnv * env, jclass clazz, jint fd, jobject jbuffer, jint pos, jint limit, jbyteArray address, jint scopeId, jint port);
+jint Java_io_netty_channel_epoll_Native_sendToAddress(JNIEnv * env, jclass clazz, jint fd, jlong memoryAddress, jint pos, jint limit, jbyteArray address, jint scopeId, jint port);
jint Java_io_netty_channel_epoll_Native_read(JNIEnv * env, jclass clazz, jint fd, jobject jbuffer, jint pos, jint limit);
jint Java_io_netty_channel_epoll_Native_readAddress(JNIEnv * env, jclass clazz, jint fd, jlong address, jint pos, jint limit);
+jobject Java_io_netty_channel_epoll_Native_recvFrom(JNIEnv * env, jclass clazz, jint fd, jobject jbuffer, jint pos, jint limit);
+jobject Java_io_netty_channel_epoll_Native_recvFromAddress(JNIEnv * env, jclass clazz, jint fd, jlong address, jint pos, jint limit);
void JNICALL Java_io_netty_channel_epoll_Native_close(JNIEnv * env, jclass clazz, jint fd);
void Java_io_netty_channel_epoll_Native_shutdown(JNIEnv * env, jclass clazz, jint fd, jboolean read, jboolean write);
-jint Java_io_netty_channel_epoll_Native_socket(JNIEnv * env, jclass clazz);
+jint Java_io_netty_channel_epoll_Native_socketStream(JNIEnv * env, jclass clazz);
+jint Java_io_netty_channel_epoll_Native_socketDgram(JNIEnv * env, jclass clazz);
+
void Java_io_netty_channel_epoll_Native_bind(JNIEnv * env, jclass clazz, jint fd, jbyteArray address, jint scopeId, jint port);
void Java_io_netty_channel_epoll_Native_listen(JNIEnv * env, jclass clazz, jint fd, jint backlog);
jboolean Java_io_netty_channel_epoll_Native_connect(JNIEnv * env, jclass clazz, jint fd, jbyteArray address, jint scopeId, jint port);
@@ -55,6 +61,7 @@ void Java_io_netty_channel_epoll_Native_setKeepAlive(JNIEnv *env, jclass clazz,
void Java_io_netty_channel_epoll_Native_setTcpCork(JNIEnv *env, jclass clazz, jint fd, jint optval);
void Java_io_netty_channel_epoll_Native_setSoLinger(JNIEnv *env, jclass clazz, jint fd, jint optval);
void Java_io_netty_channel_epoll_Native_setTrafficClass(JNIEnv *env, jclass clazz, jint fd, jint optval);
+void Java_io_netty_channel_epoll_Native_setBroadcast(JNIEnv *env, jclass clazz, jint fd, jint optval);
jint Java_io_netty_channel_epoll_Native_isReuseAddresss(JNIEnv *env, jclass clazz, jint fd);
jint Java_io_netty_channel_epoll_Native_isTcpNoDelay(JNIEnv *env, jclass clazz, jint fd);
jint Java_io_netty_channel_epoll_Native_getReceiveBufferSize(JNIEnv * env, jclass clazz, jint fd);
@@ -62,3 +69,4 @@ jint Java_io_netty_channel_epoll_Native_getSendBufferSize(JNIEnv *env, jclass cl
jint Java_io_netty_channel_epoll_Native_isTcpCork(JNIEnv *env, jclass clazz, jint fd);
jint Java_io_netty_channel_epoll_Native_getSoLinger(JNIEnv *env, jclass clazz, jint fd);
jint Java_io_netty_channel_epoll_Native_getTrafficClass(JNIEnv *env, jclass clazz, jint fd);
+jint Java_io_netty_channel_epoll_Native_isBroadcast(JNIEnv *env, jclass clazz, jint fd);
\ No newline at end of file
diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollChannel.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollChannel.java
index 08925f24409..6ffae33bea7 100644
--- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollChannel.java
+++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/AbstractEpollChannel.java
@@ -17,11 +17,9 @@
import io.netty.channel.AbstractChannel;
import io.netty.channel.Channel;
-import io.netty.channel.ChannelException;
import io.netty.channel.ChannelMetadata;
import io.netty.channel.EventLoop;
-import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.UnresolvedAddressException;
@@ -33,8 +31,8 @@ abstract class AbstractEpollChannel extends AbstractChannel {
volatile int fd;
int id;
- AbstractEpollChannel(int flag) {
- this(null, socketFd(), flag, false);
+ AbstractEpollChannel(int fd, int flag) {
+ this(null, fd, flag, false);
}
AbstractEpollChannel(Channel parent, int fd, int flag, boolean active) {
@@ -45,14 +43,6 @@ abstract class AbstractEpollChannel extends AbstractChannel {
this.active = active;
}
- private static int socketFd() {
- try {
- return Native.socket();
- } catch (IOException e) {
- throw new ChannelException(e);
- }
- }
-
@Override
public boolean isActive() {
return active;
@@ -120,6 +110,20 @@ protected final void clearEpollIn() {
}
}
+ protected final void setEpollOut() {
+ if ((flags & Native.EPOLLOUT) == 0) {
+ flags |= Native.EPOLLOUT;
+ ((EpollEventLoop) eventLoop()).modify(this);
+ }
+ }
+
+ protected final void clearEpollOut() {
+ if ((flags & Native.EPOLLOUT) != 0) {
+ flags &= ~Native.EPOLLOUT;
+ ((EpollEventLoop) eventLoop()).modify(this);
+ }
+ }
+
@Override
protected void doRegister() throws Exception {
EpollEventLoop loop = (EpollEventLoop) eventLoop();
diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollDatagramChannel.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollDatagramChannel.java
new file mode 100644
index 00000000000..29b42653381
--- /dev/null
+++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollDatagramChannel.java
@@ -0,0 +1,468 @@
+/*
+ * Copyright 2014 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License,
+ * version 2.0 (the "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+package io.netty.channel.epoll;
+
+import io.netty.buffer.ByteBuf;
+import io.netty.buffer.ByteBufHolder;
+import io.netty.channel.ChannelFuture;
+import io.netty.channel.ChannelMetadata;
+import io.netty.channel.ChannelOption;
+import io.netty.channel.ChannelOutboundBuffer;
+import io.netty.channel.ChannelPipeline;
+import io.netty.channel.ChannelPromise;
+import io.netty.channel.RecvByteBufAllocator;
+import io.netty.channel.socket.DatagramChannel;
+import io.netty.channel.socket.DatagramChannelConfig;
+import io.netty.channel.socket.DatagramPacket;
+import io.netty.util.internal.StringUtil;
+
+import java.io.IOException;
+import java.net.InetAddress;
+import java.net.InetSocketAddress;
+import java.net.NetworkInterface;
+import java.net.SocketAddress;
+import java.net.SocketException;
+import java.nio.ByteBuffer;
+import java.nio.channels.NotYetConnectedException;
+
+/**
+ * {@link DatagramChannel} implementation that uses Linux EPOLL edge-triggered mode for
+ * maximal performance.
+ */
+public final class EpollDatagramChannel extends AbstractEpollChannel implements DatagramChannel {
+ private static final ChannelMetadata METADATA = new ChannelMetadata(true);
+
+ private volatile InetSocketAddress local;
+ private volatile InetSocketAddress remote;
+ private volatile boolean connected;
+ private final EpollDatagramChannelConfig config;
+
+ public EpollDatagramChannel() {
+ super(Native.socketDgramFd(), Native.EPOLLIN);
+ config = new EpollDatagramChannelConfig(this);
+ }
+
+ @Override
+ public ChannelMetadata metadata() {
+ return METADATA;
+ }
+
+ @Override
+ public boolean isActive() {
+ return fd != -1 &&
+ ((config.getOption(ChannelOption.DATAGRAM_CHANNEL_ACTIVE_ON_REGISTRATION) && isRegistered())
+ || active);
+ }
+
+ @Override
+ public boolean isConnected() {
+ return connected;
+ }
+
+ @Override
+ public ChannelFuture joinGroup(InetAddress multicastAddress) {
+ return joinGroup(multicastAddress, newPromise());
+ }
+
+ @Override
+ public ChannelFuture joinGroup(InetAddress multicastAddress, ChannelPromise promise) {
+ try {
+ return joinGroup(
+ multicastAddress,
+ NetworkInterface.getByInetAddress(localAddress().getAddress()),
+ null, promise);
+ } catch (SocketException e) {
+ promise.setFailure(e);
+ }
+ return promise;
+ }
+
+ @Override
+ public ChannelFuture joinGroup(
+ InetSocketAddress multicastAddress, NetworkInterface networkInterface) {
+ return joinGroup(multicastAddress, networkInterface, newPromise());
+ }
+
+ @Override
+ public ChannelFuture joinGroup(
+ InetSocketAddress multicastAddress, NetworkInterface networkInterface,
+ ChannelPromise promise) {
+ return joinGroup(multicastAddress.getAddress(), networkInterface, null, promise);
+ }
+
+ @Override
+ public ChannelFuture joinGroup(
+ InetAddress multicastAddress, NetworkInterface networkInterface, InetAddress source) {
+ return joinGroup(multicastAddress, networkInterface, source, newPromise());
+ }
+
+ @Override
+ public ChannelFuture joinGroup(
+ final InetAddress multicastAddress, final NetworkInterface networkInterface,
+ final InetAddress source, final ChannelPromise promise) {
+
+ if (multicastAddress == null) {
+ throw new NullPointerException("multicastAddress");
+ }
+
+ if (networkInterface == null) {
+ throw new NullPointerException("networkInterface");
+ }
+
+ promise.setFailure(new UnsupportedOperationException("Multicast not supported"));
+ return promise;
+ }
+
+ @Override
+ public ChannelFuture leaveGroup(InetAddress multicastAddress) {
+ return leaveGroup(multicastAddress, newPromise());
+ }
+
+ @Override
+ public ChannelFuture leaveGroup(InetAddress multicastAddress, ChannelPromise promise) {
+ try {
+ return leaveGroup(
+ multicastAddress, NetworkInterface.getByInetAddress(localAddress().getAddress()), null, promise);
+ } catch (SocketException e) {
+ promise.setFailure(e);
+ }
+ return promise;
+ }
+
+ @Override
+ public ChannelFuture leaveGroup(
+ InetSocketAddress multicastAddress, NetworkInterface networkInterface) {
+ return leaveGroup(multicastAddress, networkInterface, newPromise());
+ }
+
+ @Override
+ public ChannelFuture leaveGroup(
+ InetSocketAddress multicastAddress,
+ NetworkInterface networkInterface, ChannelPromise promise) {
+ return leaveGroup(multicastAddress.getAddress(), networkInterface, null, promise);
+ }
+
+ @Override
+ public ChannelFuture leaveGroup(
+ InetAddress multicastAddress, NetworkInterface networkInterface, InetAddress source) {
+ return leaveGroup(multicastAddress, networkInterface, source, newPromise());
+ }
+
+ @Override
+ public ChannelFuture leaveGroup(
+ final InetAddress multicastAddress, final NetworkInterface networkInterface, final InetAddress source,
+ final ChannelPromise promise) {
+ if (multicastAddress == null) {
+ throw new NullPointerException("multicastAddress");
+ }
+ if (networkInterface == null) {
+ throw new NullPointerException("networkInterface");
+ }
+
+ promise.setFailure(new UnsupportedOperationException("Multicast not supported"));
+
+ return promise;
+ }
+
+ /**
+ * Block the given sourceToBlock address for the given multicastAddress on the given networkInterface
+ */
+ @Override
+ public ChannelFuture block(
+ InetAddress multicastAddress, NetworkInterface networkInterface,
+ InetAddress sourceToBlock) {
+ return block(multicastAddress, networkInterface, sourceToBlock, newPromise());
+ }
+
+ /**
+ * Block the given sourceToBlock address for the given multicastAddress on the given networkInterface
+ */
+ @Override
+ public ChannelFuture block(
+ final InetAddress multicastAddress, final NetworkInterface networkInterface,
+ final InetAddress sourceToBlock, final ChannelPromise promise) {
+ if (multicastAddress == null) {
+ throw new NullPointerException("multicastAddress");
+ }
+ if (sourceToBlock == null) {
+ throw new NullPointerException("sourceToBlock");
+ }
+
+ if (networkInterface == null) {
+ throw new NullPointerException("networkInterface");
+ }
+ promise.setFailure(new UnsupportedOperationException("Multicast not supported"));
+ return promise;
+ }
+
+ /**
+ * Block the given sourceToBlock address for the given multicastAddress
+ *
+ */
+ @Override
+ public ChannelFuture block(InetAddress multicastAddress, InetAddress sourceToBlock) {
+ return block(multicastAddress, sourceToBlock, newPromise());
+ }
+
+ /**
+ * Block the given sourceToBlock address for the given multicastAddress
+ *
+ */
+ @Override
+ public ChannelFuture block(
+ InetAddress multicastAddress, InetAddress sourceToBlock, ChannelPromise promise) {
+ try {
+ return block(
+ multicastAddress,
+ NetworkInterface.getByInetAddress(localAddress().getAddress()),
+ sourceToBlock, promise);
+ } catch (Throwable e) {
+ promise.setFailure(e);
+ }
+ return promise;
+ }
+
+ @Override
+ protected AbstractEpollUnsafe newUnsafe() {
+ return new EpollDatagramChannelUnsafe();
+ }
+
+ @Override
+ protected InetSocketAddress localAddress0() {
+ return local;
+ }
+
+ @Override
+ protected InetSocketAddress remoteAddress0() {
+ return remote;
+ }
+
+ @Override
+ protected void doBind(SocketAddress localAddress) throws Exception {
+ InetSocketAddress addr = (InetSocketAddress) localAddress;
+ checkResolvable(addr);
+ Native.bind(fd, addr.getAddress(), addr.getPort());
+ local = Native.localAddress(fd);
+ active = true;
+ }
+
+ @Override
+ protected void doWrite(ChannelOutboundBuffer in) throws Exception {
+ for (;;) {
+ Object msg = in.current();
+ if (msg == null) {
+ // Wrote all messages.
+ clearEpollOut();
+ break;
+ }
+
+ boolean done = false;
+ for (int i = config().getWriteSpinCount() - 1; i >= 0; i--) {
+ if (doWriteMessage(msg)) {
+ done = true;
+ break;
+ }
+ }
+
+ if (done) {
+ in.remove();
+ } else {
+ // Did not write all messages.
+ setEpollOut();
+ break;
+ }
+ }
+ }
+
+ private boolean doWriteMessage(Object msg) throws IOException {
+ final Object m;
+ InetSocketAddress remoteAddress;
+ ByteBuf data;
+ if (msg instanceof DatagramPacket) {
+ @SuppressWarnings("unchecked")
+ DatagramPacket packet = (DatagramPacket) msg;
+ remoteAddress = packet.recipient();
+ m = packet.content();
+ } else {
+ m = msg;
+ remoteAddress = null;
+ }
+
+ if (m instanceof ByteBufHolder) {
+ data = ((ByteBufHolder) m).content();
+ } else if (m instanceof ByteBuf) {
+ data = (ByteBuf) m;
+ } else {
+ throw new UnsupportedOperationException("unsupported message type: " + StringUtil.simpleClassName(msg));
+ }
+
+ int dataLen = data.readableBytes();
+ if (dataLen == 0) {
+ return true;
+ }
+
+ if (remoteAddress == null) {
+ remoteAddress = this.remote;
+ if (remoteAddress == null) {
+ throw new NotYetConnectedException();
+ }
+ }
+
+ final int writtenBytes;
+ if (data.hasMemoryAddress()) {
+ long memoryAddress = data.memoryAddress();
+ writtenBytes = Native.sendToAddress(fd, memoryAddress, data.readerIndex(), data.writerIndex(),
+ remoteAddress.getAddress(), remoteAddress.getPort());
+ } else {
+ ByteBuffer nioData = data.internalNioBuffer(data.readerIndex(), data.readableBytes());
+ writtenBytes = Native.sendTo(fd, nioData, nioData.position(), nioData.limit(),
+ remoteAddress.getAddress(), remoteAddress.getPort());
+ }
+ return writtenBytes > 0;
+ }
+
+ @Override
+ public EpollDatagramChannelConfig config() {
+ return config;
+ }
+
+ @Override
+ protected ChannelOutboundBuffer newOutboundBuffer() {
+ return EpollDatagramChannelOutboundBuffer.newInstance(this);
+ }
+
+ @Override
+ protected void doDisconnect() throws Exception {
+ connected = false;
+ }
+
+ final class EpollDatagramChannelUnsafe extends AbstractEpollUnsafe {
+ private RecvByteBufAllocator.Handle allocHandle;
+
+ @Override
+ public void connect(SocketAddress remote, SocketAddress local, ChannelPromise channelPromise) {
+ boolean success = false;
+ try {
+ try {
+ InetSocketAddress remoteAddress = (InetSocketAddress) remote;
+ if (local != null) {
+ InetSocketAddress localAddress = (InetSocketAddress) local;
+ doBind(localAddress);
+ }
+
+ checkResolvable(remoteAddress);
+ EpollDatagramChannel.this.remote = remoteAddress;
+ EpollDatagramChannel.this.local = Native.localAddress(fd);
+ success = true;
+ } finally {
+ if (!success) {
+ doClose();
+ } else {
+ channelPromise.setSuccess();
+ connected = true;
+ }
+ }
+ } catch (Throwable cause) {
+ channelPromise.setFailure(cause);
+ }
+ }
+
+ @Override
+ void epollInReady() {
+ DatagramChannelConfig config = config();
+ RecvByteBufAllocator.Handle allocHandle = this.allocHandle;
+ if (allocHandle == null) {
+ this.allocHandle = allocHandle = config.getRecvByteBufAllocator().newHandle();
+ }
+
+ assert eventLoop().inEventLoop();
+ final ChannelPipeline pipeline = pipeline();
+ Throwable exception = null;
+ try {
+ try {
+ for (;;) {
+ boolean free = true;
+ ByteBuf data = allocHandle.allocate(config.getAllocator());
+ int writerIndex = data.writerIndex();
+ DatagramSocketAddress remoteAddress;
+ if (data.hasMemoryAddress()) {
+ // has a memory address so use optimized call
+ remoteAddress = Native.recvFromAddress(
+ fd, data.memoryAddress(), writerIndex, data.capacity());
+ } else {
+ ByteBuffer nioData = data.internalNioBuffer(writerIndex, data.writableBytes());
+ remoteAddress = Native.recvFrom(
+ fd, nioData, nioData.position(), nioData.limit());
+ }
+
+ if (remoteAddress == null) {
+ break;
+ }
+
+ int readBytes = remoteAddress.receivedAmount;
+ data.writerIndex(data.writerIndex() + readBytes);
+ allocHandle.record(readBytes);
+ try {
+ readPending = false;
+ pipeline.fireChannelRead(
+ new DatagramPacket(data, (InetSocketAddress) localAddress(), remoteAddress));
+ free = false;
+ } catch (Throwable t) {
+ // keep on reading as we use epoll ET and need to consume everything from the socket
+ pipeline.fireChannelReadComplete();
+ pipeline.fireExceptionCaught(t);
+ } finally {
+ if (free) {
+ data.release();
+ }
+ }
+ }
+ } catch (Throwable t) {
+ exception = t;
+ }
+ pipeline.fireChannelReadComplete();
+
+ if (exception != null) {
+ pipeline.fireExceptionCaught(exception);
+ }
+ } finally {
+ // Check if there is a readPending which was not processed yet.
+ // This could be for two reasons:
+ // * The user called Channel.read() or ChannelHandlerContext.read() in channelRead(...) method
+ // * The user called Channel.read() or ChannelHandlerContext.read() in channelReadComplete(...) method
+ //
+ // See https://github.com/netty/netty/issues/2254
+ if (!config().isAutoRead() && !readPending) {
+ clearEpollIn();
+ }
+ }
+ }
+ }
+
+ /**
+ * Acts as a special {@link InetSocketAddress} to be able to easily pass all needed data from JNI without the need
+ * to create more objects than needed.
+ */
+ static final class DatagramSocketAddress extends InetSocketAddress {
+ // holds the amount of received bytes
+ final int receivedAmount;
+
+ DatagramSocketAddress(String addr, int port, int receivedAmount) {
+ super(addr, port);
+ this.receivedAmount = receivedAmount;
+ }
+ }
+}
diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollDatagramChannelConfig.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollDatagramChannelConfig.java
new file mode 100644
index 00000000000..18c754611e1
--- /dev/null
+++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollDatagramChannelConfig.java
@@ -0,0 +1,281 @@
+/*
+ * Copyright 2012 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License,
+ * version 2.0 (the "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+package io.netty.channel.epoll;
+
+import io.netty.buffer.ByteBufAllocator;
+import io.netty.channel.ChannelOption;
+import io.netty.channel.DefaultChannelConfig;
+import io.netty.channel.FixedRecvByteBufAllocator;
+import io.netty.channel.MessageSizeEstimator;
+import io.netty.channel.RecvByteBufAllocator;
+import io.netty.channel.socket.DatagramChannelConfig;
+
+import java.net.InetAddress;
+import java.net.NetworkInterface;
+import java.util.Map;
+
+public final class EpollDatagramChannelConfig extends DefaultChannelConfig implements DatagramChannelConfig {
+ private static final RecvByteBufAllocator DEFAULT_RCVBUF_ALLOCATOR = new FixedRecvByteBufAllocator(2048);
+ private final EpollDatagramChannel datagramChannel;
+ private boolean activeOnOpen;
+
+ EpollDatagramChannelConfig(EpollDatagramChannel channel) {
+ super(channel);
+ this.datagramChannel = channel;
+ setRecvByteBufAllocator(DEFAULT_RCVBUF_ALLOCATOR);
+ }
+
+ @Override
+ public Map<ChannelOption<?>, Object> getOptions() {
+ return getOptions(
+ super.getOptions(),
+ ChannelOption.SO_BROADCAST, ChannelOption.SO_RCVBUF, ChannelOption.SO_SNDBUF,
+ ChannelOption.SO_REUSEADDR, ChannelOption.IP_MULTICAST_LOOP_DISABLED,
+ ChannelOption.IP_MULTICAST_ADDR, ChannelOption.IP_MULTICAST_IF, ChannelOption.IP_MULTICAST_TTL,
+ ChannelOption.IP_TOS, ChannelOption.DATAGRAM_CHANNEL_ACTIVE_ON_REGISTRATION);
+ }
+
+ @SuppressWarnings("unchecked")
+ @Override
+ public <T> T getOption(ChannelOption<T> option) {
+ if (option == ChannelOption.SO_BROADCAST) {
+ return (T) Boolean.valueOf(isBroadcast());
+ }
+ if (option == ChannelOption.SO_RCVBUF) {
+ return (T) Integer.valueOf(getReceiveBufferSize());
+ }
+ if (option == ChannelOption.SO_SNDBUF) {
+ return (T) Integer.valueOf(getSendBufferSize());
+ }
+ if (option == ChannelOption.SO_REUSEADDR) {
+ return (T) Boolean.valueOf(isReuseAddress());
+ }
+ if (option == ChannelOption.IP_MULTICAST_LOOP_DISABLED) {
+ return (T) Boolean.valueOf(isLoopbackModeDisabled());
+ }
+ if (option == ChannelOption.IP_MULTICAST_ADDR) {
+ T i = (T) getInterface();
+ return i;
+ }
+ if (option == ChannelOption.IP_MULTICAST_IF) {
+ T i = (T) getNetworkInterface();
+ return i;
+ }
+ if (option == ChannelOption.IP_MULTICAST_TTL) {
+ return (T) Integer.valueOf(getTimeToLive());
+ }
+ if (option == ChannelOption.IP_TOS) {
+ return (T) Integer.valueOf(getTrafficClass());
+ }
+ if (option == ChannelOption.DATAGRAM_CHANNEL_ACTIVE_ON_REGISTRATION) {
+ return (T) Boolean.valueOf(activeOnOpen);
+ }
+ return super.getOption(option);
+ }
+
+ @Override
+ public <T> boolean setOption(ChannelOption<T> option, T value) {
+ validate(option, value);
+
+ if (option == ChannelOption.SO_BROADCAST) {
+ setBroadcast((Boolean) value);
+ } else if (option == ChannelOption.SO_RCVBUF) {
+ setReceiveBufferSize((Integer) value);
+ } else if (option == ChannelOption.SO_SNDBUF) {
+ setSendBufferSize((Integer) value);
+ } else if (option == ChannelOption.SO_REUSEADDR) {
+ setReuseAddress((Boolean) value);
+ } else if (option == ChannelOption.IP_MULTICAST_LOOP_DISABLED) {
+ setLoopbackModeDisabled((Boolean) value);
+ } else if (option == ChannelOption.IP_MULTICAST_ADDR) {
+ setInterface((InetAddress) value);
+ } else if (option == ChannelOption.IP_MULTICAST_IF) {
+ setNetworkInterface((NetworkInterface) value);
+ } else if (option == ChannelOption.IP_MULTICAST_TTL) {
+ setTimeToLive((Integer) value);
+ } else if (option == ChannelOption.IP_TOS) {
+ setTrafficClass((Integer) value);
+ } else if (option == ChannelOption.DATAGRAM_CHANNEL_ACTIVE_ON_REGISTRATION) {
+ setActiveOnOpen((Boolean) value);
+ } else {
+ return super.setOption(option, value);
+ }
+
+ return true;
+ }
+
+ private void setActiveOnOpen(boolean activeOnOpen) {
+ if (channel.isRegistered()) {
+ throw new IllegalStateException("Can only be changed before the channel is registered");
+ }
+ this.activeOnOpen = activeOnOpen;
+ }
+
+ @Override
+ public EpollDatagramChannelConfig setMessageSizeEstimator(MessageSizeEstimator estimator) {
+ super.setMessageSizeEstimator(estimator);
+ return this;
+ }
+
+ @Override
+ public EpollDatagramChannelConfig setWriteBufferLowWaterMark(int writeBufferLowWaterMark) {
+ super.setWriteBufferLowWaterMark(writeBufferLowWaterMark);
+ return this;
+ }
+
+ @Override
+ public EpollDatagramChannelConfig setWriteBufferHighWaterMark(int writeBufferHighWaterMark) {
+ super.setWriteBufferHighWaterMark(writeBufferHighWaterMark);
+ return this;
+ }
+
+ @Override
+ public EpollDatagramChannelConfig setAutoClose(boolean autoClose) {
+ super.setAutoClose(autoClose);
+ return this;
+ }
+
+ @Override
+ public EpollDatagramChannelConfig setAutoRead(boolean autoRead) {
+ super.setAutoRead(autoRead);
+ return this;
+ }
+
+ @Override
+ public EpollDatagramChannelConfig setRecvByteBufAllocator(RecvByteBufAllocator allocator) {
+ super.setRecvByteBufAllocator(allocator);
+ return this;
+ }
+
+ @Override
+ public EpollDatagramChannelConfig setWriteSpinCount(int writeSpinCount) {
+ super.setWriteSpinCount(writeSpinCount);
+ return this;
+ }
+
+ @Override
+ public EpollDatagramChannelConfig setAllocator(ByteBufAllocator allocator) {
+ super.setAllocator(allocator);
+ return this;
+ }
+
+ @Override
+ public EpollDatagramChannelConfig setConnectTimeoutMillis(int connectTimeoutMillis) {
+ super.setConnectTimeoutMillis(connectTimeoutMillis);
+ return this;
+ }
+
+ @Override
+ public EpollDatagramChannelConfig setMaxMessagesPerRead(int maxMessagesPerRead) {
+ super.setMaxMessagesPerRead(maxMessagesPerRead);
+ return this;
+ }
+
+ @Override
+ public int getSendBufferSize() {
+ return Native.getSendBufferSize(datagramChannel.fd);
+ }
+
+ @Override
+ public EpollDatagramChannelConfig setSendBufferSize(int sendBufferSize) {
+ Native.setSendBufferSize(datagramChannel.fd, sendBufferSize);
+ return this;
+ }
+
+ @Override
+ public int getReceiveBufferSize() {
+ return Native.getReceiveBufferSize(datagramChannel.fd);
+ }
+
+ @Override
+ public EpollDatagramChannelConfig setReceiveBufferSize(int receiveBufferSize) {
+ Native.setReceiveBufferSize(datagramChannel.fd, receiveBufferSize);
+ return this;
+ }
+
+ @Override
+ public int getTrafficClass() {
+ return Native.getTrafficClass(datagramChannel.fd);
+ }
+
+ @Override
+ public EpollDatagramChannelConfig setTrafficClass(int trafficClass) {
+ Native.setTrafficClass(datagramChannel.fd, trafficClass);
+ return this;
+ }
+
+ @Override
+ public boolean isReuseAddress() {
+ return Native.isReuseAddress(datagramChannel.fd) == 1;
+ }
+
+ @Override
+ public EpollDatagramChannelConfig setReuseAddress(boolean reuseAddress) {
+ Native.setReuseAddress(datagramChannel.fd, reuseAddress ? 1 : 0);
+ return this;
+ }
+
+ @Override
+ public boolean isBroadcast() {
+ return Native.isBroadcast(datagramChannel.fd) == 1;
+ }
+
+ @Override
+ public EpollDatagramChannelConfig setBroadcast(boolean broadcast) {
+ Native.setBroadcast(datagramChannel.fd, broadcast ? 1 : 0);
+ return this;
+ }
+
+ @Override
+ public boolean isLoopbackModeDisabled() {
+ return false;
+ }
+
+ @Override
+ public DatagramChannelConfig setLoopbackModeDisabled(boolean loopbackModeDisabled) {
+ throw new UnsupportedOperationException("Multicast not supported");
+ }
+
+ @Override
+ public int getTimeToLive() {
+ return -1;
+ }
+
+ @Override
+ public EpollDatagramChannelConfig setTimeToLive(int ttl) {
+ throw new UnsupportedOperationException("Multicast not supported");
+ }
+
+ @Override
+ public InetAddress getInterface() {
+ return null;
+ }
+
+ @Override
+ public EpollDatagramChannelConfig setInterface(InetAddress interfaceAddress) {
+ throw new UnsupportedOperationException("Multicast not supported");
+ }
+
+ @Override
+ public NetworkInterface getNetworkInterface() {
+ return null;
+ }
+
+ @Override
+ public EpollDatagramChannelConfig setNetworkInterface(NetworkInterface networkInterface) {
+ throw new UnsupportedOperationException("Multicast not supported");
+ }
+}
diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollDatagramChannelOutboundBuffer.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollDatagramChannelOutboundBuffer.java
new file mode 100644
index 00000000000..0d922a12d7c
--- /dev/null
+++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollDatagramChannelOutboundBuffer.java
@@ -0,0 +1,63 @@
+/*
+ * Copyright 2014 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License,
+ * version 2.0 (the "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+package io.netty.channel.epoll;
+
+import io.netty.buffer.ByteBuf;
+import io.netty.channel.ChannelOutboundBuffer;
+import io.netty.channel.socket.DatagramPacket;
+import io.netty.util.Recycler;
+
+final class EpollDatagramChannelOutboundBuffer extends ChannelOutboundBuffer {
+ private static final Recycler<EpollDatagramChannelOutboundBuffer> RECYCLER =
+ new Recycler<EpollDatagramChannelOutboundBuffer>() {
+ @Override
+ protected EpollDatagramChannelOutboundBuffer newObject(Handle<EpollDatagramChannelOutboundBuffer> handle) {
+ return new EpollDatagramChannelOutboundBuffer(handle);
+ }
+ };
+
+ static EpollDatagramChannelOutboundBuffer newInstance(EpollDatagramChannel channel) {
+ EpollDatagramChannelOutboundBuffer buffer = RECYCLER.get();
+ buffer.channel = channel;
+ return buffer;
+ }
+
+ private EpollDatagramChannelOutboundBuffer(Recycler.Handle<EpollDatagramChannelOutboundBuffer> handle) {
+ super(handle);
+ }
+
+ @Override
+ protected Object beforeAdd(Object msg) {
+ if (msg instanceof DatagramPacket) {
+ DatagramPacket packet = (DatagramPacket) msg;
+ ByteBuf content = packet.content();
+ if (isCopyNeeded(content)) {
+ ByteBuf direct = copyToDirectByteBuf(content);
+ return new DatagramPacket(direct, packet.recipient(), packet.sender());
+ }
+ } else if (msg instanceof ByteBuf) {
+ ByteBuf buf = (ByteBuf) msg;
+ if (isCopyNeeded(buf)) {
+ msg = copyToDirectByteBuf((ByteBuf) msg);
+ }
+ }
+ return msg;
+ }
+
+ private static boolean isCopyNeeded(ByteBuf content) {
+ return !content.hasMemoryAddress() || content.nioBufferCount() != 1;
+ }
+}
diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollServerSocketChannel.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollServerSocketChannel.java
index fb851e1ab8e..f95afbb46e4 100644
--- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollServerSocketChannel.java
+++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollServerSocketChannel.java
@@ -35,7 +35,7 @@ public final class EpollServerSocketChannel extends AbstractEpollChannel impleme
private volatile InetSocketAddress local;
public EpollServerSocketChannel() {
- super(Native.EPOLLACCEPT);
+ super(Native.socketStreamFd(), Native.EPOLLACCEPT);
config = new EpollServerSocketChannelConfig(this);
}
diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollSocketChannel.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollSocketChannel.java
index e89b3a0f845..12d5d1c3659 100644
--- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollSocketChannel.java
+++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollSocketChannel.java
@@ -76,7 +76,7 @@ public final class EpollSocketChannel extends AbstractEpollChannel implements So
}
public EpollSocketChannel() {
- super(Native.EPOLLIN);
+ super(Native.socketStreamFd(), Native.EPOLLIN);
config = new EpollSocketChannelConfig(this);
}
@@ -102,20 +102,6 @@ protected void doBind(SocketAddress local) throws Exception {
this.local = Native.localAddress(fd);
}
- private void setEpollOut() {
- if ((flags & Native.EPOLLOUT) == 0) {
- flags |= Native.EPOLLOUT;
- ((EpollEventLoop) eventLoop()).modify(this);
- }
- }
-
- private void clearEpollOut() {
- if ((flags & Native.EPOLLOUT) != 0) {
- flags &= ~Native.EPOLLOUT;
- ((EpollEventLoop) eventLoop()).modify(this);
- }
- }
-
/**
* Write bytes form the given {@link ByteBuf} to the underlying {@link java.nio.channels.Channel}.
* @param buf the {@link ByteBuf} from which the bytes should be written
diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/Native.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/Native.java
index 43f7cfeebb1..54b78fb476d 100644
--- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/Native.java
+++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/Native.java
@@ -16,6 +16,7 @@
package io.netty.channel.epoll;
+import io.netty.channel.ChannelException;
import io.netty.channel.DefaultFileRegion;
import io.netty.channel.epoll.EpollChannelOutboundBuffer.AddressEntry;
import io.netty.util.internal.NativeLibraryLoader;
@@ -76,9 +77,10 @@ public static native long writevAddresses(int fd, AddressEntry[] addresses, int
public static native long sendfile(int dest, DefaultFileRegion src, long offset, long length) throws IOException;
- // socket operations
- public static native int socket() throws IOException;
- public static void bind(int fd, InetAddress addr, int port) throws IOException {
+ public static int sendTo(
+ int fd, ByteBuffer buf, int pos, int limit, InetAddress addr, int port) throws IOException {
+ // just duplicate the toNativeInetAddress code here to minimize object creation as this method is expected
+ // to be called frequently
byte[] address;
int scopeId;
if (addr instanceof Inet6Address) {
@@ -89,19 +91,16 @@ public static void bind(int fd, InetAddress addr, int port) throws IOException {
scopeId = 0;
address = ipv4MappedIpv6Address(addr.getAddress());
}
- bind(fd, address, scopeId, port);
+ return sendTo(fd, buf, pos, limit, address, scopeId, port);
}
- private static byte[] ipv4MappedIpv6Address(byte[] ipv4) {
- byte[] address = new byte[16];
- System.arraycopy(IPV4_MAPPED_IPV6_PREFIX, 0, address, 0, IPV4_MAPPED_IPV6_PREFIX.length);
- System.arraycopy(ipv4, 0, address, 12, ipv4.length);
- return address;
- }
+ private static native int sendTo(
+ int fd, ByteBuffer buf, int pos, int limit, byte[] address, int scopeId, int port) throws IOException;
- public static native void bind(int fd, byte[] address, int scopeId, int port) throws IOException;
- public static native void listen(int fd, int backlog) throws IOException;
- public static boolean connect(int fd, InetAddress addr, int port) throws IOException {
+ public static int sendToAddress(
+ int fd, long memoryAddress, int pos, int limit, InetAddress addr, int port) throws IOException {
+ // just duplicate the toNativeInetAddress code here to minimize object creation as this method is expected
+ // to be called frequently
byte[] address;
int scopeId;
if (addr instanceof Inet6Address) {
@@ -112,7 +111,54 @@ public static boolean connect(int fd, InetAddress addr, int port) throws IOExcep
scopeId = 0;
address = ipv4MappedIpv6Address(addr.getAddress());
}
- return connect(fd, address, scopeId, port);
+ return sendToAddress(fd, memoryAddress, pos, limit, address, scopeId, port);
+ }
+
+ private static native int sendToAddress(
+ int fd, long memoryAddress, int pos, int limit, byte[] address, int scopeId, int port) throws IOException;
+
+ public static native EpollDatagramChannel.DatagramSocketAddress recvFrom(
+ int fd, ByteBuffer buf, int pos, int limit) throws IOException;
+
+ public static native EpollDatagramChannel.DatagramSocketAddress recvFromAddress(
+ int fd, long memoryAddress, int pos, int limit) throws IOException;
+
+ // socket operations
+ public static int socketStreamFd() {
+ try {
+ return socketStream();
+ } catch (IOException e) {
+ throw new ChannelException(e);
+ }
+ }
+
+ public static int socketDgramFd() {
+ try {
+ return socketDgram();
+ } catch (IOException e) {
+ throw new ChannelException(e);
+ }
+ }
+ private static native int socketStream() throws IOException;
+ private static native int socketDgram() throws IOException;
+
+ public static void bind(int fd, InetAddress addr, int port) throws IOException {
+ NativeInetAddress address = toNativeInetAddress(addr);
+ bind(fd, address.address, address.scopeId, port);
+ }
+
+ private static byte[] ipv4MappedIpv6Address(byte[] ipv4) {
+ byte[] address = new byte[16];
+ System.arraycopy(IPV4_MAPPED_IPV6_PREFIX, 0, address, 0, IPV4_MAPPED_IPV6_PREFIX.length);
+ System.arraycopy(ipv4, 0, address, 12, ipv4.length);
+ return address;
+ }
+
+ public static native void bind(int fd, byte[] address, int scopeId, int port) throws IOException;
+ public static native void listen(int fd, int backlog) throws IOException;
+ public static boolean connect(int fd, InetAddress addr, int port) throws IOException {
+ NativeInetAddress address = toNativeInetAddress(addr);
+ return connect(fd, address.address, address.scopeId, port);
}
public static native boolean connect(int fd, byte[] address, int scopeId, int port) throws IOException;
public static native boolean finishConnect(int fd) throws IOException;
@@ -131,6 +177,7 @@ public static boolean connect(int fd, InetAddress addr, int port) throws IOExcep
public static native int isTcpCork(int fd);
public static native int getSoLinger(int fd);
public static native int getTrafficClass(int fd);
+ public static native int isBroadcast(int fd);
public static native void setKeepAlive(int fd, int keepAlive);
public static native void setReceiveBufferSize(int fd, int receiveBufferSize);
@@ -140,6 +187,31 @@ public static boolean connect(int fd, InetAddress addr, int port) throws IOExcep
public static native void setTcpCork(int fd, int tcpCork);
public static native void setSoLinger(int fd, int soLinger);
public static native void setTrafficClass(int fd, int tcpNoDelay);
+ public static native void setBroadcast(int fd, int broadcast);
+
+ private static NativeInetAddress toNativeInetAddress(InetAddress addr) {
+ byte[] bytes = addr.getAddress();
+ if (addr instanceof Inet6Address) {
+ return new NativeInetAddress(bytes, ((Inet6Address) addr).getScopeId());
+ } else {
+ // convert to ipv4 mapped ipv6 address;
+ return new NativeInetAddress(ipv4MappedIpv6Address(bytes));
+ }
+ }
+
+ private static class NativeInetAddress {
+ final byte[] address;
+ final int scopeId;
+
+ NativeInetAddress(byte[] address, int scopeId) {
+ this.address = address;
+ this.scopeId = scopeId;
+ }
+
+ NativeInetAddress(byte[] address) {
+ this(address, 0);
+ }
+ }
private Native() {
// utility
| diff --git a/transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollDatagramUnicastTest.java b/transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollDatagramUnicastTest.java
new file mode 100644
index 00000000000..610cfb5d851
--- /dev/null
+++ b/transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollDatagramUnicastTest.java
@@ -0,0 +1,29 @@
+/*
+ * Copyright 2014 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License,
+ * version 2.0 (the "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+package io.netty.channel.epoll;
+
+import io.netty.bootstrap.Bootstrap;
+import io.netty.testsuite.transport.TestsuitePermutation;
+import io.netty.testsuite.transport.socket.DatagramUnicastTest;
+
+import java.util.List;
+
+public class EpollDatagramUnicastTest extends DatagramUnicastTest {
+ @Override
+ protected List<TestsuitePermutation.BootstrapComboFactory<Bootstrap, Bootstrap>> newFactories() {
+ return EpollSocketTestPermutation.INSTANCE.datagram();
+ }
+}
diff --git a/transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollSocketTestPermutation.java b/transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollSocketTestPermutation.java
index a452a956e4d..85cb022a6dd 100644
--- a/transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollSocketTestPermutation.java
+++ b/transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollSocketTestPermutation.java
@@ -16,8 +16,12 @@
package io.netty.channel.epoll;
import io.netty.bootstrap.Bootstrap;
+import io.netty.bootstrap.ChannelFactory;
import io.netty.bootstrap.ServerBootstrap;
+import io.netty.channel.Channel;
import io.netty.channel.EventLoopGroup;
+import io.netty.channel.socket.InternetProtocolFamily;
+import io.netty.channel.socket.nio.NioDatagramChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;
import io.netty.testsuite.transport.TestsuitePermutation;
@@ -85,4 +89,34 @@ public Bootstrap newInstance() {
}
);
}
+
+ @Override
+ public List<TestsuitePermutation.BootstrapComboFactory<Bootstrap, Bootstrap>> datagram() {
+ // Make the list of Bootstrap factories.
+ List<BootstrapFactory<Bootstrap>> bfs = Arrays.asList(
+ new BootstrapFactory<Bootstrap>() {
+ @Override
+ public Bootstrap newInstance() {
+ return new Bootstrap().group(nioWorkerGroup).channelFactory(new ChannelFactory<Channel>() {
+ @Override
+ public Channel newChannel() {
+ return new NioDatagramChannel(InternetProtocolFamily.IPv4);
+ }
+
+ @Override
+ public String toString() {
+ return NioDatagramChannel.class.getSimpleName() + ".class";
+ }
+ });
+ }
+ },
+ new BootstrapFactory<Bootstrap>() {
+ @Override
+ public Bootstrap newInstance() {
+ return new Bootstrap().group(epollWorkerGroup).channel(EpollDatagramChannel.class);
+ }
+ }
+ );
+ return combo(bfs, bfs);
+ }
}
| train | train | 2014-04-15T07:03:13 | 2014-04-11T06:09:44Z | trustin | val |
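The epoll `Native` bindings in the record above convert every IPv4 address to its IPv4-mapped IPv6 form before crossing into JNI, so the native side only ever sees 16-byte addresses. The patch shows the copy logic (`ipv4MappedIpv6Address`) but not the contents of `IPV4_MAPPED_IPV6_PREFIX`; the sketch below assumes the standard RFC 4291 mapping (`::ffff:a.b.c.d`, i.e. ten zero bytes followed by `0xff 0xff`), and the class name `Ipv4Mapped` is illustrative, not Netty's.

```java
import java.util.Arrays;

public class Ipv4Mapped {
    // Assumed standard IPv4-mapped IPv6 prefix (RFC 4291): ten zero bytes,
    // then 0xff 0xff. Netty keeps an equivalent constant in
    // IPV4_MAPPED_IPV6_PREFIX, whose bytes are not shown in the patch.
    static final byte[] PREFIX = {
            0, 0, 0, 0, 0, 0, 0, 0, 0, 0, (byte) 0xff, (byte) 0xff
    };

    /** Embeds a 4-byte IPv4 address into a 16-byte IPv6 address. */
    static byte[] ipv4MappedIpv6Address(byte[] ipv4) {
        byte[] address = new byte[16];
        System.arraycopy(PREFIX, 0, address, 0, PREFIX.length);
        System.arraycopy(ipv4, 0, address, 12, ipv4.length);
        return address;
    }

    public static void main(String[] args) {
        // 127.0.0.1 becomes ::ffff:127.0.0.1
        System.out.println(Arrays.toString(
                ipv4MappedIpv6Address(new byte[] {127, 0, 0, 1})));
    }
}
```

This mirrors why `toNativeInetAddress` in the patch only needs two cases: `Inet6Address` passes through with its scope id, and everything else is widened to 16 bytes first.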
netty/netty/2402_2403 | netty/netty | netty/netty/2402 | netty/netty/2403 | [
"timestamp(timedelta=27.0, similarity=0.9437644332585227)"
] | c66aae3539739c4cf4ef049f81b7b545491eb3d7 | 1ccdb6f4701174d429ed8cf31a159d62cd4802c5 | [
"There are a couple places where client/server context is important, so it might end up propagating itself anyway.\n\nFor example, push_promise setting should only be sent by the client, push promises should only be sent by the server. client vs server stream ids (already handled I think).\n\nThat said, I don't min... | [
"This is to prevent control from passing on if we're still reading the preface. Could probably change this to checking if any bytes are left in the incoming ByteBuf that haven't been consumed yet.\n",
"Note! Advice wanted.\n",
"nit accidental wildcard\n",
"IntelliJ keeps doing this automatically no matter w... | 2014-04-16T22:47:40Z | [] | HTTP2 server incorrectly sends preface string | We discovered this problem when we found the OkHttp client couldn't talk to the netty server.
https://tools.ietf.org/html/draft-ietf-httpbis-http2-10#section-3.5
The wording of the spec is slightly confusing, but only the client sends the magic preface string that begins with "PRI *".
> The server connection header consists of just a SETTINGS frame (Section 6.5) that MUST be the first frame the server sends in the HTTP/2 connection.
I'm happy to tackle this but would like to discuss a couple of alternatives.
1) The simple solution is to wire a "boolean server" down through to Http2FrameEncoder and Http2FrameDecoder so they know whether they are client or server. I hacked this together and it's straightforward, but kind of ugly.
2) The "nicer"(?) solution might be to split out the preface sending / receiving feature from Http2FrameEncoder and Http2FrameDecoder entirely, in favor of an explicit Http2PrefaceWriter and Http2PrefaceReader you add to the pipeline, which do their thing and then remove themselves.
Thoughts?
@normanmaurer @nmittler @adriancole @jh
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/draft10/Http2OrHttpChooser.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/draft10/connection/Http2ConnectionHandler.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/draft10/frame/decoder/Http2FrameDecoder.java",
"codec-http2... | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/draft10/Http2OrHttpChooser.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/draft10/connection/Http2ConnectionHandler.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/draft10/frame/decoder/Http2FrameDecoder.java",
"codec-http2... | [] | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/draft10/Http2OrHttpChooser.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/draft10/Http2OrHttpChooser.java
index db51b7c8572..765a09c026e 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/draft10/Http2OrHttpChooser.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/draft10/Http2OrHttpChooser.java
@@ -25,6 +25,7 @@
import io.netty.handler.codec.http.HttpResponseEncoder;
import io.netty.handler.codec.http2.draft10.connection.Http2ConnectionHandler;
import io.netty.handler.codec.http2.draft10.frame.Http2FrameCodec;
+import io.netty.handler.codec.http2.draft10.frame.decoder.Http2ServerPrefaceReader;
import io.netty.handler.ssl.SslHandler;
import javax.net.ssl.SSLEngine;
@@ -129,6 +130,7 @@ private boolean initPipeline(ChannelHandlerContext ctx) {
*/
protected void addHttp2Handlers(ChannelHandlerContext ctx) {
ChannelPipeline pipeline = ctx.pipeline();
+ pipeline.addLast("http2ServerPrefaceReader", new Http2ServerPrefaceReader());
pipeline.addLast("http2FrameCodec", new Http2FrameCodec());
pipeline.addLast("http2ConnectionHandler", new Http2ConnectionHandler(true));
pipeline.addLast("http2RequestHandler", createHttp2RequestHandler());
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/draft10/connection/Http2ConnectionHandler.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/draft10/connection/Http2ConnectionHandler.java
index 43edec66490..ad01a45eb55 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/draft10/connection/Http2ConnectionHandler.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/draft10/connection/Http2ConnectionHandler.java
@@ -15,16 +15,6 @@
package io.netty.handler.codec.http2.draft10.connection;
-import static io.netty.handler.codec.http2.draft10.Http2Error.PROTOCOL_ERROR;
-import static io.netty.handler.codec.http2.draft10.Http2Error.STREAM_CLOSED;
-import static io.netty.handler.codec.http2.draft10.Http2Exception.format;
-import static io.netty.handler.codec.http2.draft10.Http2Exception.protocolError;
-import static io.netty.handler.codec.http2.draft10.connection.Http2ConnectionUtil.toHttp2Exception;
-import static io.netty.handler.codec.http2.draft10.connection.Http2Stream.State.HALF_CLOSED_LOCAL;
-import static io.netty.handler.codec.http2.draft10.connection.Http2Stream.State.HALF_CLOSED_REMOTE;
-import static io.netty.handler.codec.http2.draft10.connection.Http2Stream.State.OPEN;
-import static io.netty.handler.codec.http2.draft10.connection.Http2Stream.State.RESERVED_LOCAL;
-import static io.netty.handler.codec.http2.draft10.connection.Http2Stream.State.RESERVED_REMOTE;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerAdapter;
@@ -48,6 +38,11 @@
import io.netty.handler.codec.http2.draft10.frame.Http2WindowUpdateFrame;
import io.netty.util.ReferenceCountUtil;
+import static io.netty.handler.codec.http2.draft10.Http2Error.*;
+import static io.netty.handler.codec.http2.draft10.Http2Exception.*;
+import static io.netty.handler.codec.http2.draft10.connection.Http2ConnectionUtil.*;
+import static io.netty.handler.codec.http2.draft10.connection.Http2Stream.State.*;
+
/**
* Handler for HTTP/2 connection state. Manages inbound and outbound flow control for data frames.
* Handles error conditions as defined by the HTTP/2 spec and controls appropriate shutdown of the
@@ -153,7 +148,7 @@ public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws E
processHttp2Exception(ctx, (Http2Exception) cause);
}
- ctx.fireExceptionCaught(cause);
+ super.exceptionCaught(ctx, cause);
}
@Override
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/draft10/frame/decoder/Http2FrameDecoder.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/draft10/frame/decoder/Http2FrameDecoder.java
index 73694c8e461..cd81870d77d 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/draft10/frame/decoder/Http2FrameDecoder.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/draft10/frame/decoder/Http2FrameDecoder.java
@@ -15,12 +15,6 @@
package io.netty.handler.codec.http2.draft10.frame.decoder;
-import static io.netty.handler.codec.http2.draft10.Http2Error.PROTOCOL_ERROR;
-import static io.netty.handler.codec.http2.draft10.Http2Exception.format;
-import static io.netty.handler.codec.http2.draft10.frame.Http2FrameCodecUtil.FRAME_HEADER_LENGTH;
-import static io.netty.handler.codec.http2.draft10.frame.Http2FrameCodecUtil.FRAME_LENGTH_MASK;
-import static io.netty.handler.codec.http2.draft10.frame.Http2FrameCodecUtil.connectionPrefaceBuf;
-import static io.netty.handler.codec.http2.draft10.frame.Http2FrameCodecUtil.readUnsignedInt;
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ByteToMessageDecoder;
@@ -31,6 +25,8 @@
import java.util.List;
+import static io.netty.handler.codec.http2.draft10.frame.Http2FrameCodecUtil.*;
+
/**
* Decodes {@link Http2Frame} objects from an input {@link ByteBuf}. The frames that this handler
* emits can be configured by providing a {@link Http2FrameUnmarshaller}. By default, the
@@ -42,14 +38,12 @@
public class Http2FrameDecoder extends ByteToMessageDecoder {
private enum State {
- PREFACE,
FRAME_HEADER,
FRAME_PAYLOAD,
ERROR
}
private final Http2FrameUnmarshaller frameUnmarshaller;
- private final ByteBuf preface;
private State state;
private int payloadLength;
@@ -62,23 +56,13 @@ public Http2FrameDecoder(Http2FrameUnmarshaller frameUnmarshaller) {
throw new NullPointerException("frameUnmarshaller");
}
this.frameUnmarshaller = frameUnmarshaller;
- preface = connectionPrefaceBuf();
- state = State.PREFACE;
+ state = State.FRAME_HEADER;
}
@Override
protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) throws Exception {
try {
switch (state) {
- case PREFACE:
- processHttp2Preface(ctx, in);
- if (state == State.PREFACE) {
- // Still processing the preface.
- break;
- }
-
- // Successfully processed the HTTP2 preface.
-
case FRAME_HEADER:
processFrameHeader(in);
if (state == State.FRAME_HEADER) {
@@ -104,30 +88,6 @@ protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) t
}
}
- private void processHttp2Preface(ChannelHandlerContext ctx, ByteBuf in) throws Http2Exception {
- int prefaceRemaining = preface.readableBytes();
- int bytesRead = Math.min(in.readableBytes(), prefaceRemaining);
-
- // Read the portion of the input up to the length of the preface, if reached.
- ByteBuf sourceSlice = in.readSlice(bytesRead);
-
- // Read the same number of bytes from the preface buffer.
- ByteBuf prefaceSlice = preface.readSlice(bytesRead);
-
- // If the input so far doesn't match the preface, break the connection.
- if (bytesRead == 0 || !prefaceSlice.equals(sourceSlice)) {
- throw format(PROTOCOL_ERROR, "Invalid HTTP2 preface");
- }
-
- if ((prefaceRemaining - bytesRead) > 0) {
- // Wait until the entire preface has arrived.
- return;
- }
-
- // Start processing the first header.
- state = State.FRAME_HEADER;
- }
-
private void processFrameHeader(ByteBuf in) throws Http2Exception {
if (in.readableBytes() < FRAME_HEADER_LENGTH) {
// Wait until the entire frame header has been read.
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/draft10/frame/decoder/Http2ServerPrefaceReader.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/draft10/frame/decoder/Http2ServerPrefaceReader.java
new file mode 100644
index 00000000000..582eda39805
--- /dev/null
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/draft10/frame/decoder/Http2ServerPrefaceReader.java
@@ -0,0 +1,72 @@
+/*
+ * Copyright 2014 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License, version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+
+package io.netty.handler.codec.http2.draft10.frame.decoder;
+
+import io.netty.buffer.ByteBuf;
+import io.netty.channel.ChannelHandlerAdapter;
+import io.netty.channel.ChannelHandlerContext;
+import io.netty.handler.codec.http2.draft10.Http2Exception;
+
+import static io.netty.handler.codec.http2.draft10.Http2Error.*;
+import static io.netty.handler.codec.http2.draft10.Http2Exception.*;
+import static io.netty.handler.codec.http2.draft10.frame.Http2FrameCodecUtil.*;
+
+/**
+ * Reads the initial client preface, then removes itself from the pipeline.
+ * Only the server pipeline should do this.
+ *
+ * https://tools.ietf.org/html/draft-ietf-httpbis-http2-10#section-3.5
+ */
+public class Http2ServerPrefaceReader extends ChannelHandlerAdapter {
+
+ private final ByteBuf preface = connectionPrefaceBuf();
+
+ @Override
+ public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
+ if (preface.isReadable() && msg instanceof ByteBuf) {
+ ByteBuf buf = (ByteBuf) msg;
+ processHttp2Preface(ctx, buf);
+ if (preface.isReadable()) {
+ // More preface left to process.
+ buf.release();
+ return;
+ }
+ }
+ super.channelRead(ctx, msg);
+ }
+
+ private void processHttp2Preface(ChannelHandlerContext ctx, ByteBuf in) throws Http2Exception {
+ int prefaceRemaining = preface.readableBytes();
+ int bytesRead = Math.min(in.readableBytes(), prefaceRemaining);
+
+ // Read the portion of the input up to the length of the preface, if reached.
+ ByteBuf sourceSlice = in.readSlice(bytesRead);
+
+ // Read the same number of bytes from the preface buffer.
+ ByteBuf prefaceSlice = preface.readSlice(bytesRead);
+
+ // If the input so far doesn't match the preface, break the connection.
+ if (bytesRead == 0 || !prefaceSlice.equals(sourceSlice)) {
+ throw format(PROTOCOL_ERROR, "Invalid HTTP2 preface");
+ }
+
+ if (!preface.isReadable()) {
+ // Entire preface has been read, remove ourselves from the pipeline.
+ ctx.pipeline().remove(this);
+ }
+ }
+
+}
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/draft10/frame/encoder/Http2ClientPrefaceWriter.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/draft10/frame/encoder/Http2ClientPrefaceWriter.java
new file mode 100644
index 00000000000..fcb7745a91d
--- /dev/null
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/draft10/frame/encoder/Http2ClientPrefaceWriter.java
@@ -0,0 +1,73 @@
+/*
+ * Copyright 2014 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License, version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+
+package io.netty.handler.codec.http2.draft10.frame.encoder;
+
+import io.netty.channel.ChannelFuture;
+import io.netty.channel.ChannelFutureListener;
+import io.netty.channel.ChannelHandlerAdapter;
+import io.netty.channel.ChannelHandlerContext;
+
+import static io.netty.handler.codec.http2.draft10.frame.Http2FrameCodecUtil.*;
+
+/**
+ * Sends the initial client preface, then removes itself from the pipeline.
+ * Only the client pipeline should do this.
+ *
+ * https://tools.ietf.org/html/draft-ietf-httpbis-http2-10#section-3.5
+ */
+public class Http2ClientPrefaceWriter extends ChannelHandlerAdapter {
+
+ private boolean prefaceWritten;
+
+ public Http2ClientPrefaceWriter() {
+ }
+
+ @Override
+ public void channelActive(ChannelHandlerContext ctx) throws Exception {
+ // The channel just became active - send the HTTP2 connection preface to the remote
+ // endpoint.
+ sendPreface(ctx);
+
+ super.channelActive(ctx);
+ }
+
+ @Override
+ public void handlerAdded(ChannelHandlerContext ctx) throws Exception {
+ // This handler was just added to the context. In case it was handled after
+ // the connection became active, send the HTTP2 connection preface now.
+ sendPreface(ctx);
+ }
+
+ /**
+ * Sends the HTTP2 connection preface to the remote endpoint, if not already sent.
+ */
+ private void sendPreface(final ChannelHandlerContext ctx) {
+ if (!prefaceWritten && ctx.channel().isActive()) {
+ prefaceWritten = true;
+ ctx.writeAndFlush(connectionPrefaceBuf()).addListener(new ChannelFutureListener() {
+ @Override
+ public void operationComplete(ChannelFuture future) throws Exception {
+ if (!future.isSuccess() && ctx.channel().isOpen()) {
+ // The write failed, close the connection.
+ ctx.close();
+ } else {
+ ctx.pipeline().remove(Http2ClientPrefaceWriter.this);
+ }
+ }
+ });
+ }
+ }
+}
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/draft10/frame/encoder/Http2FrameEncoder.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/draft10/frame/encoder/Http2FrameEncoder.java
index 24e2015a910..38ab272076d 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/draft10/frame/encoder/Http2FrameEncoder.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/draft10/frame/encoder/Http2FrameEncoder.java
@@ -15,10 +15,7 @@
package io.netty.handler.codec.http2.draft10.frame.encoder;
-import static io.netty.handler.codec.http2.draft10.frame.Http2FrameCodecUtil.connectionPrefaceBuf;
import io.netty.buffer.ByteBuf;
-import io.netty.channel.ChannelFuture;
-import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.MessageToByteEncoder;
import io.netty.handler.codec.http2.draft10.frame.Http2Frame;
@@ -33,7 +30,6 @@
public class Http2FrameEncoder extends MessageToByteEncoder<Http2Frame> {
private final Http2FrameMarshaller frameMarshaller;
- private boolean prefaceWritten;
public Http2FrameEncoder() {
this(new Http2StandardFrameMarshaller());
@@ -46,22 +42,6 @@ public Http2FrameEncoder(Http2FrameMarshaller frameMarshaller) {
this.frameMarshaller = frameMarshaller;
}
- @Override
- public void channelActive(ChannelHandlerContext ctx) throws Exception {
- // The channel just became active - send the HTTP2 connection preface to the remote
- // endpoint.
- sendPreface(ctx);
-
- super.channelActive(ctx);
- }
-
- @Override
- public void handlerAdded(ChannelHandlerContext ctx) throws Exception {
- // This handler was just added to the context. In case it was handled after
- // the connection became active, send the HTTP2 connection preface now.
- sendPreface(ctx);
- }
-
@Override
protected void encode(ChannelHandlerContext ctx, Http2Frame frame, ByteBuf out)
throws Exception {
@@ -71,22 +51,4 @@ protected void encode(ChannelHandlerContext ctx, Http2Frame frame, ByteBuf out)
ctx.fireExceptionCaught(t);
}
}
-
- /**
- * Sends the HTTP2 connection preface to the remote endpoint, if not already sent.
- */
- private void sendPreface(final ChannelHandlerContext ctx) {
- if (!prefaceWritten && ctx.channel().isActive()) {
- prefaceWritten = true;
- ctx.writeAndFlush(connectionPrefaceBuf()).addListener(new ChannelFutureListener() {
- @Override
- public void operationComplete(ChannelFuture future) throws Exception {
- if (!future.isSuccess() && ctx.channel().isOpen()) {
- // The write failed, close the connection.
- ctx.close();
- }
- }
- });
- }
- }
}
diff --git a/example/src/main/java/io/netty/example/http2/client/Http2ClientInitializer.java b/example/src/main/java/io/netty/example/http2/client/Http2ClientInitializer.java
index 5efee94a995..9b515a593e1 100644
--- a/example/src/main/java/io/netty/example/http2/client/Http2ClientInitializer.java
+++ b/example/src/main/java/io/netty/example/http2/client/Http2ClientInitializer.java
@@ -15,7 +15,6 @@
*/
package io.netty.example.http2.client;
-import static io.netty.util.internal.logging.InternalLogLevel.INFO;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.SimpleChannelInboundHandler;
@@ -24,11 +23,14 @@
import io.netty.handler.codec.http2.draft10.connection.Http2ConnectionHandler;
import io.netty.handler.codec.http2.draft10.frame.Http2DataFrame;
import io.netty.handler.codec.http2.draft10.frame.Http2FrameCodec;
+import io.netty.handler.codec.http2.draft10.frame.encoder.Http2ClientPrefaceWriter;
import io.netty.handler.ssl.SslHandler;
import org.eclipse.jetty.npn.NextProtoNego;
import javax.net.ssl.SSLEngine;
+import static io.netty.util.internal.logging.InternalLogLevel.*;
+
/**
* Configures the client pipeline to support HTTP/2 frames.
*/
@@ -50,6 +52,7 @@ public void initChannel(SocketChannel ch) throws Exception {
ChannelPipeline pipeline = ch.pipeline();
pipeline.addLast("ssl", new SslHandler(engine));
+ pipeline.addLast("http2ClientPrefaceWriter", new Http2ClientPrefaceWriter());
pipeline.addLast("http2FrameCodec", new Http2FrameCodec());
pipeline.addLast("http2FrameLogger", new Http2FrameLogger(INFO));
pipeline.addLast("http2ConnectionHandler", new Http2ConnectionHandler(false));
| null | train | train | 2014-04-17T21:11:23 | 2014-04-16T21:55:15Z | dragonsinth | val |
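The `Http2ServerPrefaceReader` added by the fix above has to match the 24-byte client connection preface (`PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n`) incrementally, because the preface may arrive split across several reads; Netty does this by comparing `ByteBuf` slices against a preface buffer and removing the handler once the buffer is exhausted. A minimal sketch of the same incremental-match idea on plain byte arrays (class and method names here are illustrative, not Netty's):

```java
import java.nio.charset.StandardCharsets;

public class PrefaceMatcher {
    // The 24-byte HTTP/2 client connection preface from the draft spec.
    private static final byte[] PREFACE =
            "PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n".getBytes(StandardCharsets.US_ASCII);

    private int matched; // how many preface bytes have been confirmed so far

    /**
     * Feeds one inbound chunk. Returns true once the whole preface has been
     * seen; throws if the stream diverges from the preface.
     */
    boolean feed(byte[] chunk) {
        for (byte b : chunk) {
            if (matched == PREFACE.length) {
                break; // preface done; remaining bytes belong to frames
            }
            if (b != PREFACE[matched]) {
                throw new IllegalStateException(
                        "Invalid HTTP/2 preface at byte " + matched);
            }
            matched++;
        }
        return matched == PREFACE.length;
    }
}
```

The real handler additionally releases fully-consumed buffers and passes any leftover bytes (the first frames) down the pipeline, which is why splitting the preface logic out of `Http2FrameDecoder` keeps the client pipeline preface-free.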
netty/netty/2410_2411 | netty/netty | netty/netty/2410 | netty/netty/2411 | [
"timestamp(timedelta=22.0, similarity=0.856703916914292)"
] | 4d279155f86d0ff32714d21ec46cb49abecff3a3 | 0a3f98d415ef81d568b91c1c8accd5bf56aad263 | [
"Sure a fix would be awesome. We love contributions :)\n\n> Am 18.04.2014 um 20:30 schrieb mkrueger92 notifications@github.com:\n> \n> Netty version: 4.0.18.Final\n> \n> Context:\n> I am using snappy to send a compressed TCP data stream of arbitrary data. The peer application has spit out a warning that the stream ... | [] | 2014-04-19T19:00:34Z | [] | Snappy - wrong chunk type for stream identifier | Netty version: 4.0.18.Final
Context:
I am using snappy to send a compressed TCP data stream of arbitrary data. The peer application spat out a warning that the stream started with the wrong chunk type (0x80 instead of 0xff).
In my case this is not such a big problem because 0x80 frames are within the skippable chunk range. But the warning points to a potential problem for other cases.
Please see https://code.google.com/p/snappy/source/browse/trunk/framing_format.txt#68 for more information.
If wanted, I could look into the Netty code and provide a patch. Just let me know.
| [
"codec/src/main/java/io/netty/handler/codec/compression/SnappyFramedDecoder.java",
"codec/src/main/java/io/netty/handler/codec/compression/SnappyFramedEncoder.java"
] | [
"codec/src/main/java/io/netty/handler/codec/compression/SnappyFramedDecoder.java",
"codec/src/main/java/io/netty/handler/codec/compression/SnappyFramedEncoder.java"
] | [
"codec/src/test/java/io/netty/handler/codec/compression/SnappyFramedDecoderTest.java",
"codec/src/test/java/io/netty/handler/codec/compression/SnappyFramedEncoderTest.java"
] | diff --git a/codec/src/main/java/io/netty/handler/codec/compression/SnappyFramedDecoder.java b/codec/src/main/java/io/netty/handler/codec/compression/SnappyFramedDecoder.java
index a2db4488c62..f3dee9a8a02 100644
--- a/codec/src/main/java/io/netty/handler/codec/compression/SnappyFramedDecoder.java
+++ b/codec/src/main/java/io/netty/handler/codec/compression/SnappyFramedDecoder.java
@@ -198,7 +198,7 @@ static ChunkType mapChunkType(byte type) {
return ChunkType.COMPRESSED_DATA;
} else if (type == 1) {
return ChunkType.UNCOMPRESSED_DATA;
- } else if (type == -0x80) {
+ } else if (type == (byte) 0xff) {
return ChunkType.STREAM_IDENTIFIER;
} else if ((type & 0x80) == 0x80) {
return ChunkType.RESERVED_SKIPPABLE;
diff --git a/codec/src/main/java/io/netty/handler/codec/compression/SnappyFramedEncoder.java b/codec/src/main/java/io/netty/handler/codec/compression/SnappyFramedEncoder.java
index b22a026c2c3..1cc5301ab76 100644
--- a/codec/src/main/java/io/netty/handler/codec/compression/SnappyFramedEncoder.java
+++ b/codec/src/main/java/io/netty/handler/codec/compression/SnappyFramedEncoder.java
@@ -40,7 +40,7 @@ public class SnappyFramedEncoder extends MessageToByteEncoder<ByteBuf> {
* type 0xff, a length field of 0x6, and 'sNaPpY' in ASCII.
*/
private static final byte[] STREAM_START = {
- -0x80, 0x06, 0x00, 0x00, 0x73, 0x4e, 0x61, 0x50, 0x70, 0x59
+ (byte) 0xff, 0x06, 0x00, 0x00, 0x73, 0x4e, 0x61, 0x50, 0x70, 0x59
};
private final Snappy snappy = new Snappy();
| diff --git a/codec/src/test/java/io/netty/handler/codec/compression/SnappyFramedDecoderTest.java b/codec/src/test/java/io/netty/handler/codec/compression/SnappyFramedDecoderTest.java
index 994369b4a17..5c8355fbd5a 100644
--- a/codec/src/test/java/io/netty/handler/codec/compression/SnappyFramedDecoderTest.java
+++ b/codec/src/test/java/io/netty/handler/codec/compression/SnappyFramedDecoderTest.java
@@ -53,7 +53,7 @@ public void testInvalidStreamIdentifierLength() throws Exception {
@Test(expected = DecompressionException.class)
public void testInvalidStreamIdentifierValue() throws Exception {
ByteBuf in = Unpooled.wrappedBuffer(new byte[] {
- -0x80, 0x06, 0x00, 0x00, 's', 'n', 'e', 't', 't', 'y'
+ (byte) 0xff, 0x06, 0x00, 0x00, 's', 'n', 'e', 't', 't', 'y'
});
channel.writeInbound(in);
@@ -89,7 +89,7 @@ public void testCompressedDataBeforeStreamIdentifier() throws Exception {
@Test
public void testReservedSkippableSkipsInput() throws Exception {
ByteBuf in = Unpooled.wrappedBuffer(new byte[] {
- -0x80, 0x06, 0x00, 0x00, 0x73, 0x4e, 0x61, 0x50, 0x70, 0x59,
+ (byte) 0xff, 0x06, 0x00, 0x00, 0x73, 0x4e, 0x61, 0x50, 0x70, 0x59,
-0x7f, 0x05, 0x00, 0x00, 'n', 'e', 't', 't', 'y'
});
@@ -102,7 +102,7 @@ public void testReservedSkippableSkipsInput() throws Exception {
@Test
public void testUncompressedDataAppendsToOut() throws Exception {
ByteBuf in = Unpooled.wrappedBuffer(new byte[] {
- -0x80, 0x06, 0x00, 0x00, 0x73, 0x4e, 0x61, 0x50, 0x70, 0x59,
+ (byte) 0xff, 0x06, 0x00, 0x00, 0x73, 0x4e, 0x61, 0x50, 0x70, 0x59,
0x01, 0x09, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 'n', 'e', 't', 't', 'y'
});
@@ -115,7 +115,7 @@ public void testUncompressedDataAppendsToOut() throws Exception {
@Test
public void testCompressedDataDecodesAndAppendsToOut() throws Exception {
ByteBuf in = Unpooled.wrappedBuffer(new byte[] {
- -0x80, 0x06, 0x00, 0x00, 0x73, 0x4e, 0x61, 0x50, 0x70, 0x59,
+ (byte) 0xff, 0x06, 0x00, 0x00, 0x73, 0x4e, 0x61, 0x50, 0x70, 0x59,
0x00, 0x0B, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x05, // preamble length
0x04 << 2, // literal tag + length
@@ -137,7 +137,7 @@ public void testInvalidChecksumThrowsException() throws Exception {
// checksum here is presented as 0
ByteBuf in = Unpooled.wrappedBuffer(new byte[] {
- -0x80, 0x06, 0x00, 0x00, 0x73, 0x4e, 0x61, 0x50, 0x70, 0x59,
+ (byte) 0xff, 0x06, 0x00, 0x00, 0x73, 0x4e, 0x61, 0x50, 0x70, 0x59,
0x01, 0x09, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 'n', 'e', 't', 't', 'y'
});
@@ -150,7 +150,7 @@ public void testInvalidChecksumDoesNotThrowException() throws Exception {
// checksum here is presented as a282986f (little endian)
ByteBuf in = Unpooled.wrappedBuffer(new byte[] {
- -0x80, 0x06, 0x00, 0x00, 0x73, 0x4e, 0x61, 0x50, 0x70, 0x59,
+ (byte) 0xff, 0x06, 0x00, 0x00, 0x73, 0x4e, 0x61, 0x50, 0x70, 0x59,
0x01, 0x09, 0x00, 0x00, 0x6f, -0x68, -0x7e, -0x5e, 'n', 'e', 't', 't', 'y'
});
diff --git a/codec/src/test/java/io/netty/handler/codec/compression/SnappyFramedEncoderTest.java b/codec/src/test/java/io/netty/handler/codec/compression/SnappyFramedEncoderTest.java
index 25e2655a272..9b1d606e143 100644
--- a/codec/src/test/java/io/netty/handler/codec/compression/SnappyFramedEncoderTest.java
+++ b/codec/src/test/java/io/netty/handler/codec/compression/SnappyFramedEncoderTest.java
@@ -43,7 +43,7 @@ public void testSmallAmountOfDataIsUncompressed() throws Exception {
assertTrue(channel.finish());
ByteBuf expected = Unpooled.wrappedBuffer(new byte[] {
- -0x80, 0x06, 0x00, 0x00, 0x73, 0x4e, 0x61, 0x50, 0x70, 0x59,
+ (byte) 0xff, 0x06, 0x00, 0x00, 0x73, 0x4e, 0x61, 0x50, 0x70, 0x59,
0x01, 0x09, 0x00, 0x00, 0x6f, -0x68, -0x7e, -0x5e, 'n', 'e', 't', 't', 'y'
});
@@ -61,7 +61,7 @@ public void testLargeAmountOfDataIsCompressed() throws Exception {
assertTrue(channel.finish());
ByteBuf expected = Unpooled.wrappedBuffer(new byte[] {
- -0x80, 0x06, 0x00, 0x00, 0x73, 0x4e, 0x61, 0x50, 0x70, 0x59,
+ (byte) 0xff, 0x06, 0x00, 0x00, 0x73, 0x4e, 0x61, 0x50, 0x70, 0x59,
0x00, 0x0E, 0x00, 0x00, 0x3b, 0x36, -0x7f, 0x37,
0x14, 0x10,
'n', 'e', 't', 't', 'y',
@@ -83,7 +83,7 @@ public void testStreamStartIsOnlyWrittenOnce() throws Exception {
assertTrue(channel.finish());
ByteBuf expected = Unpooled.wrappedBuffer(new byte[] {
- -0x80, 0x06, 0x00, 0x00, 0x73, 0x4e, 0x61, 0x50, 0x70, 0x59,
+ (byte) 0xff, 0x06, 0x00, 0x00, 0x73, 0x4e, 0x61, 0x50, 0x70, 0x59,
0x01, 0x09, 0x00, 0x00, 0x6f, -0x68, -0x7e, -0x5e, 'n', 'e', 't', 't', 'y',
0x01, 0x09, 0x00, 0x00, 0x6f, -0x68, -0x7e, -0x5e, 'n', 'e', 't', 't', 'y',
});
| train | train | 2014-04-18T20:41:09 | 2014-04-18T18:30:54Z | mkrueger92 | val |
netty/netty/2400_2413 | netty/netty | netty/netty/2400 | netty/netty/2413 | [
"timestamp(timedelta=27.0, similarity=0.8576720900504698)"
] | 302d11672884ecdb784ac0a521144d27c3f096c0 | 54e2c4f28a73266344ce04d20a02948eff940ea0 | [
"@Teots can I see your \"dirty reflection hack\" ;) ?\n",
"Sure you can. Here we go:\n\n```\n// channel is passed into the function and is of type Channel\nif (channel instanceof LocalChannel) {\n Class<?> clazz = channel.getClass();\n Field stateField = clazz.getDeclaredField(\"state\");\n stateField.setAc... | [
"'Check if both peer and parent are non-null because this channel was created by a ..'\n\nI would always prefer 'because' to 'as' because it's less confusing to a reader.\n"
] | 2014-04-20T17:51:47Z | [
"defect"
] | ClosedChannelException in doRegister() of LocalChannel | Hi,
we are trying to move a LocalChannel from one EventLoopGroup to another, using the following (simplified) code. This code is called from a thread of the EventLoopGroup currently assigned to the LocalChannel. Auto read is disabled on the channel. We are using Netty 4.0.18.Final. Do you need any further information?
```
channel.deregister().addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) throws Exception {
try {
Channel channel = future.channel();
EventLoopGroup eventLoopGroup;
if (channel instanceof LocalChannel) {
eventLoopGroup = Helper.localInputEventLoopGroup;
} else {
eventLoopGroup = Helper.networkInputEventLoopGroup;
}
// Change event loop group.
eventLoopGroup.register(channel).sync();
// Enable auto read again.
channel.config().setAutoRead(true);
} catch (Throwable t) {
LOG.error(t.getLocalizedMessage(), t);
}
}
}).sync();
```
We always get a ClosedChannelException when the register() method is called. The reason is that the LocalChannel (and its peer) is closed in the deregister() method. Is there any clean way to move a LocalChannel to another EventLoopGroup without closing it and its peer (i.e. without resorting to the very dirty reflection hack we are currently using)?
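To show the failure mode in isolation, here is a minimal toy model in plain Java. These are not Netty's actual classes; all names are invented for illustration. It contrasts the buggy behavior (deregister also closes the channel, so a later register fails) with the fixed behavior (deregister only detaches the event loop, matching the patch that removed the close from doDeregister()).

```java
import java.nio.channels.ClosedChannelException;

// Toy stand-in for a channel; NOT Netty's API.
class ToyChannel {
    private boolean open = true;
    private String eventLoop;

    ToyChannel(String eventLoop) { this.eventLoop = eventLoop; }

    // Buggy behavior: deregistering also closes the channel.
    void deregisterAndClose() { eventLoop = null; open = false; }

    // Fixed behavior: deregistering only detaches the event loop.
    void deregister() { eventLoop = null; }

    void register(String newEventLoop) throws ClosedChannelException {
        if (!open) {
            throw new ClosedChannelException();
        }
        eventLoop = newEventLoop;
    }

    boolean isOpen() { return open; }
    String eventLoop() { return eventLoop; }
}

public class ReRegisterDemo {
    public static void main(String[] args) throws Exception {
        // Old behavior: re-registering after deregister fails.
        ToyChannel buggy = new ToyChannel("group1");
        buggy.deregisterAndClose();
        try {
            buggy.register("group2");
        } catch (ClosedChannelException e) {
            System.out.println("ClosedChannelException"); // what this issue observed
        }

        // Fixed behavior: the channel survives the move between groups.
        ToyChannel fixed = new ToyChannel("group1");
        fixed.deregister();
        fixed.register("group2");
        System.out.println(fixed.isOpen() + " " + fixed.eventLoop());
    }
}
```

The toy model captures only the ordering problem: closing inside deregister makes any deregister-then-register hand-off impossible, which is why the fix moves the close out of doDeregister().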
| [
"transport/src/main/java/io/netty/channel/local/LocalChannel.java"
] | [
"transport/src/main/java/io/netty/channel/local/LocalChannel.java"
] | [
"transport/src/test/java/io/netty/channel/local/LocalChannelTest.java"
] | diff --git a/transport/src/main/java/io/netty/channel/local/LocalChannel.java b/transport/src/main/java/io/netty/channel/local/LocalChannel.java
index 24eb2d78510..3aba344be48 100644
--- a/transport/src/main/java/io/netty/channel/local/LocalChannel.java
+++ b/transport/src/main/java/io/netty/channel/local/LocalChannel.java
@@ -153,7 +153,12 @@ protected SocketAddress remoteAddress0() {
@Override
protected void doRegister() throws Exception {
- if (peer != null) {
+ // Check if both peer and parent are non-null because this channel was created by a LocalServerChannel.
+ // This is needed as a peer may not be null also if a LocalChannel was connected before and
+ // deregistered / registered later again.
+ //
+ // See https://github.com/netty/netty/issues/2400
+ if (peer != null && parent() != null) {
// Store the peer in a local variable as it may be set to null if doClose() is called.
// Because of this we also set registerInProgress to true as we check for this in doClose() and make sure
// we delay the fireChannelInactive() to be fired after the fireChannelActive() and so keep the correct
@@ -235,9 +240,7 @@ public void run() {
@Override
protected void doDeregister() throws Exception {
- if (isOpen()) {
- unsafe().close(unsafe().voidPromise());
- }
+ // Just remove the shutdownHook as this Channel may be closed later or registered to another EventLoop
((SingleThreadEventExecutor) eventLoop()).removeShutdownHook(shutdownHook);
}
| diff --git a/transport/src/test/java/io/netty/channel/local/LocalChannelTest.java b/transport/src/test/java/io/netty/channel/local/LocalChannelTest.java
index adcec5f99d7..d72d9d31974 100644
--- a/transport/src/test/java/io/netty/channel/local/LocalChannelTest.java
+++ b/transport/src/test/java/io/netty/channel/local/LocalChannelTest.java
@@ -29,7 +29,6 @@
import io.netty.util.concurrent.EventExecutor;
import io.netty.util.internal.logging.InternalLogger;
import io.netty.util.internal.logging.InternalLoggerFactory;
-import org.junit.Assert;
import org.junit.Test;
import java.nio.channels.ClosedChannelException;
@@ -248,6 +247,39 @@ protected void initChannel(Channel ch) throws Exception {
}
}
+ @Test
+ public void testReRegister() {
+ EventLoopGroup group1 = new LocalEventLoopGroup();
+ EventLoopGroup group2 = new LocalEventLoopGroup();
+ LocalAddress addr = new LocalAddress(LOCAL_ADDR_ID);
+ Bootstrap cb = new Bootstrap();
+ ServerBootstrap sb = new ServerBootstrap();
+
+ cb.group(group1)
+ .channel(LocalChannel.class)
+ .handler(new TestHandler());
+
+ sb.group(group2)
+ .channel(LocalServerChannel.class)
+ .childHandler(new ChannelInitializer<LocalChannel>() {
+ @Override
+ public void initChannel(LocalChannel ch) throws Exception {
+ ch.pipeline().addLast(new TestHandler());
+ }
+ });
+
+ // Start server
+ final Channel sc = sb.bind(addr).syncUninterruptibly().channel();
+
+ // Connect to the server
+ final Channel cc = cb.connect(addr).syncUninterruptibly().channel();
+
+ cc.deregister().syncUninterruptibly();
+ // Change event loop group.
+ group2.register(cc).syncUninterruptibly();
+ cc.close().syncUninterruptibly();
+ sc.close().syncUninterruptibly();
+ }
static class TestHandler extends ChannelInboundHandlerAdapter {
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
| train | train | 2014-04-21T09:56:57 | 2014-04-16T14:17:38Z | Teots | val |
netty/netty/2476_2480 | netty/netty | netty/netty/2476 | netty/netty/2480 | [
"timestamp(timedelta=7073.0, similarity=0.8421856332718755)"
] | c62edc87a52e4dee7d02499a3ee5904ca59576c8 | 51adc8172975532cfbde4b52ff128ebf7373cc35 | [
"Forgot to mention that I used nightly build from oss.sonatype.org: netty-example-5.0.0.Alpha2-20140508.064249-382.jar and netty-all-5.0.0.Alpha2-20140508.064500-374.jar\n",
"@nmittler could you check ?\n",
"Yup, will take a look tomorrow. Should be easy enough to fix. Thanks for catching this @tatsuhiro-t \n... | [] | 2014-05-09T14:51:31Z | [] | Strange behavior of HTTP/2 example server | I ran nghttp HTTP/2 client (https://nghttp2.org) to HTTP/2 server (io.netty.example.http2.server). I noticed following strange behavior of the server:
1. Server rejects stream ID = 1
   Stream ID = 1 initiated by the client must be accepted according to the HTTP/2 specification.
   The error message from the server:
   [Incorrect next stream ID requested: 1]
2. Server sends SETTINGS_ENABLE_PUSH = 1
   The HTTP/2 specification says that enabling server push _to_ a server is a protocol error.
   The server must not send SETTINGS_ENABLE_PUSH = 1.
3. Server sends priority field with weight 17.
   I don't know whether this is really a bug, but it may be that Netty forgot to decrease
   the value by 1, since the default weight is 16. This adjustment is required because the weight range is [1, 256] but
   the weight field in a frame is only 8 bits.
   A bit off topic: in my opinion, priority information should be sent solely by the client, with the server
   simply obeying it. There is no point in sending or echoing back priority information from the server.
   But this is not described in the HTTP/2 specification, so including priority in
   response HEADERS is not a violation of the protocol at the moment.
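The off-by-one in item 3 comes down to a simple mapping, which the fix implements on the wire as `weight - 1` when writing and `readUnsignedByte() + 1` when reading. A minimal standalone sketch of that mapping (class and method names here are illustrative, not part of Netty's API):

```java
public class WeightCodec {
    // HTTP/2 weights range over [1, 256] but are carried in a single
    // unsigned byte [0, 255], so the wire value is weight - 1.
    static int encodeWeight(int weight) {
        if (weight < 1 || weight > 256) {
            throw new IllegalArgumentException("Invalid weight: " + weight);
        }
        return (weight - 1) & 0xFF; // value written to the frame
    }

    static int decodeWeight(int wireByte) {
        return (wireByte & 0xFF) + 1; // value reported to the application
    }

    public static void main(String[] args) {
        // The default weight 16 must appear as 15 on the wire,
        // not be echoed back as 17.
        System.out.println(encodeWeight(16));  // 15
        System.out.println(decodeWeight(15));  // 16
        System.out.println(encodeWeight(256)); // 255
    }
}
```

Without the `- 1` on the encode side, a default weight of 16 read back through `+ 1` shows up as 17, which is exactly the symptom reported above.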
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/AbstractHttp2ConnectionHandler.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameReader.java",
"codec-http2/src/main/java/io/netty/handler... | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/AbstractHttp2ConnectionHandler.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameReader.java",
"codec-http2/src/main/java/io/netty/handler... | [
"codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionTest.java",
"codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2FrameIOTest.java",
"codec-http2/src/test/java/io/netty/handler/codec/http2/DelegatingHttp2ConnectionHandlerTest.java",
"codec-http2/src/test/java/io/net... | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/AbstractHttp2ConnectionHandler.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/AbstractHttp2ConnectionHandler.java
index fc05e1299fb..8b54bd2a38e 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/AbstractHttp2ConnectionHandler.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/AbstractHttp2ConnectionHandler.java
@@ -75,8 +75,7 @@ protected AbstractHttp2ConnectionHandler(boolean server, boolean allowCompressio
}
protected AbstractHttp2ConnectionHandler(Http2Connection connection) {
- this(connection, new DefaultHttp2FrameReader(connection.isServer()),
- new DefaultHttp2FrameWriter(connection.isServer()),
+ this(connection, new DefaultHttp2FrameReader(), new DefaultHttp2FrameWriter(),
new DefaultHttp2InboundFlowController(), new DefaultHttp2OutboundFlowController());
}
@@ -165,9 +164,12 @@ public final Http2Settings settings() {
Http2Settings settings = new Http2Settings();
settings.allowCompressedData(connection.local().allowCompressedData());
settings.initialWindowSize(inboundFlow.initialInboundWindowSize());
- settings.pushEnabled(connection.local().allowPushTo());
settings.maxConcurrentStreams(connection.remote().maxStreams());
settings.maxHeaderTableSize(frameReader.maxHeaderTableSize());
+ if (!connection.isServer()) {
+ // Only set the pushEnabled flag if this is a client endpoint.
+ settings.pushEnabled(connection.local().allowPushTo());
+ }
return settings;
}
@@ -295,14 +297,17 @@ protected ChannelFuture writeSettings(ChannelHandlerContext ctx, ChannelPromise
throw protocolError("Sending settings after connection going away.");
}
- if (settings.hasAllowCompressedData()) {
- connection.local().allowCompressedData(settings.allowCompressedData());
- }
-
if (settings.hasPushEnabled()) {
+ if (connection.isServer()) {
+ throw protocolError("Server sending SETTINGS frame with ENABLE_PUSH specified");
+ }
connection.local().allowPushTo(settings.pushEnabled());
}
+ if (settings.hasAllowCompressedData()) {
+ connection.local().allowCompressedData(settings.allowCompressedData());
+ }
+
if (settings.hasMaxConcurrentStreams()) {
connection.remote().maxStreams(settings.maxConcurrentStreams());
}
@@ -358,6 +363,9 @@ protected ChannelFuture writePushPromise(ChannelHandlerContext ctx, ChannelPromi
protected ChannelFuture writeAltSvc(ChannelHandlerContext ctx, ChannelPromise promise,
int streamId, long maxAge, int port, ByteBuf protocolId, String host, String origin)
throws Http2Exception {
+ if (!connection.isServer()) {
+ throw protocolError("Client sending ALT_SVC frame");
+ }
return frameWriter.writeAltSvc(ctx, promise, streamId, maxAge, port, protocolId, host,
origin);
}
@@ -658,6 +666,13 @@ public void onSettingsAckRead(ChannelHandlerContext ctx) throws Http2Exception {
@Override
public void onSettingsRead(ChannelHandlerContext ctx, Http2Settings settings)
throws Http2Exception {
+ if (settings.hasPushEnabled()) {
+ if (!connection.isServer()) {
+ throw protocolError("Client received SETTINGS frame with ENABLE_PUSH specified");
+ }
+ connection.remote().allowPushTo(settings.pushEnabled());
+ }
+
if (settings.hasAllowCompressedData()) {
connection.remote().allowCompressedData(settings.allowCompressedData());
}
@@ -666,10 +681,6 @@ public void onSettingsRead(ChannelHandlerContext ctx, Http2Settings settings)
connection.local().maxStreams(settings.maxConcurrentStreams());
}
- if (settings.hasPushEnabled()) {
- connection.remote().allowPushTo(settings.pushEnabled());
- }
-
if (settings.hasMaxHeaderTableSize()) {
frameWriter.maxHeaderTableSize(settings.maxHeaderTableSize());
}
@@ -759,6 +770,9 @@ public void onWindowUpdateRead(ChannelHandlerContext ctx, int streamId,
@Override
public void onAltSvcRead(ChannelHandlerContext ctx, int streamId, long maxAge, int port,
ByteBuf protocolId, String host, String origin) throws Http2Exception {
+ if (connection.isServer()) {
+ throw protocolError("Server received ALT_SVC frame");
+ }
AbstractHttp2ConnectionHandler.this.onAltSvcRead(ctx, streamId, maxAge, port,
protocolId, host, origin);
}
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
index db7656d0d04..03ac306a22a 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
@@ -41,17 +41,15 @@ public class DefaultHttp2Connection implements Http2Connection {
private final DefaultEndpoint remoteEndpoint;
private boolean goAwaySent;
private boolean goAwayReceived;
- private boolean server;
public DefaultHttp2Connection(boolean server, boolean allowCompressedData) {
- this.server = server;
localEndpoint = new DefaultEndpoint(server, allowCompressedData);
remoteEndpoint = new DefaultEndpoint(!server, false);
}
@Override
public boolean isServer() {
- return server;
+ return localEndpoint.isServer();
}
@Override
@@ -216,19 +214,26 @@ public boolean localSideOpen() {
* Simple endpoint implementation.
*/
private final class DefaultEndpoint implements Endpoint {
+ private final boolean server;
private int nextStreamId;
private int lastStreamCreated;
- private int maxStreams = Integer.MAX_VALUE;
- private boolean pushToAllowed = true;
+ private int maxStreams;
+ private boolean pushToAllowed;
private boolean allowCompressedData;
- DefaultEndpoint(boolean serverEndpoint, boolean allowCompressedData) {
- // Determine the starting stream ID for this endpoint. Zero is reserved for the
- // connection and 1 is reserved for responding to an upgrade from HTTP 1.1.
- // Client-initiated streams use odd identifiers and server-initiated streams use
- // even.
- nextStreamId = serverEndpoint ? 2 : 3;
+ DefaultEndpoint(boolean server, boolean allowCompressedData) {
this.allowCompressedData = allowCompressedData;
+ this.server = server;
+
+ // Determine the starting stream ID for this endpoint. Client-initiated streams
+ // are odd and server-initiated streams are even. Zero is reserved for the
+ // connection. Stream 1 is reserved client-initiated stream for responding to an
+ // upgrade from HTTP 1.1.
+ nextStreamId = server ? 2 : 1;
+
+ // Push is disallowed by default for servers and allowed for clients.
+ pushToAllowed = !server;
+ maxStreams = Integer.MAX_VALUE;
}
@Override
@@ -244,7 +249,7 @@ public DefaultStream createStream(int streamId, boolean halfClosed) throws Http2
}
// Update the next and last stream IDs.
- nextStreamId += 2;
+ nextStreamId = streamId + 2;
lastStreamCreated = streamId;
// Register the stream and mark it as active.
@@ -253,6 +258,11 @@ public DefaultStream createStream(int streamId, boolean halfClosed) throws Http2
return stream;
}
+ @Override
+ public boolean isServer() {
+ return server;
+ }
+
@Override
public DefaultStream reservePushStream(int streamId, Http2Stream parent)
throws Http2Exception {
@@ -271,7 +281,7 @@ public DefaultStream reservePushStream(int streamId, Http2Stream parent)
stream.state = isLocal() ? State.RESERVED_LOCAL : State.RESERVED_REMOTE;
// Update the next and last stream IDs.
- nextStreamId += 2;
+ nextStreamId = streamId + 2;
lastStreamCreated = streamId;
// Register the stream.
@@ -281,6 +291,9 @@ public DefaultStream reservePushStream(int streamId, Http2Stream parent)
@Override
public void allowPushTo(boolean allow) {
+ if (allow && server) {
+ throw new IllegalArgumentException("Servers do not allow push");
+ }
pushToAllowed = allow;
}
@@ -323,14 +336,24 @@ private void checkNewStreamAllowed(int streamId) throws Http2Exception {
if (isGoAway()) {
throw protocolError("Cannot create a stream since the connection is going away");
}
+ verifyStreamId(streamId);
+ if (streamMap.size() + 1 > maxStreams) {
+ throw protocolError("Maximum streams exceeded for this endpoint.");
+ }
+ }
+
+ private void verifyStreamId(int streamId) throws Http2Exception {
if (nextStreamId < 0) {
throw protocolError("No more streams can be created on this connection");
}
- if (streamId != nextStreamId) {
- throw protocolError("Incorrect next stream ID requested: %d", streamId);
+ if (streamId < nextStreamId) {
+ throw protocolError("Request stream %d is behind the next expected stream %d",
+ streamId, nextStreamId);
}
- if (streamMap.size() + 1 > maxStreams) {
- throw protocolError("Maximum streams exceeded for this endpoint.");
+ boolean even = (streamId & 1) == 0;
+ if (server != even) {
+ throw protocolError("Request stream %d is not correct for %s connection", streamId,
+ server ? "server" : "client");
}
}
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameReader.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameReader.java
index d9fd082b4d2..cf67a14ebc3 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameReader.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameReader.java
@@ -44,7 +44,6 @@ private enum State {
ERROR
}
- private final boolean server;
private final Http2HeadersDecoder headersDecoder;
private State state = State.FRAME_HEADER;
@@ -54,12 +53,11 @@ private enum State {
private int payloadLength;
private HeadersContinuation headersContinuation;
- public DefaultHttp2FrameReader(boolean server) {
- this(server, new DefaultHttp2HeadersDecoder());
+ public DefaultHttp2FrameReader() {
+ this(new DefaultHttp2HeadersDecoder());
}
- public DefaultHttp2FrameReader(boolean server, Http2HeadersDecoder headersDecoder) {
- this.server = server;
+ public DefaultHttp2FrameReader(Http2HeadersDecoder headersDecoder) {
this.headersDecoder = headersDecoder;
}
@@ -373,10 +371,6 @@ private void verifyAltSvcFrame() throws Http2Exception {
verifyStreamOrConnectionId(streamId, "Stream ID");
verifyPayloadLength(payloadLength);
- if (server) {
- throw protocolError("ALT_SVC frames must not be received by servers");
- }
-
if (payloadLength < 8) {
throw protocolError("Frame length too small." + payloadLength);
}
@@ -420,7 +414,7 @@ private void readHeadersFrame(final ChannelHandlerContext ctx, ByteBuf payload,
long word1 = payload.readUnsignedInt();
final boolean exclusive = (word1 & 0x80000000L) > 0;
final int streamDependency = (int) (word1 & 0x7FFFFFFFL);
- final short headersWeight = payload.readUnsignedByte();
+ final short weight = (short) (payload.readUnsignedByte() + 1);
final ByteBuf fragment = payload.readSlice(payload.readableBytes() - padding);
// Create a handler that invokes the observer when the header block is complete.
@@ -437,7 +431,7 @@ public void processFragment(boolean endOfHeaders, ByteBuf fragment, int padding,
if (endOfHeaders) {
Http2Headers headers = builder().buildHeaders();
observer.onHeadersRead(ctx, headersStreamId, headers, streamDependency,
- headersWeight, exclusive, padding, headersFlags.endOfStream(),
+ weight, exclusive, padding, headersFlags.endOfStream(),
headersFlags.endOfSegment());
close();
}
@@ -480,7 +474,7 @@ private void readPriorityFrame(ChannelHandlerContext ctx, ByteBuf payload,
long word1 = payload.readUnsignedInt();
boolean exclusive = (word1 & 0x80000000L) > 0;
int streamDependency = (int) (word1 & 0x7FFFFFFFL);
- short weight = payload.readUnsignedByte();
+ short weight = (short) (payload.readUnsignedByte() + 1);
observer.onPriorityRead(ctx, streamId, streamDependency, weight, exclusive);
}
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameWriter.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameWriter.java
index 2d4c03608ea..a26eefe44d4 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameWriter.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameWriter.java
@@ -21,6 +21,8 @@
import static io.netty.handler.codec.http2.Http2CodecUtil.MAX_UNSIGNED_BYTE;
import static io.netty.handler.codec.http2.Http2CodecUtil.MAX_UNSIGNED_INT;
import static io.netty.handler.codec.http2.Http2CodecUtil.MAX_UNSIGNED_SHORT;
+import static io.netty.handler.codec.http2.Http2CodecUtil.MAX_WEIGHT;
+import static io.netty.handler.codec.http2.Http2CodecUtil.MIN_WEIGHT;
import static io.netty.handler.codec.http2.Http2CodecUtil.PRIORITY_ENTRY_LENGTH;
import static io.netty.handler.codec.http2.Http2CodecUtil.SETTINGS_COMPRESS_DATA;
import static io.netty.handler.codec.http2.Http2CodecUtil.SETTINGS_ENABLE_PUSH;
@@ -43,15 +45,13 @@
*/
public class DefaultHttp2FrameWriter implements Http2FrameWriter {
- private final boolean server;
private final Http2HeadersEncoder headersEncoder;
- public DefaultHttp2FrameWriter(boolean server) {
- this(server, new DefaultHttp2HeadersEncoder());
+ public DefaultHttp2FrameWriter() {
+ this(new DefaultHttp2HeadersEncoder());
}
- public DefaultHttp2FrameWriter(boolean server, Http2HeadersEncoder headersEncoder) {
- this.server = server;
+ public DefaultHttp2FrameWriter(Http2HeadersEncoder headersEncoder) {
this.headersEncoder = headersEncoder;
}
@@ -133,7 +133,9 @@ public ChannelFuture writePriority(ChannelHandlerContext ctx, ChannelPromise pro
Http2Flags.EMPTY, streamId);
long word1 = exclusive ? (0x80000000L | streamDependency) : streamDependency;
writeUnsignedInt(word1, frame);
- frame.writeByte(weight);
+
+ // Adjust the weight so that it fits into a single byte on the wire.
+ frame.writeByte(weight - 1);
return ctx.writeAndFlush(frame, promise);
} catch (RuntimeException e) {
throw failAndThrow(promise, e);
@@ -189,6 +191,7 @@ public ChannelFuture writeSettings(ChannelHandlerContext ctx, ChannelPromise pro
writeUnsignedInt(settings.maxConcurrentStreams(), frame);
}
if (settings.hasPushEnabled()) {
+ // Only write the enable push flag from client endpoints.
frame.writeByte(SETTINGS_ENABLE_PUSH);
writeUnsignedInt(settings.pushEnabled() ? 1L : 0L, frame);
}
@@ -327,9 +330,6 @@ public ChannelFuture writeWindowUpdate(ChannelHandlerContext ctx, ChannelPromise
public ChannelFuture writeAltSvc(ChannelHandlerContext ctx, ChannelPromise promise,
int streamId, long maxAge, int port, ByteBuf protocolId, String host, String origin) {
try {
- if (!server) {
- throw new IllegalArgumentException("ALT_SVC frames must not be sent by clients");
- }
verifyStreamOrConnectionId(streamId, "Stream ID");
verifyMaxAge(maxAge);
verifyPort(port);
@@ -428,7 +428,9 @@ private ChannelFuture writeHeadersInternal(ChannelHandlerContext ctx, ChannelPro
if (hasPriority) {
long word1 = exclusive ? (0x80000000L | streamDependency) : streamDependency;
writeUnsignedInt(word1, firstFrame);
- firstFrame.writeByte(weight);
+
+ // Adjust the weight so that it fits into a single byte on the wire.
+ firstFrame.writeByte(weight - 1);
}
// Write the first fragment.
@@ -542,7 +544,7 @@ private static void verifyPayloadLength(int payloadLength) {
}
private static void verifyWeight(short weight) {
- if (weight < 1 || weight > MAX_UNSIGNED_BYTE) {
+ if (weight < MIN_WEIGHT || weight > MAX_WEIGHT) {
throw new IllegalArgumentException("Invalid weight: " + weight);
}
}
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2PriorityTree.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2PriorityTree.java
index 4aca82aa782..e834a82caab 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2PriorityTree.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2PriorityTree.java
@@ -16,8 +16,8 @@
package io.netty.handler.codec.http2;
import static io.netty.handler.codec.http2.Http2CodecUtil.DEFAULT_PRIORITY_WEIGHT;
-import static io.netty.handler.codec.http2.Http2CodecUtil.MAX_UNSIGNED_BYTE;
-import io.netty.handler.codec.http2.Http2PriorityTree.Priority;
+import static io.netty.handler.codec.http2.Http2CodecUtil.MAX_WEIGHT;
+import static io.netty.handler.codec.http2.Http2CodecUtil.MIN_WEIGHT;
import java.util.ArrayDeque;
import java.util.Collections;
@@ -62,7 +62,7 @@ public Priority<T> prioritize(int streamId, int parent, short weight, boolean ex
if (parent < 0) {
throw new IllegalArgumentException("Parent stream ID must be >= 0");
}
- if (weight < 1 || weight > MAX_UNSIGNED_BYTE) {
+ if (weight < MIN_WEIGHT || weight > MAX_WEIGHT) {
throw new IllegalArgumentException("Invalid weight: " + weight);
}
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java
index 48f0500a213..b40931878c5 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java
@@ -43,6 +43,8 @@ public final class Http2CodecUtil {
public static final int SETTING_ENTRY_LENGTH = 5;
public static final int PRIORITY_ENTRY_LENGTH = 5;
public static final int INT_FIELD_LENGTH = 4;
+ public static final short MAX_WEIGHT = (short) 256;
+ public static final short MIN_WEIGHT = (short) 1;
public static final short SETTINGS_HEADER_TABLE_SIZE = 1;
public static final short SETTINGS_ENABLE_PUSH = 2;
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Connection.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Connection.java
index c88687f791b..15d07543646 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Connection.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Connection.java
@@ -55,13 +55,19 @@ interface Endpoint {
*/
Http2Stream reservePushStream(int streamId, Http2Stream parent) throws Http2Exception;
+ /**
+ * Indicates whether or not this endpoint is the server-side of the connection.
+ */
+ boolean isServer();
+
/**
* Sets whether server push is allowed to this endpoint.
*/
void allowPushTo(boolean allow);
/**
- * Gets whether or not server push is allowed to this endpoint.
+ * Gets whether or not server push is allowed to this endpoint. This is always false
+ * for a server endpoint.
*/
boolean allowPushTo();
@@ -97,7 +103,7 @@ interface Endpoint {
}
/**
- * Indicates whether or not this endpoint is the server-side of the connection.
+ * Indicates whether or not the local endpoint for this connection is the server.
*/
boolean isServer();
diff --git a/example/src/main/java/io/netty/example/http2/client/Http2ClientConnectionHandler.java b/example/src/main/java/io/netty/example/http2/client/Http2ClientConnectionHandler.java
index dfd590db6aa..8f09b738daf 100644
--- a/example/src/main/java/io/netty/example/http2/client/Http2ClientConnectionHandler.java
+++ b/example/src/main/java/io/netty/example/http2/client/Http2ClientConnectionHandler.java
@@ -192,10 +192,10 @@ public BlockingQueue<ChannelFuture> queue() {
}
private static Http2FrameReader frameReader() {
- return new Http2InboundFrameLogger(new DefaultHttp2FrameReader(false), logger);
+ return new Http2InboundFrameLogger(new DefaultHttp2FrameReader(), logger);
}
private static Http2FrameWriter frameWriter() {
- return new Http2OutboundFrameLogger(new DefaultHttp2FrameWriter(false), logger);
+ return new Http2OutboundFrameLogger(new DefaultHttp2FrameWriter(), logger);
}
}
| diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionTest.java
index 21526af9474..e3344c2f9c3 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionTest.java
@@ -116,17 +116,27 @@ public void serverReservePushStreamShouldSucceed() throws Http2Exception {
@Test
public void clientReservePushStreamShouldSucceed() throws Http2Exception {
- Http2Stream stream = client.remote().createStream(2, true);
- Http2Stream pushStream = client.local().reservePushStream(3, stream);
- assertEquals(3, pushStream.id());
+ Http2Stream stream = server.remote().createStream(3, true);
+ Http2Stream pushStream = server.local().reservePushStream(4, stream);
+ assertEquals(4, pushStream.id());
assertEquals(State.RESERVED_LOCAL, pushStream.state());
- assertEquals(1, client.activeStreams().size());
- assertEquals(3, client.local().lastStreamCreated());
+ assertEquals(1, server.activeStreams().size());
+ assertEquals(4, server.local().lastStreamCreated());
+ }
+
+ @Test(expected = Http2Exception.class)
+ public void newStreamBehindExpectedShouldThrow() throws Http2Exception {
+ server.local().createStream(0, true);
+ }
+
+ @Test(expected = Http2Exception.class)
+ public void newStreamNotForServerShouldThrow() throws Http2Exception {
+ server.local().createStream(11, true);
}
@Test(expected = Http2Exception.class)
- public void createStreamWithInvalidIdShouldThrow() throws Http2Exception {
- server.remote().createStream(1, true);
+ public void newStreamNotForClientShouldThrow() throws Http2Exception {
+ client.local().createStream(10, true);
}
@Test(expected = Http2Exception.class)
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2FrameIOTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2FrameIOTest.java
index bd619529e0a..9912716a04a 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2FrameIOTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2FrameIOTest.java
@@ -62,8 +62,8 @@ public void setup() {
when(ctx.alloc()).thenReturn(alloc);
- reader = new DefaultHttp2FrameReader(false);
- writer = new DefaultHttp2FrameWriter(true);
+ reader = new DefaultHttp2FrameReader();
+ writer = new DefaultHttp2FrameWriter();
}
@Test
@@ -118,7 +118,7 @@ public void emptySettingsShouldRoundtrip() throws Exception {
}
@Test
- public void settingsShouldRoundtrip() throws Exception {
+ public void settingsShouldStripShouldRoundtrip() throws Exception {
Http2Settings settings = new Http2Settings();
settings.pushEnabled(true);
settings.maxHeaderTableSize(4096);
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DelegatingHttp2ConnectionHandlerTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DelegatingHttp2ConnectionHandlerTest.java
index a6549eedfb7..97bf53a4f3d 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DelegatingHttp2ConnectionHandlerTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DelegatingHttp2ConnectionHandlerTest.java
@@ -396,8 +396,17 @@ public void settingsReadWithAckShouldNotifyObserver() throws Exception {
verify(observer).onSettingsAckRead(eq(ctx));
}
+ @Test(expected = Http2Exception.class)
+ public void clientSettingsReadWithPushShouldThrow() throws Exception {
+ when(connection.isServer()).thenReturn(false);
+ Http2Settings settings = new Http2Settings();
+ settings.pushEnabled(true);
+ decode().onSettingsRead(ctx, settings);
+ }
+
@Test
public void settingsReadShouldSetValues() throws Exception {
+ when(connection.isServer()).thenReturn(true);
Http2Settings settings = new Http2Settings();
settings.pushEnabled(true);
settings.initialWindowSize(123);
@@ -416,12 +425,28 @@ public void settingsReadShouldSetValues() throws Exception {
}
@Test
- public void goAwayShoultShouldUpdateConnectionState() throws Exception {
+ public void goAwayShouldReadShouldUpdateConnectionState() throws Exception {
decode().onGoAwayRead(ctx, 1, 2, EMPTY_BUFFER);
verify(connection).goAwayReceived();
verify(observer).onGoAwayRead(eq(ctx), eq(1), eq(2L), eq(EMPTY_BUFFER));
}
+ @Test(expected = Http2Exception.class)
+ public void serverAltSvcReadShouldThrow() throws Exception {
+ when(connection.isServer()).thenReturn(true);
+ decode().onAltSvcRead(ctx, STREAM_ID, 1, 2, Unpooled.EMPTY_BUFFER, "www.example.com", null);
+ }
+
+ @Test
+ public void clientAltSvcReadShouldNotifyObserver() throws Exception {
+ String host = "www.host.com";
+ String origin = "www.origin.com";
+ when(connection.isServer()).thenReturn(false);
+ decode().onAltSvcRead(ctx, STREAM_ID, 1, 2, EMPTY_BUFFER, host, origin);
+ verify(observer).onAltSvcRead(eq(ctx), eq(STREAM_ID), eq(1L), eq(2), eq(EMPTY_BUFFER),
+ eq(host), eq(origin));
+ }
+
@Test(expected = Http2Exception.class)
public void dataWriteAfterGoAwayShouldFail() throws Exception {
when(connection.isGoAway()).thenReturn(true);
@@ -576,6 +601,23 @@ public void settingsWriteShouldUpdateSettings() throws Exception {
verify(reader).maxHeaderTableSize(2000);
}
+ @Test(expected = Http2Exception.class)
+ public void clientWriteAltSvcShouldThrow() throws Exception {
+ when(connection.isServer()).thenReturn(false);
+ handler.writeAltSvc(ctx, promise, STREAM_ID, 1, 2, Unpooled.EMPTY_BUFFER,
+ "www.example.com", null);
+ }
+
+ @Test
+ public void serverWriteAltSvcShouldSucceed() throws Exception {
+ String host = "www.host.com";
+ String origin = "www.origin.com";
+ when(connection.isServer()).thenReturn(true);
+ handler.writeAltSvc(ctx, promise, STREAM_ID, 1, 2, EMPTY_BUFFER, host, origin);
+ verify(writer).writeAltSvc(eq(ctx), eq(promise), eq(STREAM_ID), eq(1L), eq(2),
+ eq(EMPTY_BUFFER), eq(host), eq(origin));
+ }
+
private static ByteBuf dummyData() {
// The buffer is purposely 8 bytes so it will even work for a ping frame.
return Unpooled.wrappedBuffer("abcdefgh".getBytes(UTF_8));
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2FrameRoundtripTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2FrameRoundtripTest.java
index e5df13c08c3..3a7f261a4da 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2FrameRoundtripTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2FrameRoundtripTest.java
@@ -69,7 +69,7 @@ public void setup() throws Exception {
MockitoAnnotations.initMocks(this);
requestLatch = new CountDownLatch(1);
- frameWriter = new DefaultHttp2FrameWriter(false);
+ frameWriter = new DefaultHttp2FrameWriter();
dataCaptor = ArgumentCaptor.forClass(ByteBuf.class);
sb = new ServerBootstrap();
@@ -259,7 +259,7 @@ private final class FrameAdapter extends ByteToMessageDecoder {
FrameAdapter(Http2FrameObserver observer) {
this.observer = observer;
- reader = new DefaultHttp2FrameReader(observer != null);
+ reader = new DefaultHttp2FrameReader();
}
@Override
| train | train | 2014-05-09T15:43:00 | 2014-05-08T16:29:58Z | tatsuhiro-t | val |
netty/netty/2237_2494 | netty/netty | netty/netty/2237 | netty/netty/2494 | [
"timestamp(timedelta=42.0, similarity=0.9299720569086561)"
] | 856c89dd708fe13f5a8ed2d4fdff8af05580ca7e | bb47cdca801084979529b87361e4930b59e98b2b | [
"The following test demonstrates the problem. This is run with the netty 4.0.15.Final version of Netty.\n\n``` java\n\nimport io.netty.bootstrap.Bootstrap;\nimport io.netty.bootstrap.ServerBootstrap;\nimport io.netty.buffer.ByteBuf;\nimport io.netty.buffer.Unpooled;\nimport io.netty.channel.Channel;\nimport io.nett... | [] | 2014-05-13T12:21:50Z | [
"defect"
] | ChannelTrafficShapingHandler may corrupt inbound data stream | As can be seen from the code of channelRead in AbstractTrafficShapingHandler, inbound data may be passed to the next handler in a scheduled Runnable.
However, there is reason to believe that
inbound data that arrives later may be passed on before the Runnable is executed, and so the order of the inbound stream is not maintained.
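The race described above can be modeled with plain JDK executors. This is a minimal, hypothetical sketch (the class and message names are assumptions, not Netty code): a message over the traffic limit is handed to `schedule(...)` while a later message under the limit is forwarded immediately, so the later one overtakes it downstream.

``` java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ReorderSketch {
    // Delivers "A" via a delayed task (as the old traffic-shaping code did for
    // over-limit reads) and "B" directly; returns the order seen downstream.
    static List<String> deliveryOrder() {
        ScheduledExecutorService loop = Executors.newSingleThreadScheduledExecutor();
        List<String> delivered = new CopyOnWriteArrayList<>();
        loop.schedule(() -> delivered.add("A"), 200, TimeUnit.MILLISECONDS); // throttled read
        loop.execute(() -> delivered.add("B"));                              // later read, no wait
        loop.shutdown(); // pending delayed tasks still run by default
        try {
            loop.awaitTermination(2, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return delivered;
    }

    public static void main(String[] args) {
        System.out.println(deliveryOrder()); // [B, A]: the later message overtook the earlier one
    }
}
```

The gold patch below removes the scheduled forwarding path entirely, which eliminates this reordering window.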
| [
"handler/src/main/java/io/netty/handler/traffic/AbstractTrafficShapingHandler.java"
] | [
"handler/src/main/java/io/netty/handler/traffic/AbstractTrafficShapingHandler.java"
] | [] | diff --git a/handler/src/main/java/io/netty/handler/traffic/AbstractTrafficShapingHandler.java b/handler/src/main/java/io/netty/handler/traffic/AbstractTrafficShapingHandler.java
index 7492886e2f6..e4f016048af 100644
--- a/handler/src/main/java/io/netty/handler/traffic/AbstractTrafficShapingHandler.java
+++ b/handler/src/main/java/io/netty/handler/traffic/AbstractTrafficShapingHandler.java
@@ -245,17 +245,6 @@ public void channelRead(final ChannelHandlerContext ctx, final Object msg) throw
}
ctx.executor().schedule(reopenTask, wait,
TimeUnit.MILLISECONDS);
- } else {
- // Create a Runnable to update the next handler in the chain. If one was create before it will
- // just be reused to limit object creation
- Runnable bufferUpdateTask = new Runnable() {
- @Override
- public void run() {
- ctx.fireChannelRead(msg);
- }
- };
- ctx.executor().schedule(bufferUpdateTask, wait, TimeUnit.MILLISECONDS);
- return;
}
}
}
| null | train | train | 2014-05-13T07:25:26 | 2014-02-16T18:25:21Z | racorn | val |
netty/netty/2436_2531 | netty/netty | netty/netty/2436 | netty/netty/2531 | [
"timestamp(timedelta=87.0, similarity=0.8885360305547618)"
] | d0640a6686b735044b74ea3f31ac08da2f62093e | b57f6d833b0c188fb1a37642cb4f814f39aa7eef | [
"Yeah, I was quite lazy when writing this. Another food for thought is: do we really need a swapped buffer? Why don't we just add a little endian getters and setters into `ByteBuf`? It will make it a little bit crowded but it actually should simplify things.\n",
"@trustin lol :) What you mean with getter and se... | [] | 2014-06-04T05:09:43Z | [
"improvement"
] | ByteBuf implementation should only invert bytes if ByteOrder differs from native ByteOrder | @pavanka made me aware of an issue with how we handle native ByteOrder and endianness. Basically we always assume that the native ByteOrder is BIG_ENDIAN, and so we end up inverting bytes when ByteOrder.LITTLE_ENDIAN is set on the ByteBuf and the native ByteOrder is LITTLE_ENDIAN.
We should change our implementation to act more like ByteBuffer, which only inverts bytes if the ByteOrder of the ByteBuf differs from the native ByteOrder.
``` java
//java/nio/ByteBuffer.java
boolean bigEndian = true;
boolean nativeByteOrder = (Bits.byteOrder() == ByteOrder.BIG_ENDIAN);
// default is set as bigEndian, however if we are running on a littleEndian
// machines then Bits.byteOrder() == ByteOrder.LITTLE_ENDIAN so
// nativeOrder = false, till here behavior is the same in netty as well
// now say I set order(LITTLE_ENDIAN) then per this code from
// ByteBuffer.java, nativeByteOrder = true so no swapping in code underneath
public final ByteBuffer order(ByteOrder bo) {
bigEndian = (bo == ByteOrder.BIG_ENDIAN);
nativeByteOrder =
(bigEndian == (Bits.byteOrder() == ByteOrder.BIG_ENDIAN));
return this;
}
//java/nio/DirectByteBuffer.java
//even on a little endian machine nativeOrder = true so no bits swap
private ByteBuffer putInt(long a, int x) {
if (unaligned) {
int y = (x);
unsafe.putInt(a, (nativeByteOrder ? y : Bits.swap(y)));
} else {
Bits.putInt(a, x, bigEndian);
}
return this;
}
```
Compare this with Netty:
``` java
/io/netty/buffer/PooledUnsafeDirectByteBuf.java
private static final boolean NATIVE_ORDER = ByteOrder.nativeOrder() == ByteOrder.BIG_ENDIAN;
// in netty however NATIVE_ORDER is always false on little endian
// machines so underneath
// since NATIVE_ORDER is always false so bytes are always reversed.
@Override
protected void _setInt(int index, int value) {
PlatformDependent.putInt(addr(index), NATIVE_ORDER ? value : Integer.reverseBytes(value));
}
// even setting buf.order() is not of much help because default is always
// assumed to be big-endian so the buffer is wrapped by a swapped
//bytebuf, so any writes are always swapped
@Override
public ByteBuf order(ByteOrder endianness) {
if (endianness == null) {
throw new NullPointerException("endianness");
}
if (endianness == order()) {
return this;
}
SwappedByteBuf swappedBuf = this.swappedBuf;
if (swappedBuf == null) {
this.swappedBuf = swappedBuf = new SwappedByteBuf(this);
}
return swappedBuf;
}
@Override
public ByteOrder order() {
return ByteOrder.BIG_ENDIAN;
}
```
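For comparison, the JDK behavior this issue points at can be observed directly with `java.nio.ByteBuffer` and no Netty types: the byte layout always matches the buffer's configured order, and internally the JDK only swaps when that order differs from `ByteOrder.nativeOrder()`. A small JDK-only sketch:

``` java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class ByteOrderSketch {
    // Writes 0x01020304 with the given order and returns the byte stored first.
    static int firstByte(ByteOrder order) {
        ByteBuffer buf = ByteBuffer.allocate(4).order(order);
        buf.putInt(0x01020304);
        return buf.get(0);
    }

    public static void main(String[] args) {
        // Regardless of the host's native order, the observable layout follows
        // the requested order; the swap happens only when order != nativeOrder().
        System.out.println(firstByte(ByteOrder.BIG_ENDIAN));    // 1 (most significant byte first)
        System.out.println(firstByte(ByteOrder.LITTLE_ENDIAN)); // 4 (least significant byte first)
    }
}
```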
| [
"buffer/src/main/java/io/netty/buffer/AbstractByteBuf.java",
"buffer/src/main/java/io/netty/buffer/PooledUnsafeDirectByteBuf.java",
"buffer/src/main/java/io/netty/buffer/SwappedByteBuf.java",
"buffer/src/main/java/io/netty/buffer/UnpooledUnsafeDirectByteBuf.java",
"microbench/pom.xml"
] | [
"buffer/src/main/java/io/netty/buffer/AbstractByteBuf.java",
"buffer/src/main/java/io/netty/buffer/PooledUnsafeDirectByteBuf.java",
"buffer/src/main/java/io/netty/buffer/SwappedByteBuf.java",
"buffer/src/main/java/io/netty/buffer/UnpooledUnsafeDirectByteBuf.java",
"buffer/src/main/java/io/netty/buffer/Unsaf... | [
"microbench/src/test/java/io/netty/microbench/buffer/SwappedByteBufBenchmark.java"
] | diff --git a/buffer/src/main/java/io/netty/buffer/AbstractByteBuf.java b/buffer/src/main/java/io/netty/buffer/AbstractByteBuf.java
index 1ec4afa71fa..92e4a471bdf 100644
--- a/buffer/src/main/java/io/netty/buffer/AbstractByteBuf.java
+++ b/buffer/src/main/java/io/netty/buffer/AbstractByteBuf.java
@@ -38,7 +38,7 @@ public abstract class AbstractByteBuf extends ByteBuf {
static final ResourceLeakDetector<ByteBuf> leakDetector = new ResourceLeakDetector<ByteBuf>(ByteBuf.class);
int readerIndex;
- private int writerIndex;
+ int writerIndex;
private int markedReaderIndex;
private int markedWriterIndex;
@@ -321,11 +321,18 @@ public ByteBuf order(ByteOrder endianness) {
SwappedByteBuf swappedBuf = this.swappedBuf;
if (swappedBuf == null) {
- this.swappedBuf = swappedBuf = new SwappedByteBuf(this);
+ this.swappedBuf = swappedBuf = newSwappedByteBuf();
}
return swappedBuf;
}
+ /**
+ * Creates a new {@link SwappedByteBuf} for this {@link ByteBuf} instance.
+ */
+ protected SwappedByteBuf newSwappedByteBuf() {
+ return new SwappedByteBuf(this);
+ }
+
@Override
public byte getByte(int index) {
checkIndex(index);
diff --git a/buffer/src/main/java/io/netty/buffer/PooledUnsafeDirectByteBuf.java b/buffer/src/main/java/io/netty/buffer/PooledUnsafeDirectByteBuf.java
index 4d15509021c..795335e4c35 100644
--- a/buffer/src/main/java/io/netty/buffer/PooledUnsafeDirectByteBuf.java
+++ b/buffer/src/main/java/io/netty/buffer/PooledUnsafeDirectByteBuf.java
@@ -381,4 +381,9 @@ public long memoryAddress() {
private long addr(int index) {
return memoryAddress + index;
}
+
+ @Override
+ protected SwappedByteBuf newSwappedByteBuf() {
+ return new UnsafeDirectSwappedByteBuf(this, memoryAddress);
+ }
}
diff --git a/buffer/src/main/java/io/netty/buffer/SwappedByteBuf.java b/buffer/src/main/java/io/netty/buffer/SwappedByteBuf.java
index a11538117eb..3c917c4f849 100644
--- a/buffer/src/main/java/io/netty/buffer/SwappedByteBuf.java
+++ b/buffer/src/main/java/io/netty/buffer/SwappedByteBuf.java
@@ -27,7 +27,7 @@
/**
* Wrapper which swap the {@link ByteOrder} of a {@link ByteBuf}.
*/
-public final class SwappedByteBuf extends ByteBuf {
+public class SwappedByteBuf extends ByteBuf {
private final ByteBuf buf;
private final ByteOrder order;
diff --git a/buffer/src/main/java/io/netty/buffer/UnpooledUnsafeDirectByteBuf.java b/buffer/src/main/java/io/netty/buffer/UnpooledUnsafeDirectByteBuf.java
index 89df31acde7..9b6cb76f78c 100644
--- a/buffer/src/main/java/io/netty/buffer/UnpooledUnsafeDirectByteBuf.java
+++ b/buffer/src/main/java/io/netty/buffer/UnpooledUnsafeDirectByteBuf.java
@@ -514,4 +514,9 @@ public ByteBuf unwrap() {
long addr(int index) {
return memoryAddress + index;
}
+
+ @Override
+ protected SwappedByteBuf newSwappedByteBuf() {
+ return new UnsafeDirectSwappedByteBuf(this, memoryAddress);
+ }
}
diff --git a/buffer/src/main/java/io/netty/buffer/UnsafeDirectSwappedByteBuf.java b/buffer/src/main/java/io/netty/buffer/UnsafeDirectSwappedByteBuf.java
new file mode 100644
index 00000000000..5c346f38943
--- /dev/null
+++ b/buffer/src/main/java/io/netty/buffer/UnsafeDirectSwappedByteBuf.java
@@ -0,0 +1,181 @@
+/*
+* Copyright 2014 The Netty Project
+*
+* The Netty Project licenses this file to you under the Apache License,
+* version 2.0 (the "License"); you may not use this file except in compliance
+* with the License. You may obtain a copy of the License at:
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+* License for the specific language governing permissions and limitations
+* under the License.
+*/
+
+package io.netty.buffer;
+
+import io.netty.util.internal.PlatformDependent;
+
+import java.nio.ByteOrder;
+
+/**
+ * Special {@link SwappedByteBuf} for {@link ByteBuf}s that are backed by a {@code memoryAddress}.
+ */
+final class UnsafeDirectSwappedByteBuf extends SwappedByteBuf {
+ private static final boolean NATIVE_ORDER = ByteOrder.nativeOrder() == ByteOrder.BIG_ENDIAN;
+ private final boolean nativeByteOrder;
+ private final AbstractByteBuf wrapped;
+ private final long memoryAddress;
+
+ UnsafeDirectSwappedByteBuf(AbstractByteBuf buf, long memoryAddress) {
+ super(buf);
+ wrapped = buf;
+ this.memoryAddress = memoryAddress;
+ nativeByteOrder = NATIVE_ORDER == (order() == ByteOrder.BIG_ENDIAN);
+ }
+
+ private long addr(int index) {
+ return memoryAddress + index;
+ }
+
+ @Override
+ public long getLong(int index) {
+ wrapped.checkIndex(index, 8);
+ long v = PlatformDependent.getLong(addr(index));
+ return nativeByteOrder? v : Long.reverseBytes(v);
+ }
+
+ @Override
+ public float getFloat(int index) {
+ return Float.intBitsToFloat(getInt(index));
+ }
+
+ @Override
+ public double getDouble(int index) {
+ return Double.longBitsToDouble(getLong(index));
+ }
+
+ @Override
+ public char getChar(int index) {
+ return (char) getShort(index);
+ }
+
+ @Override
+ public long getUnsignedInt(int index) {
+ return getInt(index) & 0xFFFFFFFFL;
+ }
+
+ @Override
+ public int getInt(int index) {
+ wrapped.checkIndex(index, 4);
+ int v = PlatformDependent.getInt(addr(index));
+ return nativeByteOrder? v : Integer.reverseBytes(v);
+ }
+
+ @Override
+ public int getUnsignedShort(int index) {
+ return getShort(index) & 0xFFFF;
+ }
+
+ @Override
+ public short getShort(int index) {
+ wrapped.checkIndex(index, 2);
+ short v = PlatformDependent.getShort(addr(index));
+ return nativeByteOrder? v : Short.reverseBytes(v);
+ }
+
+ @Override
+ public ByteBuf setShort(int index, int value) {
+ wrapped.checkIndex(index, 2);
+ _setShort(index, value);
+ return this;
+ }
+
+ @Override
+ public ByteBuf setInt(int index, int value) {
+ wrapped.checkIndex(index, 4);
+ _setInt(index, value);
+ return this;
+ }
+
+ @Override
+ public ByteBuf setLong(int index, long value) {
+ wrapped.checkIndex(index, 8);
+ _setLong(index, value);
+ return this;
+ }
+
+ @Override
+ public ByteBuf setChar(int index, int value) {
+ setShort(index, value);
+ return this;
+ }
+
+ @Override
+ public ByteBuf setFloat(int index, float value) {
+ setInt(index, Float.floatToRawIntBits(value));
+ return this;
+ }
+
+ @Override
+ public ByteBuf setDouble(int index, double value) {
+ setLong(index, Double.doubleToRawLongBits(value));
+ return this;
+ }
+
+ @Override
+ public ByteBuf writeShort(int value) {
+ wrapped.ensureWritable(2);
+ _setShort(wrapped.writerIndex, value);
+ wrapped.writerIndex += 2;
+ return this;
+ }
+
+ @Override
+ public ByteBuf writeInt(int value) {
+ wrapped.ensureWritable(4);
+ _setInt(wrapped.writerIndex, value);
+ wrapped.writerIndex += 4;
+ return this;
+ }
+
+ @Override
+ public ByteBuf writeLong(long value) {
+ wrapped.ensureWritable(8);
+ _setLong(wrapped.writerIndex, value);
+ wrapped.writerIndex += 8;
+ return this;
+ }
+
+ @Override
+ public ByteBuf writeChar(int value) {
+ writeShort(value);
+ return this;
+ }
+
+ @Override
+ public ByteBuf writeFloat(float value) {
+ writeInt(Float.floatToRawIntBits(value));
+ return this;
+ }
+
+ @Override
+ public ByteBuf writeDouble(double value) {
+ writeLong(Double.doubleToRawLongBits(value));
+ return this;
+ }
+
+ private void _setShort(int index, int value) {
+ PlatformDependent.putShort(addr(index), nativeByteOrder ? (short) value : Short.reverseBytes((short) value));
+ }
+
+ private void _setInt(int index, int value) {
+ PlatformDependent.putInt(addr(index), nativeByteOrder ? value : Integer.reverseBytes(value));
+ }
+
+ private void _setLong(int index, long value) {
+ PlatformDependent.putLong(addr(index), nativeByteOrder ? value : Long.reverseBytes(value));
+ }
+}
diff --git a/microbench/pom.xml b/microbench/pom.xml
index cdf7a0e9bee..be9b2e27825 100644
--- a/microbench/pom.xml
+++ b/microbench/pom.xml
@@ -47,7 +47,13 @@
<dependency>
<groupId>org.openjdk.jmh</groupId>
<artifactId>jmh-core</artifactId>
- <version>0.4.1</version>
+ <version>0.8</version>
+ </dependency>
+ <dependency>
+ <groupId>org.openjdk.jmh</groupId>
+ <artifactId>jmh-generator-annprocess</artifactId>
+ <version>0.8</version>
+ <scope>provided</scope>
</dependency>
</dependencies>
| diff --git a/microbench/src/test/java/io/netty/microbench/buffer/SwappedByteBufBenchmark.java b/microbench/src/test/java/io/netty/microbench/buffer/SwappedByteBufBenchmark.java
new file mode 100644
index 00000000000..e359a6c23cf
--- /dev/null
+++ b/microbench/src/test/java/io/netty/microbench/buffer/SwappedByteBufBenchmark.java
@@ -0,0 +1,80 @@
+/*
+* Copyright 2014 The Netty Project
+*
+* The Netty Project licenses this file to you under the Apache License,
+* version 2.0 (the "License"); you may not use this file except in compliance
+* with the License. You may obtain a copy of the License at:
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+* License for the specific language governing permissions and limitations
+* under the License.
+*/
+package io.netty.microbench.buffer;
+
+import io.netty.buffer.ByteBuf;
+import io.netty.buffer.SwappedByteBuf;
+import io.netty.buffer.Unpooled;
+import io.netty.microbench.util.AbstractMicrobenchmark;
+import org.openjdk.jmh.annotations.GenerateMicroBenchmark;
+import org.openjdk.jmh.annotations.Measurement;
+import org.openjdk.jmh.annotations.Param;
+import org.openjdk.jmh.annotations.Scope;
+import org.openjdk.jmh.annotations.Setup;
+import org.openjdk.jmh.annotations.State;
+import org.openjdk.jmh.annotations.Warmup;
+
+import java.nio.ByteOrder;
+
+@State(Scope.Benchmark)
+@Warmup(iterations = 10)
+@Measurement(iterations = 25)
+public class SwappedByteBufBenchmark extends AbstractMicrobenchmark {
+ private ByteBuf swappedByteBuf;
+ private ByteBuf unsafeSwappedByteBuf;
+
+ @Setup
+ public void setup() {
+ swappedByteBuf = new SwappedByteBuf(Unpooled.directBuffer(8));
+ unsafeSwappedByteBuf = Unpooled.directBuffer(8).order(ByteOrder.LITTLE_ENDIAN);
+ if (unsafeSwappedByteBuf.getClass().equals(SwappedByteBuf.class)) {
+ throw new IllegalStateException("Should not use " + SwappedByteBuf.class.getSimpleName());
+ }
+ }
+
+ @Param("16384")
+ public int size;
+
+ @GenerateMicroBenchmark
+ public void swappedByteBufSetInt() {
+ swappedByteBuf.setLong(0, size);
+ }
+
+ @GenerateMicroBenchmark
+ public void swappedByteBufSetShort() {
+ swappedByteBuf.setShort(0, size);
+ }
+
+ @GenerateMicroBenchmark
+ public void swappedByteBufSetLong() {
+ swappedByteBuf.setLong(0, size);
+ }
+
+ @GenerateMicroBenchmark
+ public void unsafeSwappedByteBufSetInt() {
+ unsafeSwappedByteBuf.setInt(0, size);
+ }
+
+ @GenerateMicroBenchmark
+ public void unsafeSwappedByteBufSetShort() {
+ unsafeSwappedByteBuf.setShort(0, size);
+ }
+
+ @GenerateMicroBenchmark
+ public void unsafeSwappedByteBufSetLong() {
+ unsafeSwappedByteBuf.setLong(0, size);
+ }
+}
| test | train | 2014-06-03T20:23:44 | 2014-04-29T07:27:11Z | normanmaurer | val |
netty/netty/2565_2566 | netty/netty | netty/netty/2565 | netty/netty/2566 | [
"timestamp(timedelta=48801.0, similarity=0.8672136286651443)"
] | 7d37af5dfb33c9ca67c596f1371aec6bf54d46a9 | aa5e59da2cf9ac4fdb7cac5719151263d35c6564 | [
"@philbaxter thanks merged in :)\n"
] | [] | 2014-06-13T09:36:31Z | [] | sun imports should be marked optional | As per the title. sun.security.util and sun.security.x509 should be marked as optional in the manifest.
| [
"pom.xml"
] | [
"pom.xml"
] | [] | diff --git a/pom.xml b/pom.xml
index 7b742b346a4..a215f129fb8 100644
--- a/pom.xml
+++ b/pom.xml
@@ -766,7 +766,7 @@
<instructions>
<Export-Package>${project.groupId}.*</Export-Package>
<!-- enforce JVM vendor package as optional -->
- <Import-Package>sun.misc.*;resolution:=optional,sun.nio.ch;resolution:=optional,*</Import-Package>
+ <Import-Package>sun.misc.*;resolution:=optional,sun.nio.ch;resolution:=optional,sun.security.*;resolution:=optional,*</Import-Package>
<!-- override "internal" private package convention -->
<Private-Package>!*</Private-Package>
</instructions>
| null | train | train | 2014-06-13T11:02:16 | 2014-06-13T08:07:54Z | philbaxter | val |
netty/netty/2588_2591 | netty/netty | netty/netty/2588 | netty/netty/2591 | [
"timestamp(timedelta=120.0, similarity=0.9140302153138531)"
] | 7162d96ed502f601fcaf7f16fa543caf2456b37a | f06056882c51c48db16f665fc1ddc44eb6ec55c9 | [
"@trustin so either we let ioBuffer() not use size of 0 as default or allow to configure it. I would prefer the first. WDYT ?\n",
"It's due to a historical reason - long time ago in the early days of 4.0.AlphaX, we allocated a buffer per handler. Having a non-zero capacity buffer from the beginning of the connec... | [] | 2014-06-23T05:06:19Z | [
"improvement"
] | MessageToByteEncoder always starts with a ByteBuf that uses initialCapacity == 0 | The problem is that we use ByteBufAllocator.ioBuffer(), which returns a ByteBuf with an initialCapacity of 0 when preferDirect is specified in the constructor.
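Why a zero initial capacity hurts can be shown with a toy model. This is an illustration only; the growth rule below (64-byte floor, then doubling) is an assumption for the sketch, not Netty's actual resize policy: starting from capacity 0 forces a chain of grow-and-copy steps before any payload fits.

``` java
public class InitialCapacitySketch {
    // Counts grow-and-copy steps for a buffer that starts at `initial` bytes
    // and doubles (from an assumed 64-byte floor) until `needed` bytes fit.
    static int expansions(int initial, int needed) {
        int cap = initial;
        int grows = 0;
        while (cap < needed) {
            cap = cap == 0 ? 64 : cap << 1;
            grows++;
        }
        return grows;
    }

    public static void main(String[] args) {
        System.out.println(expansions(0, 4096));    // 7 grow-and-copy steps before the data fits
        System.out.println(expansions(4096, 4096)); // 0 steps with a sensible default capacity
    }
}
```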
| [
"buffer/src/main/java/io/netty/buffer/AbstractByteBufAllocator.java",
"buffer/src/main/java/io/netty/buffer/ByteBufAllocator.java",
"codec/src/main/java/io/netty/handler/codec/MessageToByteEncoder.java"
] | [
"buffer/src/main/java/io/netty/buffer/AbstractByteBufAllocator.java",
"buffer/src/main/java/io/netty/buffer/ByteBufAllocator.java",
"codec/src/main/java/io/netty/handler/codec/MessageToByteEncoder.java"
] | [] | diff --git a/buffer/src/main/java/io/netty/buffer/AbstractByteBufAllocator.java b/buffer/src/main/java/io/netty/buffer/AbstractByteBufAllocator.java
index 4d39bc29eed..34e5d2c6dd4 100644
--- a/buffer/src/main/java/io/netty/buffer/AbstractByteBufAllocator.java
+++ b/buffer/src/main/java/io/netty/buffer/AbstractByteBufAllocator.java
@@ -96,9 +96,9 @@ public ByteBuf buffer(int initialCapacity, int maxCapacity) {
@Override
public ByteBuf ioBuffer() {
if (PlatformDependent.hasUnsafe()) {
- return directBuffer(0);
+ return directBuffer(DEFAULT_INITIAL_CAPACITY);
}
- return heapBuffer(0);
+ return heapBuffer(DEFAULT_INITIAL_CAPACITY);
}
@Override
diff --git a/buffer/src/main/java/io/netty/buffer/ByteBufAllocator.java b/buffer/src/main/java/io/netty/buffer/ByteBufAllocator.java
index e1778020b24..ca7df32ce0b 100644
--- a/buffer/src/main/java/io/netty/buffer/ByteBufAllocator.java
+++ b/buffer/src/main/java/io/netty/buffer/ByteBufAllocator.java
@@ -43,7 +43,7 @@ public interface ByteBufAllocator {
ByteBuf buffer(int initialCapacity, int maxCapacity);
/**
- * Allocate a {@link ByteBuf} whose initial capacity is 0, preferably a direct buffer which is suitable for I/O.
+ * Allocate a {@link ByteBuf}, preferably a direct buffer which is suitable for I/O.
*/
ByteBuf ioBuffer();
diff --git a/codec/src/main/java/io/netty/handler/codec/MessageToByteEncoder.java b/codec/src/main/java/io/netty/handler/codec/MessageToByteEncoder.java
index f6c506b2375..0aab4247c21 100644
--- a/codec/src/main/java/io/netty/handler/codec/MessageToByteEncoder.java
+++ b/codec/src/main/java/io/netty/handler/codec/MessageToByteEncoder.java
@@ -102,11 +102,7 @@ public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise)
if (acceptOutboundMessage(msg)) {
@SuppressWarnings("unchecked")
I cast = (I) msg;
- if (preferDirect) {
- buf = ctx.alloc().ioBuffer();
- } else {
- buf = ctx.alloc().heapBuffer();
- }
+ buf = allocateBuffer(ctx, cast, preferDirect);
try {
encode(ctx, cast, buf);
} finally {
@@ -134,6 +130,19 @@ public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise)
}
}
+ /**
+ * Allocate a {@link ByteBuf} which will be used as argument of {@link #encode(ChannelHandlerContext, I, ByteBuf)}.
+ * Sub-classes may override this method to returna {@link ByteBuf} with a perfect matching {@code initialCapacity}.
+ */
+ protected ByteBuf allocateBuffer(ChannelHandlerContext ctx, @SuppressWarnings("unused") I msg,
+ boolean preferDirect) throws Exception {
+ if (preferDirect) {
+ return ctx.alloc().ioBuffer();
+ } else {
+ return ctx.alloc().heapBuffer();
+ }
+ }
+
/**
* Encode a message into a {@link ByteBuf}. This method will be called for each written message that can be handled
* by this encoder.
| null | train | train | 2014-06-21T12:19:49 | 2014-06-20T16:27:02Z | normanmaurer | val |
netty/netty/2607_2608 | netty/netty | netty/netty/2607 | netty/netty/2608 | [
"timestamp(timedelta=28887.0, similarity=0.8797419366114805)"
] | af042bb7d2afca348948a9dbada1f13da58a5fb7 | 5dabf809f27ecee7062f8fb7983e824b06f81512 | [
"Pull request accepted...issue closed.\n"
] | [] | 2014-06-26T19:06:31Z | [] | HTTP2 Client Example read losing data | The `onDataRead` method in the `Http2ClientConnectionHandler.java` HTTP/2 example is incorrectly re-allocating the ByteBuf when it needs to be resized.
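The data loss can be reproduced in miniature without Netty. A hedged sketch using `StringBuilder` in place of `ByteBuf` (method names are illustrative): the buggy expansion sizes and fills the replacement buffer from the incoming `data` twice, dropping everything collected so far.

``` java
public class ExpandBugSketch {
    // Mirrors the buggy code path: the replacement buffer is filled from
    // `data` twice instead of from `collected` and then `data`.
    static String buggyExpand(String collected, String data) {
        StringBuilder newBuffer = new StringBuilder(data.length() + data.length());
        newBuffer.append(data); // should have appended `collected` here
        newBuffer.append(data);
        return newBuffer.toString();
    }

    static String fixedExpand(String collected, String data) {
        StringBuilder newBuffer = new StringBuilder(collected.length() + data.length());
        newBuffer.append(collected);
        newBuffer.append(data);
        return newBuffer.toString();
    }

    public static void main(String[] args) {
        System.out.println(buggyExpand("abc", "XY")); // XYXY: "abc" is lost
        System.out.println(fixedExpand("abc", "XY")); // abcXY
    }
}
```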
| [
"example/src/main/java/io/netty/example/http2/client/Http2ClientConnectionHandler.java"
] | [
"example/src/main/java/io/netty/example/http2/client/Http2ClientConnectionHandler.java"
] | [] | diff --git a/example/src/main/java/io/netty/example/http2/client/Http2ClientConnectionHandler.java b/example/src/main/java/io/netty/example/http2/client/Http2ClientConnectionHandler.java
index d9face6903c..e2c740a245b 100644
--- a/example/src/main/java/io/netty/example/http2/client/Http2ClientConnectionHandler.java
+++ b/example/src/main/java/io/netty/example/http2/client/Http2ClientConnectionHandler.java
@@ -128,8 +128,8 @@ public void onDataRead(ChannelHandlerContext ctx, int streamId, ByteBuf data, in
collectedData.writeBytes(data);
} else {
// Expand the buffer
- ByteBuf newBuffer = ctx().alloc().buffer(data.readableBytes() + available);
- newBuffer.writeBytes(data);
+ ByteBuf newBuffer = ctx().alloc().buffer(collectedData.readableBytes() + available);
+ newBuffer.writeBytes(collectedData);
newBuffer.writeBytes(data);
collectedData.release();
collectedData = newBuffer;
| null | train | train | 2014-06-26T12:24:44 | 2014-06-26T18:58:28Z | Scottmitch | val |
netty/netty/2642_2645 | netty/netty | netty/netty/2642 | netty/netty/2645 | [
"timestamp(timedelta=35.0, similarity=1.0000000000000002)"
] | 696e355a7098dadb46edeb19148e2e222789d6e7 | 8db5346eda5aef86df47662a0363bf8b1b58e4c2 | [
"> roughly 1.79 GB or 1.40% of the GC pressure during performance testing\n\nOn a curious sidenote, what profiler do you use to get to these numbers?\n",
"@jakobbuchgraber you can use Java Mission Control for this.\n",
"@jakobbuchgraber I used Java Mission Control's Flight Record to get this data.\n"
] | [] | 2014-07-08T17:37:38Z | [
"improvement"
] | CompositeByteBuf.deallocate memory improvement | In profiling my application, I have noticed that `CompositeByteBuf.deallocate` generates roughly 1.79 GB of `java.util.ArrayList$Itr` instances, or 1.40% of the GC pressure during performance testing. This is due to it using the 'enhanced' for-loop. This can be resolved by converting to the old-style for loop as shown below:
From:
``` java
for (Component c: components) {
c.freeIfNecessary();
}
```
To:
``` java
int size = components.size();
for (int i = 0; i < size; i++) {
components.get(i).freeIfNecessary();
}
```
Tomorrow, when I'm more awake, I'll submit a pull request if @normanmaurer or @trustin don't get to it before me :smile:
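The allocation the profiler flagged can be seen with a plain `ArrayList`: each enhanced for-loop compiles to a `list.iterator()` call, and `ArrayList` hands back a fresh `ArrayList$Itr` object every time. A minimal JDK-only check:

``` java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class IteratorGarbageSketch {
    // True when the list allocates a new Iterator object per traversal,
    // which is exactly the per-deallocate garbage the indexed loop avoids.
    static boolean allocatesFreshIterator(List<?> list) {
        return list.iterator() != list.iterator();
    }

    public static void main(String[] args) {
        List<Integer> components = new ArrayList<>(Arrays.asList(1, 2, 3));
        System.out.println(allocatesFreshIterator(components)); // true
    }
}
```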
| [
"buffer/src/main/java/io/netty/buffer/CompositeByteBuf.java"
] | [
"buffer/src/main/java/io/netty/buffer/CompositeByteBuf.java"
] | [] | diff --git a/buffer/src/main/java/io/netty/buffer/CompositeByteBuf.java b/buffer/src/main/java/io/netty/buffer/CompositeByteBuf.java
index 818c97569f4..d5cf14c27da 100644
--- a/buffer/src/main/java/io/netty/buffer/CompositeByteBuf.java
+++ b/buffer/src/main/java/io/netty/buffer/CompositeByteBuf.java
@@ -1599,8 +1599,11 @@ protected void deallocate() {
}
freed = true;
- for (Component c: components) {
- c.freeIfNecessary();
+ int size = components.size();
+ // We're not using foreach to avoid creating an iterator.
+ // see https://github.com/netty/netty/issues/2642
+ for (int i = 0; i < size; i++) {
+ components.get(i).freeIfNecessary();
}
if (leak != null) {
| null | test | train | 2014-07-07T09:44:50 | 2014-07-07T20:50:04Z | blucas | val |
netty/netty/2659_2676 | netty/netty | netty/netty/2659 | netty/netty/2676 | [
"timestamp(timedelta=41.0, similarity=0.9051057703458023)"
] | ea62455f628abe8e13ac1583d8ea7f0c511e19f8 | 50186fbcf53e5e9943bc9ce945d48d9f82323275 | [
"opinali@ thanks for the find! Do you think you could change the code to capture the state of the map when the assertion occurs?\n",
"@nmittler shouldn't be hard; let me try this.\n",
"well, it's being harder than expected... I have added code to dump the map state, but after redeploy, couldn't reproduce the e... | [
"Please use for (;;) as we use for(;;) for this kind of stuff everywhere in netty. Just for the sake of consistency\n",
"IllegalStateException ?\n",
"I think if we do v.equals(values[i]) we can eliminate the null check\n",
"return false if null is given as value\n",
"same as above\n",
"Please replace wit... | 2014-07-19T22:20:47Z | [
"defect"
] | Bug in the new IntObjectHashMap.java | "Should never happen" condition has happened :) This may be difficult to reproduce, I noticed two events like this spaced by ~5 minutes on a server processing 20Kqps.
22:59:04.632 [epollEventLoopGroup-3-3] WARN io.netty.channel.AbstractChannel - Failed to close a channel.
java.io.IOException: Error closing file descriptor
at io.netty.channel.epoll.Native.close(Native Method) ~[netty-all-4.0.21.Final.jar:4.0.21.Final]
at io.netty.channel.epoll.AbstractEpollChannel.doClose(AbstractEpollChannel.java:66) ~[netty-all-4.0.21.Final.jar:4.0.21.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe.closeForcibly(AbstractChannel.java:580) [netty-all-4.0.21.Final.jar:4.0.21.Final]
at io.netty.bootstrap.ServerBootstrap$ServerBootstrapAcceptor.forceClose(ServerBootstrap.java:266) [netty-all-4.0.21.Final.jar:4.0.21.Final]
at io.netty.bootstrap.ServerBootstrap$ServerBootstrapAcceptor.access$100(ServerBootstrap.java:213) [netty-all-4.0.21.Final.jar:4.0.21.Final]
at io.netty.bootstrap.ServerBootstrap$ServerBootstrapAcceptor$1.operationComplete(ServerBootstrap.java:256) [netty-all-4.0.21.Final.jar:4.0.21.Final]
at io.netty.bootstrap.ServerBootstrap$ServerBootstrapAcceptor$1.operationComplete(ServerBootstrap.java:252) [netty-all-4.0.21.Final.jar:4.0.21.Final]
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:679) [netty-all-4.0.21.Final.jar:4.0.21.Final]
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:565) [netty-all-4.0.21.Final.jar:4.0.21.Final]
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:425) [netty-all-4.0.21.Final.jar:4.0.21.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:732) [netty-all-4.0.21.Final.jar:4.0.21.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe.register0(AbstractChannel.java:451) [netty-all-4.0.21.Final.jar:4.0.21.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe.access$100(AbstractChannel.java:375) [netty-all-4.0.21.Final.jar:4.0.21.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe$1.run(AbstractChannel.java:419) [netty-all-4.0.21.Final.jar:4.0.21.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:370) [netty-all-4.0.21.Final.jar:4.0.21.Final]
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:268) [netty-all-4.0.21.Final.jar:4.0.21.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) [netty-all-4.0.21.Final.jar:4.0.21.Final]
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137) [netty-all-4.0.21.Final.jar:4.0.21.Final]
at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
22:59:04.633 [epollEventLoopGroup-3-3] WARN io.netty.bootstrap.ServerBootstrap - Failed to register an accepted channel: [id: 0x73af18bb, /74.125.183.1:42353 :> /10.203.57.186:8080]
java.lang.AssertionError: Unable to insert
at io.netty.util.collection.IntObjectHashMap.put(IntObjectHashMap.java:149) ~[netty-all-4.0.21.Final.jar:4.0.21.Final]
at io.netty.channel.epoll.EpollEventLoop.add(EpollEventLoop.java:132) [netty-all-4.0.21.Final.jar:4.0.21.Final]
at io.netty.channel.epoll.AbstractEpollChannel.doRegister(AbstractEpollChannel.java:156) ~[netty-all-4.0.21.Final.jar:4.0.21.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe.register0(AbstractChannel.java:440) [netty-all-4.0.21.Final.jar:4.0.21.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe.access$100(AbstractChannel.java:375) [netty-all-4.0.21.Final.jar:4.0.21.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe$1.run(AbstractChannel.java:419) [netty-all-4.0.21.Final.jar:4.0.21.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:370) [netty-all-4.0.21.Final.jar:4.0.21.Final]
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:268) [netty-all-4.0.21.Final.jar:4.0.21.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) [netty-all-4.0.21.Final.jar:4.0.21.Final]
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137) [netty-all-4.0.21.Final.jar:4.0.21.Final]
at java.lang.Thread.run(Thread.java:744) [na:1.7.0_55]
| [
"common/src/main/java/io/netty/util/collection/IntObjectHashMap.java"
] | [
"common/src/main/java/io/netty/util/collection/IntObjectHashMap.java"
] | [
"common/src/test/java/io/netty/util/collection/IntObjectHashMapTest.java"
] | diff --git a/common/src/main/java/io/netty/util/collection/IntObjectHashMap.java b/common/src/main/java/io/netty/util/collection/IntObjectHashMap.java
index da5dc9e834d..34480188d61 100644
--- a/common/src/main/java/io/netty/util/collection/IntObjectHashMap.java
+++ b/common/src/main/java/io/netty/util/collection/IntObjectHashMap.java
@@ -21,40 +21,36 @@
import java.util.NoSuchElementException;
/**
- * A hash map implementation of {@link IntObjectMap} that uses open addressing for keys. To minimize
- * the memory footprint, this class uses open addressing rather than chaining. Collisions are
- * resolved using double hashing.
+ * A hash map implementation of {@link IntObjectMap} that uses open addressing for keys.
+ * To minimize the memory footprint, this class uses open addressing rather than chaining.
+ * Collisions are resolved using linear probing. Deletions implement compaction, so cost of
+ * remove can approach O(N) for full maps, which makes a small loadFactor recommended.
*
* @param <V> The value type stored in the map.
*/
public class IntObjectHashMap<V> implements IntObjectMap<V>, Iterable<IntObjectMap.Entry<V>> {
- /** State indicating that a slot is available.*/
- private static final byte AVAILABLE = 0;
-
- /** State indicating that a slot is occupied. */
- private static final byte OCCUPIED = 1;
-
- /** State indicating that a slot was removed. */
- private static final byte REMOVED = 2;
-
/** Default initial capacity. Used if not specified in the constructor */
private static final int DEFAULT_CAPACITY = 11;
/** Default load factor. Used if not specified in the constructor */
private static final float DEFAULT_LOAD_FACTOR = 0.5f;
+ /**
+ * Placeholder for null values, so we can use the actual null to mean available.
+ * (Better than using a placeholder for available: less references for GC processing.)
+ */
+ private static final Object NULL_VALUE = new Object();
+
/** The maximum number of elements allowed without allocating more space. */
private int maxSize;
/** The load factor for the map. Used to calculate {@link #maxSize}. */
private final float loadFactor;
- private byte[] states;
private int[] keys;
private V[] values;
private int size;
- private int available;
public IntObjectHashMap() {
this(DEFAULT_CAPACITY, DEFAULT_LOAD_FACTOR);
@@ -68,93 +64,71 @@ public IntObjectHashMap(int initialCapacity, float loadFactor) {
if (initialCapacity < 1) {
throw new IllegalArgumentException("initialCapacity must be >= 1");
}
- if (loadFactor <= 0.0f) {
- throw new IllegalArgumentException("loadFactor must be > 0");
+ if (loadFactor <= 0.0f || loadFactor > 1.0f) {
+ // Cannot exceed 1 because we can never store more than capacity elements;
+ // using a bigger loadFactor would trigger rehashing before the desired load is reached.
+ throw new IllegalArgumentException("loadFactor must be > 0 and <= 1");
}
this.loadFactor = loadFactor;
// Adjust the initial capacity if necessary.
- initialCapacity = adjustCapacity(initialCapacity);
+ int capacity = adjustCapacity(initialCapacity);
// Allocate the arrays.
- states = new byte[initialCapacity];
- keys = new int[initialCapacity];
- @SuppressWarnings({ "unchecked", "SuspiciousArrayCast" })
- V[] temp = (V[]) new Object[initialCapacity];
+ keys = new int[capacity];
+ @SuppressWarnings({ "unchecked", })
+ V[] temp = (V[]) new Object[capacity];
values = temp;
// Initialize the maximum size value.
- maxSize = calcMaxSize(initialCapacity);
+ maxSize = calcMaxSize(capacity);
+ }
- // Initialize the available element count
- available = initialCapacity - size;
+ private static <T> T toExternal(T value) {
+ return value == NULL_VALUE ? null : value;
+ }
+
+ @SuppressWarnings("unchecked")
+ private static <T> T toInternal(T value) {
+ return value == null ? (T) NULL_VALUE : value;
}
@Override
public V get(int key) {
int index = indexOf(key);
- return index < 0 ? null : values[index];
+ return index == -1 ? null : toExternal(values[index]);
}
@Override
public V put(int key, V value) {
- int hash = hash(key);
- int capacity = capacity();
- int index = hash % capacity;
- int increment = 1 + hash % (capacity - 2);
- final int startIndex = index;
- int firstRemovedIndex = -1;
- do {
- switch (states[index]) {
- case AVAILABLE:
- // We only stop probing at a AVAILABLE node, since the value may still exist
- // beyond
- // a REMOVED node.
- if (firstRemovedIndex != -1) {
- // We encountered a REMOVED node prior. Store the entry there so that
- // retrieval
- // will be faster.
- insertAt(firstRemovedIndex, key, value);
- return null;
- }
-
- // No REMOVED node, just store the entry here.
- insertAt(index, key, value);
- return null;
- case OCCUPIED:
- if (keys[index] == key) {
- V previousValue = values[index];
- insertAt(index, key, value);
- return previousValue;
- }
- break;
- case REMOVED:
- // Check for first removed index.
- if (firstRemovedIndex == -1) {
- firstRemovedIndex = index;
- }
- break;
- default:
- throw new AssertionError("Invalid state: " + states[index]);
+ int startIndex = hashIndex(key);
+ int index = startIndex;
+
+ for (;;) {
+ if (values[index] == null) {
+ // Found empty slot, use it.
+ keys[index] = key;
+ values[index] = toInternal(value);
+ growSize();
+ return null;
+ } else if (keys[index] == key) {
+ // Found existing entry with this key, just replace the value.
+ V previousValue = values[index];
+ values[index] = toInternal(value);
+ return toExternal(previousValue);
}
- // REMOVED or OCCUPIED but wrong key, keep probing ...
- index += increment;
- if (index >= capacity) {
- // Handle wrap-around by decrement rather than mod.
- index -= capacity;
+ // Conflict, keep probing ...
+ if ((index = probeNext(index)) == startIndex) {
+ // Can only happen if the map was full at MAX_ARRAY_SIZE and couldn't grow.
+ throw new IllegalStateException("Unable to insert");
}
- } while (index != startIndex);
-
- if (firstRemovedIndex == -1) {
- // Should never happen.
- throw new AssertionError("Unable to insert");
}
+ }
- // Never found a AVAILABLE slot, just use the first REMOVED.
- insertAt(firstRemovedIndex, key, value);
- return null;
+ private int probeNext(int index) {
+ return index == values.length - 1 ? 0 : index + 1;
}
@Override
@@ -162,9 +136,11 @@ public void putAll(IntObjectMap<V> sourceMap) {
if (sourceMap instanceof IntObjectHashMap) {
// Optimization - iterate through the arrays.
IntObjectHashMap<V> source = (IntObjectHashMap<V>) sourceMap;
- int i = -1;
- while ((i = source.nextEntryIndex(i + 1)) >= 0) {
- put(source.keys[i], source.values[i]);
+ for (int i = 0; i < source.values.length; ++i) {
+ V sourceValue = source.values[i];
+ if (sourceValue != null) {
+ put(source.keys[i], sourceValue);
+ }
}
return;
}
@@ -178,13 +154,13 @@ public void putAll(IntObjectMap<V> sourceMap) {
@Override
public V remove(int key) {
int index = indexOf(key);
- if (index < 0) {
+ if (index == -1) {
return null;
}
V prev = values[index];
removeAt(index);
- return prev;
+ return toExternal(prev);
}
@Override
@@ -199,10 +175,9 @@ public boolean isEmpty() {
@Override
public void clear() {
- Arrays.fill(states, AVAILABLE);
+ Arrays.fill(keys, 0);
Arrays.fill(values, null);
size = 0;
- available = capacity();
}
@Override
@@ -212,10 +187,10 @@ public boolean containsKey(int key) {
@Override
public boolean containsValue(V value) {
- int i = -1;
- while ((i = nextEntryIndex(i + 1)) >= 0) {
- V next = values[i];
- if (value == next || value != null && value.equals(next)) {
+ V v = toInternal(value);
+ for (int i = 0; i < values.length; ++i) {
+ // The map supports null values; this will be matched as NULL_VALUE.equals(NULL_VALUE).
+ if (values[i] != null && values[i].equals(v)) {
return true;
}
}
@@ -235,7 +210,12 @@ public Iterator<Entry<V>> iterator() {
@Override
public int[] keys() {
int[] outKeys = new int[size()];
- copyEntries(keys, outKeys);
+ int targetIx = 0;
+ for (int i = 0; i < values.length; ++i) {
+ if (values[i] != null) {
+ outKeys[targetIx++] = keys[i];
+ }
+ }
return outKeys;
}
@@ -243,58 +223,63 @@ public int[] keys() {
public V[] values(Class<V> clazz) {
@SuppressWarnings("unchecked")
V[] outValues = (V[]) Array.newInstance(clazz, size());
- copyEntries(values, outValues);
+ int targetIx = 0;
+ for (int i = 0; i < values.length; ++i) {
+ if (values[i] != null) {
+ outValues[targetIx++] = values[i];
+ }
+ }
return outValues;
}
@Override
public int hashCode() {
- final int prime = 31;
- int result = 1;
- for (Entry<V> entry : entries()) {
- V value = entry.value();
- int hash = value == null ? 0 : value.hashCode();
- result = prime * result + hash;
+ // Hashcode is based on all non-zero, valid keys. We have to scan the whole keys
+ // array, which may have different lengths for two maps of same size(), so the
+ // capacity cannot be used as input for hashing but the size can.
+ int hash = size;
+ for (int i = 0; i < keys.length; ++i) {
+ // 0 can be a valid key or unused slot, but won't impact the hashcode in either case.
+ // This way we can use a cheap loop without conditionals, or hard-to-unroll operations,
+ // or the devastatingly bad memory locality of visiting value objects.
+ // Also, it's important to use a hash function that does not depend on the ordering
+ // of terms, only their values; since the map is an unordered collection and
+ // entries can end up in different positions in different maps that have the same
+ // elements, but with different history of puts/removes, due to conflicts.
+ hash = hash ^ keys[i];
}
- return result;
+ return hash;
}
@Override
public boolean equals(Object obj) {
- if (this == obj || obj == null || getClass() != obj.getClass()) {
+ if (this == obj) {
return true;
+ } else if (!(obj instanceof IntObjectMap)) {
+ return false;
}
@SuppressWarnings("rawtypes")
- IntObjectHashMap other = (IntObjectHashMap) obj;
- if (size != other.size) {
+ IntObjectMap other = (IntObjectMap) obj;
+ if (size != other.size()) {
return false;
}
- for (Entry<V> entry : entries()) {
- V value = entry.value();
- Object otherValue = other.get(entry.key());
- if (value == null) {
- if (otherValue != null || !other.containsKey(entry.key())) {
+ for (int i = 0; i < values.length; ++i) {
+ V value = values[i];
+ if (value != null) {
+ int key = keys[i];
+ Object otherValue = other.get(key);
+ if (value == NULL_VALUE) {
+ if (otherValue != null) {
+ return false;
+ }
+ } else if (!value.equals(otherValue)) {
return false;
}
- } else if (!value.equals(otherValue)) {
- return false;
}
}
return true;
}
- /**
- * Copies the occupied entries from the source to the target array.
- */
- private void copyEntries(Object sourceArray, Object targetArray) {
- int sourceIx = -1;
- int targetIx = 0;
- while ((sourceIx = nextEntryIndex(sourceIx + 1)) >= 0) {
- Object obj = Array.get(sourceArray, sourceIx);
- Array.set(targetArray, targetIx++, obj);
- }
- }
-
/**
* Locates the index for the given key. This method probes using double hashing.
*
@@ -302,117 +287,94 @@ private void copyEntries(Object sourceArray, Object targetArray) {
* @return the index where the key was found, or {@code -1} if no entry is found for that key.
*/
private int indexOf(int key) {
- int hash = hash(key);
- int capacity = capacity();
- int increment = 1 + hash % (capacity - 2);
- int index = hash % capacity;
- int startIndex = index;
- do {
- switch(states[index]) {
- case AVAILABLE:
- // It's available, so no chance that this value exists anywhere in the map.
- return -1;
- case OCCUPIED:
- if (key == keys[index]) {
- // Found it!
- return index;
- }
- break;
- default:
- break;
+ int startIndex = hashIndex(key);
+ int index = startIndex;
+
+ for (;;) {
+ if (values[index] == null) {
+ // It's available, so no chance that this value exists anywhere in the map.
+ return -1;
+ } else if (key == keys[index]) {
+ return index;
}
- // REMOVED or OCCUPIED but wrong key, keep probing ...
- index += increment;
- if (index >= capacity) {
- // Handle wrap-around by decrement rather than mod.
- index -= capacity;
+ // Conflict, keep probing ...
+ if ((index = probeNext(index)) == startIndex) {
+ return -1;
}
- } while (index != startIndex);
-
- // Got back to the beginning. Not found.
- return -1;
- }
-
- /**
- * Determines the current capacity (i.e. size of the arrays).
- */
- private int capacity() {
- return keys.length;
+ }
}
/**
- * Creates a hash value for the given key.
+ * Returns the hashed index for the given key.
*/
- private static int hash(int key) {
- // Just make sure the integer is positive.
- return key & Integer.MAX_VALUE;
+ private int hashIndex(int key) {
+ return key % keys.length;
}
/**
- * Performs an insert of the key/value at the given index position. If necessary, performs a
- * rehash of the map.
- *
- * @param index the index at which to insert the key/value
- * @param key the entry key
- * @param value the entry value
+ * Grows the map size after an insertion. If necessary, performs a rehash of the map.
*/
- private void insertAt(int index, int key, V value) {
- byte state = states[index];
- if (state != OCCUPIED) {
- // Added a new mapping, increment the size.
- size++;
-
- if (state == AVAILABLE) {
- // Consumed a OCCUPIED slot, decrement the number of available slots.
- available--;
- }
- }
-
- keys[index] = key;
- values[index] = value;
- states[index] = OCCUPIED;
+ private void growSize() {
+ size++;
if (size > maxSize) {
- // Need to grow the arrays.
- rehash(adjustCapacity(capacity() * 2));
- } else if (available == 0) {
+ // Need to grow the arrays. We take care to detect integer overflow,
+ // also limit array size to ArrayList.MAX_ARRAY_SIZE.
+ rehash(adjustCapacity((int) Math.min(keys.length * 2.0, Integer.MAX_VALUE - 8)));
+ } else if (size == keys.length) {
// Open addressing requires that we have at least 1 slot available. Need to refresh
// the arrays to clear any removed elements.
- rehash(capacity());
+ rehash(keys.length);
}
}
/**
* Adjusts the given capacity value to ensure that it's odd. Even capacities can break probing.
- * TODO: would be better to ensure it's prime as well.
*/
private static int adjustCapacity(int capacity) {
return capacity | 1;
}
/**
- * Marks the entry at the given index position as {@link #REMOVED} and sets the value to
- * {@code null}.
- * <p>
- * TODO: consider performing re-compaction.
+ * Removes entry at the given index position. Also performs opportunistic, incremental rehashing
+ * if necessary to not break conflict chains.
*
* @param index the index position of the element to remove.
*/
private void removeAt(int index) {
- if (states[index] == OCCUPIED) {
- size--;
- }
- states[index] = REMOVED;
+ --size;
+ // Clearing the key is not strictly necessary (for GC like in a regular collection),
+ // but recommended for security. The memory location is still fresh in the cache anyway.
+ keys[index] = 0;
values[index] = null;
+
+ // In the interval from index to the next available entry, the arrays may have entries
+ // that are displaced from their base position due to prior conflicts. Iterate these
+ // entries and move them back if possible, optimizing future lookups.
+ // Knuth Section 6.4 Algorithm R, also used by the JDK's IdentityHashMap.
+
+ int nextFree = index;
+ for (int i = probeNext(index); values[i] != null; i = probeNext(i)) {
+ int bucket = hashIndex(keys[i]);
+ if ((i < bucket && (bucket <= nextFree || nextFree <= i))
+ || (bucket <= nextFree && nextFree <= i)) {
+ // Move the displaced entry "back" to the first available position.
+ keys[nextFree] = keys[i];
+ values[nextFree] = values[i];
+ // Put the first entry after the displaced entry
+ keys[i] = 0;
+ values[i] = null;
+ nextFree = i;
+ }
+ }
}
/**
* Calculates the maximum size allowed before rehashing.
*/
private int calcMaxSize(int capacity) {
- // Clip the upper bound so that there will always be at least one
- // available slot.
+ // Clip the upper bound so that there will always be at least one available slot.
int upperBound = capacity - 1;
return Math.min(upperBound, (int) (capacity * loadFactor));
}
@@ -423,61 +385,62 @@ private int calcMaxSize(int capacity) {
* @param newCapacity the new capacity for the map.
*/
private void rehash(int newCapacity) {
- int oldCapacity = capacity();
int[] oldKeys = keys;
V[] oldVals = values;
- byte[] oldStates = states;
- // New states array is automatically initialized to AVAILABLE (i.e. 0 == AVAILABLE).
- states = new byte[newCapacity];
keys = new int[newCapacity];
- @SuppressWarnings({ "unchecked", "SuspiciousArrayCast" })
+ @SuppressWarnings({ "unchecked" })
V[] temp = (V[]) new Object[newCapacity];
values = temp;
- size = 0;
- available = newCapacity;
maxSize = calcMaxSize(newCapacity);
- // Insert the new states.
- for (int i = 0; i < oldCapacity; ++i) {
- if (oldStates[i] == OCCUPIED) {
- put(oldKeys[i], oldVals[i]);
- }
- }
- }
+ // Insert to the new arrays.
+ for (int i = 0; i < oldVals.length; ++i) {
+ V oldVal = oldVals[i];
+ if (oldVal != null) {
+ // Inlined put(), but much simpler: we don't need to worry about
+ // duplicated keys, growing/rehashing, or failing to insert.
+ int oldKey = oldKeys[i];
+ int startIndex = hashIndex(oldKey);
+ int index = startIndex;
+
+ for (;;) {
+ if (values[index] == null) {
+ keys[index] = oldKey;
+ values[index] = toInternal(oldVal);
+ break;
+ }
- /**
- * Returns the next index of the next entry in the map.
- *
- * @param index the index at which to begin the search.
- * @return the index of the next entry, or {@code -1} if not found.
- */
- private int nextEntryIndex(int index) {
- int capacity = capacity();
- for (; index < capacity; ++index) {
- if (states[index] == OCCUPIED) {
- return index;
+ // Conflict, keep probing. Can wrap around, but never reaches startIndex again.
+ index = probeNext(index);
+ }
}
}
- return -1;
}
/**
* Iterator for traversing the entries in this map.
*/
- private final class IteratorImpl implements Iterator<Entry<V>> {
- int prevIndex;
- int nextIndex;
-
- IteratorImpl() {
- prevIndex = -1;
- nextIndex = nextEntryIndex(0);
+ private final class IteratorImpl implements Iterator<Entry<V>>, Entry<V> {
+ private int prevIndex = -1;
+ private int nextIndex = -1;
+ private int entryIndex = -1;
+
+ private void scanNext() {
+ for (;;) {
+ if (++nextIndex == values.length || values[nextIndex] != null) {
+ break;
+ }
+ }
}
@Override
public boolean hasNext() {
- return nextIndex >= 0;
+ if (nextIndex == -1) {
+ scanNext();
+ }
+ return nextIndex < keys.length;
}
@Override
@@ -487,43 +450,54 @@ public Entry<V> next() {
}
prevIndex = nextIndex;
- nextIndex = nextEntryIndex(nextIndex + 1);
- return new EntryImpl(prevIndex);
+ scanNext();
+
+ // Always return the same Entry object, just change its index each time.
+ entryIndex = prevIndex;
+ return this;
}
@Override
public void remove() {
if (prevIndex < 0) {
- throw new IllegalStateException("Next must be called before removing.");
+ throw new IllegalStateException("next must be called before each remove.");
}
removeAt(prevIndex);
prevIndex = -1;
}
- }
- /**
- * {@link Entry} implementation that just references the key/value at the given index position.
- */
- private final class EntryImpl implements Entry<V> {
- final int index;
-
- EntryImpl(int index) {
- this.index = index;
- }
+ // Entry implementation. Since this implementation uses a single Entry, we coalesce that
+ // into the Iterator object (potentially making loop optimization much easier).
@Override
public int key() {
- return keys[index];
+ return keys[entryIndex];
}
@Override
public V value() {
- return values[index];
+ return toExternal(values[entryIndex]);
}
@Override
public void setValue(V value) {
- values[index] = value;
+ values[entryIndex] = toInternal(value);
+ }
+ }
+
+ @Override
+ public String toString() {
+ if (size == 0) {
+ return "{}";
+ }
+ StringBuilder sb = new StringBuilder(4 * size);
+ for (int i = 0; i < values.length; ++i) {
+ V value = values[i];
+ if (value != null) {
+ sb.append(sb.length() == 0 ? "{" : ", ");
+ sb.append(keys[i]).append('=').append(value == this ? "(this Map)" : value);
+ }
}
+ return sb.append('}').toString();
}
}
| diff --git a/common/src/test/java/io/netty/util/collection/IntObjectHashMapTest.java b/common/src/test/java/io/netty/util/collection/IntObjectHashMapTest.java
index 7f50134ef85..ec291df8748 100644
--- a/common/src/test/java/io/netty/util/collection/IntObjectHashMapTest.java
+++ b/common/src/test/java/io/netty/util/collection/IntObjectHashMapTest.java
@@ -14,14 +14,21 @@
*/
package io.netty.util.collection;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertSame;
+import static org.junit.Assert.assertTrue;
+
import org.junit.Before;
import org.junit.Test;
+import java.util.Arrays;
+import java.util.HashMap;
import java.util.HashSet;
+import java.util.Random;
import java.util.Set;
-import static org.junit.Assert.*;
-
/**
* Tests for {@link IntObjectHashMap}.
*/
@@ -279,4 +286,121 @@ public void valuesShouldBeReturned() {
}
assertEquals(expected, found);
}
+
+ @Test
+ public void mapShouldSupportHashingConflicts() {
+ for (int mod = 0; mod < 10; ++mod) {
+ for (int sz = 1; sz <= 101; sz += 2) {
+ IntObjectHashMap<String> map = new IntObjectHashMap<String>(sz);
+ for (int i = 0; i < 100; ++i) {
+ map.put(i * mod, "");
+ }
+ }
+ }
+ }
+
+ @Test
+ public void hashcodeEqualsTest() {
+ IntObjectHashMap<Integer> map1 = new IntObjectHashMap<Integer>();
+ IntObjectHashMap<Integer> map2 = new IntObjectHashMap<Integer>();
+ Random rnd = new Random(0);
+ while (map1.size() < 100) {
+ int key = rnd.nextInt(100);
+ map1.put(key, key);
+ map2.put(key, key);
+ }
+ assertEquals(map1.hashCode(), map2.hashCode());
+ assertTrue(map1.equals(map2));
+ // Remove one "middle" element, maps should now be non-equals.
+ int[] keys = map1.keys();
+ map2.remove(keys[50]);
+ assertFalse(map1.equals(map2));
+ // Put it back; will likely be in a different position, but maps will be equal again.
+ map2.put(keys[50], map1.keys()[50]);
+ assertTrue(map1.equals(map2));
+ assertEquals(map1.hashCode(), map2.hashCode());
+ // Make map2 have one extra element, will be non-equal.
+ map2.put(1000, 1000);
+ assertFalse(map1.equals(map2));
+ // Rebuild map2 with elements in a different order, again the maps should be equal.
+ // (These tests with same elements in different order also show that the hashCode
+ // function does not depend on the internal ordering of entries.)
+ map2.clear();
+ Arrays.sort(keys);
+ for (int key : keys) {
+ map2.put(key, key);
+ }
+ assertEquals(map1.hashCode(), map2.hashCode());
+ assertTrue(map1.equals(map2));
+ }
+
+ @Test
+ public void fuzzTest() {
+ // This test is so extremely internals-dependent that I'm not even trying to
+ // minimize that. Any internal changes will not fail the test (so it's not flaky per se)
+ // but will possibly make it less effective (not test interesting scenarios anymore).
+
+ // The RNG algorithm is specified and stable, so this will cause the same exact dataset
+ // to be used in every run and every JVM implementation.
+ Random rnd = new Random(0);
+
+ int baseSize = 1000;
+ // Empirically-determined size to expand the capacity exactly once, and before
+ // the step that creates the long conflict chain. We need to test rehash(),
+ // but also control when rehash happens because it cleans up the REMOVED entries.
+ // This size is also chosen so after the single rehash, the map will be densely
+ // populated, getting close to a second rehash but not triggering it.
+ int startTableSize = 1105;
+ IntObjectHashMap<Integer> map = new IntObjectHashMap<Integer>(startTableSize);
+ // Reference map which implementation we trust to be correct, will mirror all operations.
+ HashMap<Integer, Integer> goodMap = new HashMap<Integer, Integer>();
+
+ // Add initial population.
+ for (int i = 0; i < baseSize / 4; ++i) {
+ int key = rnd.nextInt(baseSize);
+ assertEquals(goodMap.put(key, key), map.put(key, key));
+ // 50% elements are multiple of a divisor of startTableSize => more conflicts.
+ key = rnd.nextInt(baseSize) * 17;
+ assertEquals(goodMap.put(key, key), map.put(key, key));
+ }
+
+ // Now do some mixed adds and removes for further fuzzing
+ // Rehash will happen here, but only once, and the final size will be closer to max.
+ for (int i = 0; i < baseSize * 1000; ++i) {
+ int key = rnd.nextInt(baseSize);
+ if (rnd.nextDouble() >= 0.2) {
+ assertEquals(goodMap.put(key, key), map.put(key, key));
+ } else {
+ assertEquals(goodMap.remove(key), map.remove(key));
+ }
+ }
+
+ // Final batch of fuzzing, only searches and removes.
+ int removeSize = map.size() / 2;
+ while (removeSize > 0) {
+ int key = rnd.nextInt(baseSize);
+ boolean found = goodMap.containsKey(key);
+ assertEquals(found, map.containsKey(key));
+ assertEquals(goodMap.remove(key), map.remove(key));
+ if (found) {
+ --removeSize;
+ }
+ }
+
+ // Now gotta write some code to compare the final maps, as equals() won't work.
+ assertEquals(goodMap.size(), map.size());
+ Integer[] goodKeys = goodMap.keySet().toArray(new Integer[goodMap.size()]);
+ Arrays.sort(goodKeys);
+ int [] keys = map.keys();
+ Arrays.sort(keys);
+ for (int i = 0; i < goodKeys.length; ++i) {
+ assertEquals((int) goodKeys[i], keys[i]);
+ }
+
+ // Finally drain the map.
+ for (int key : map.keys()) {
+ assertEquals(goodMap.remove(key), map.remove(key));
+ }
+ assertTrue(map.isEmpty());
+ }
}
| train | train | 2014-07-19T14:42:11 | 2014-07-15T23:01:40Z | opinali | val |
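The fix in the patch above replaces double hashing with REMOVED tombstones by linear probing with compacting deletion (Knuth, Section 6.4, Algorithm R — the same scheme the patch comments cite from the JDK's `IdentityHashMap`). A minimal, hedged sketch of that scheme: fixed capacity, no rehashing or load-factor handling, and illustrative names rather than Netty's actual API. It assumes the table is never completely full, as the real map guarantees via its load factor.

```java
final class IntMapSketch {
    private final int[] keys;
    private final Object[] values; // null marks an empty slot

    IntMapSketch(int capacity) {
        keys = new int[capacity];
        values = new Object[capacity];
    }

    private int hashIndex(int key) { return Math.abs(key % keys.length); }
    private int probeNext(int i) { return i == keys.length - 1 ? 0 : i + 1; }

    Object put(int key, Object value) {
        int i = hashIndex(key);
        for (;;) { // terminates because at least one slot stays empty
            if (values[i] == null) {       // empty slot: insert here
                keys[i] = key;
                values[i] = value;
                return null;
            } else if (keys[i] == key) {   // existing key: replace value
                Object prev = values[i];
                values[i] = value;
                return prev;
            }
            i = probeNext(i);              // conflict: keep probing
        }
    }

    Object get(int key) {
        int i = hashIndex(key);
        while (values[i] != null) {
            if (keys[i] == key) {
                return values[i];
            }
            i = probeNext(i);
        }
        return null; // hit an empty slot: key is absent
    }

    Object remove(int key) {
        int i = hashIndex(key);
        while (values[i] != null && keys[i] != key) {
            i = probeNext(i);
        }
        if (values[i] == null) {
            return null;
        }
        Object prev = values[i];
        values[i] = null;
        // Compact the probe chain so later lookups still find entries that
        // were displaced past the hole by earlier conflicts.
        int nextFree = i;
        for (int j = probeNext(i); values[j] != null; j = probeNext(j)) {
            int bucket = hashIndex(keys[j]);
            if ((j < bucket && (bucket <= nextFree || nextFree <= j))
                    || (bucket <= nextFree && nextFree <= j)) {
                keys[nextFree] = keys[j];
                values[nextFree] = values[j];
                values[j] = null;
                nextFree = j;
            }
        }
        return prev;
    }
}
```

The compaction condition mirrors `removeAt()` in the patch: an entry is shifted back only when its home bucket lies outside the cyclic interval between the hole and its current slot, which is what preserves reachability without tombstones.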
netty/netty/2694_2701 | netty/netty | netty/netty/2694 | netty/netty/2701 | [
"timestamp(timedelta=38.0, similarity=0.8639139103559901)"
] | 7e362277b93bbf4d59fd80485f3fe534a3ed8bba | 1e7af3d3d002abb0e74baa9f06aea8e9c9984e5b | [
"@nmittler can you take care ?\n",
"Sure thing ... created PR #2701 \n",
"@Scottmitch1 FYI, I think you only need to follow the netty commit message rules for pull requests, not issues. When I first looked at this I thought it was a PR :)\n",
"@nmittler - That makes sense because I also confused @normanmaure... | [] | 2014-07-23T16:12:36Z | [] | HTTP/2 Codec remove compressed from method signatures | **Motivation**
HTTP/2 Draft-13 removed the explicit per frame gzip compression support present in previous drafts. The per frame compression concept was partially removed from the code base as part of updating to Draft-13. The concept still exists in some classes and method signatures related to the outbound message flow code path.
**Modifications**
1. Removing the `compressed` member variable from the `io.netty.handler.codec.http2$Frame` class.
2. Removing the `compressed` boolean variable from method signatures which relate to Draft-12.
**Result**
Removal of legacy draft concepts from code base.
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/AbstractHttp2ConnectionHandler.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowController.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DelegatingHttp2ConnectionHandler.java",
"codec-http2/src/main/... | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/AbstractHttp2ConnectionHandler.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowController.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DelegatingHttp2ConnectionHandler.java",
"codec-http2/src/main/... | [
"codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowControllerTest.java",
"codec-http2/src/test/java/io/netty/handler/codec/http2/DelegatingHttp2ConnectionHandlerTest.java",
"codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionRoundtripTest.java"
] | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/AbstractHttp2ConnectionHandler.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/AbstractHttp2ConnectionHandler.java
index 92f43b5d65b..fba729c9f5a 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/AbstractHttp2ConnectionHandler.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/AbstractHttp2ConnectionHandler.java
@@ -338,7 +338,7 @@ protected final int nextStreamId() {
protected ChannelFuture writeData(final ChannelHandlerContext ctx,
final ChannelPromise promise, int streamId, final ByteBuf data, int padding,
- boolean endStream, boolean endSegment, boolean compressed) {
+ boolean endStream, boolean endSegment) {
try {
if (connection.isGoAway()) {
throw protocolError("Sending data after connection going away.");
@@ -349,7 +349,7 @@ protected ChannelFuture writeData(final ChannelHandlerContext ctx,
// Hand control of the frame to the flow controller.
outboundFlow.sendFlowControlled(streamId, data, padding, endStream, endSegment,
- compressed, new FlowControlWriter(ctx, data, promise));
+ new FlowControlWriter(ctx, data, promise));
return promise;
} catch (Http2Exception e) {
@@ -1094,7 +1094,7 @@ private final class FlowControlWriter implements Http2OutboundFlowController.Fra
@Override
public void writeFrame(int streamId, ByteBuf data, int padding,
- boolean endStream, boolean endSegment, boolean compressed) {
+ boolean endStream, boolean endSegment) {
if (promise.isDone()) {
// Most likely the write already failed. Just release the
// buffer.
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowController.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowController.java
index c7caa4e70f5..b731c5817aa 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowController.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowController.java
@@ -149,10 +149,10 @@ public void updateOutboundWindowSize(int streamId, int delta) throws Http2Except
@Override
public void sendFlowControlled(int streamId, ByteBuf data, int padding, boolean endStream,
- boolean endSegment, boolean compressed, FrameWriter frameWriter) throws Http2Exception {
+ boolean endSegment, FrameWriter frameWriter) throws Http2Exception {
OutboundFlowState state = stateOrFail(streamId);
OutboundFlowState.Frame frame =
- state.newFrame(data, padding, endStream, endSegment, compressed, frameWriter);
+ state.newFrame(data, padding, endStream, endSegment, frameWriter);
int dataLength = data.readableBytes();
if (state.writableWindow() >= dataLength) {
@@ -442,8 +442,8 @@ int unallocatedPriorityBytes() {
* Creates a new frame with the given values but does not add it to the pending queue.
*/
Frame newFrame(ByteBuf data, int padding, boolean endStream, boolean endSegment,
- boolean compressed, FrameWriter writer) {
- return new Frame(data, padding, endStream, endSegment, compressed, writer);
+ FrameWriter writer) {
+ return new Frame(data, padding, endStream, endSegment, writer);
}
/**
@@ -530,17 +530,15 @@ private final class Frame {
private final int padding;
private final boolean endStream;
private final boolean endSegment;
- private final boolean compressed;
private final FrameWriter writer;
private boolean enqueued;
Frame(ByteBuf data, int padding, boolean endStream, boolean endSegment,
- boolean compressed, FrameWriter writer) {
+ FrameWriter writer) {
this.data = data;
this.padding = padding;
this.endStream = endStream;
this.endSegment = endSegment;
- this.compressed = compressed;
this.writer = writer;
}
@@ -580,7 +578,7 @@ void write() throws Http2Exception {
int dataLength = data.readableBytes();
connectionState().incrementStreamWindow(-dataLength);
incrementStreamWindow(-dataLength);
- writer.writeFrame(stream.id(), data, padding, endStream, endSegment, compressed);
+ writer.writeFrame(stream.id(), data, padding, endStream, endSegment);
decrementPendingBytes(dataLength);
}
@@ -606,7 +604,7 @@ void writeError(Http2Exception cause) {
Frame split(int maxBytes) {
// TODO: Should padding be included in the chunks or only the last frame?
maxBytes = min(maxBytes, data.readableBytes());
- Frame frame = new Frame(data.readSlice(maxBytes).retain(), 0, false, false, compressed, writer);
+ Frame frame = new Frame(data.readSlice(maxBytes).retain(), 0, false, false, writer);
decrementPendingBytes(maxBytes);
return frame;
}
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DelegatingHttp2ConnectionHandler.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DelegatingHttp2ConnectionHandler.java
index 433214c47af..a66f4b797ae 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DelegatingHttp2ConnectionHandler.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DelegatingHttp2ConnectionHandler.java
@@ -53,9 +53,8 @@ public DelegatingHttp2ConnectionHandler(Http2Connection connection, Http2FrameOb
@Override
public ChannelFuture writeData(ChannelHandlerContext ctx, ChannelPromise promise, int streamId,
- ByteBuf data, int padding, boolean endStream, boolean endSegment, boolean compressed) {
- return super.writeData(ctx, promise, streamId, data, padding, endStream, endSegment,
- compressed);
+ ByteBuf data, int padding, boolean endStream, boolean endSegment) {
+ return super.writeData(ctx, promise, streamId, data, padding, endStream, endSegment);
}
@Override
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2OutboundFlowController.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2OutboundFlowController.java
index 57f3cc70c9c..4573e1eb0af 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2OutboundFlowController.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2OutboundFlowController.java
@@ -31,7 +31,7 @@ interface FrameWriter {
* Writes a single data frame to the remote endpoint.
*/
void writeFrame(int streamId, ByteBuf data, int padding, boolean endStream,
- boolean endSegment, boolean compressed);
+ boolean endSegment);
/**
* Called if an error occurred before the write could take place. Sets the failure on the
@@ -80,10 +80,9 @@ void writeFrame(int streamId, ByteBuf data, int padding, boolean endStream,
* @param padding the number of bytes of padding to be added to the frame.
* @param endStream indicates whether this frames is to be the last sent on this stream.
* @param endSegment indicates whether this is to be the last frame in the segment.
- * @param compressed whether the data is compressed using gzip compression.
* @param frameWriter peforms to the write of the frame to the remote endpoint.
* @throws Http2Exception thrown if a protocol-related error occurred.
*/
void sendFlowControlled(int streamId, ByteBuf data, int padding, boolean endStream,
- boolean endSegment, boolean compressed, FrameWriter frameWriter) throws Http2Exception;
+ boolean endSegment, FrameWriter frameWriter) throws Http2Exception;
}
diff --git a/example/src/main/java/io/netty/example/http2/server/HelloWorldHttp2Handler.java b/example/src/main/java/io/netty/example/http2/server/HelloWorldHttp2Handler.java
index 3f60cb6f517..0f3aa522045 100644
--- a/example/src/main/java/io/netty/example/http2/server/HelloWorldHttp2Handler.java
+++ b/example/src/main/java/io/netty/example/http2/server/HelloWorldHttp2Handler.java
@@ -112,6 +112,6 @@ private void sendResponse(ChannelHandlerContext ctx, int streamId, ByteBuf paylo
Http2Headers headers = DefaultHttp2Headers.newBuilder().status("200").build();
writeHeaders(ctx(), ctx().newPromise(), streamId, headers, 0, false, false);
- writeData(ctx(), ctx().newPromise(), streamId, payload, 0, true, true, false);
+ writeData(ctx(), ctx().newPromise(), streamId, payload, 0, true, true);
}
}
| diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowControllerTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowControllerTest.java
index 5221fa99520..e96c372ee7c 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowControllerTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowControllerTest.java
@@ -542,22 +542,21 @@ public void samePriorityShouldWriteEqualData() throws Http2Exception {
}
private void send(int streamId, ByteBuf data) throws Http2Exception {
- controller.sendFlowControlled(streamId, data, 0, false, false, false, frameWriter);
+ controller.sendFlowControlled(streamId, data, 0, false, false, frameWriter);
}
private void verifyWrite(int streamId, ByteBuf data) {
- verify(frameWriter).writeFrame(eq(streamId), eq(data), eq(0), eq(false), eq(false),
- eq(false));
+ verify(frameWriter).writeFrame(eq(streamId), eq(data), eq(0), eq(false), eq(false));
}
private void verifyNoWrite(int streamId) {
verify(frameWriter, never()).writeFrame(eq(streamId), any(ByteBuf.class), anyInt(),
- anyBoolean(), anyBoolean(), anyBoolean());
+ anyBoolean(), anyBoolean());
}
private void captureWrite(int streamId, ArgumentCaptor<ByteBuf> captor, boolean endStream) {
verify(frameWriter).writeFrame(eq(streamId), captor.capture(), eq(0), eq(endStream),
- eq(false), eq(false));
+ eq(false));
}
private void setPriority(int stream, int parent, int weight, boolean exclusive)
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DelegatingHttp2ConnectionHandlerTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DelegatingHttp2ConnectionHandlerTest.java
index b92ba8ade9d..920b4040436 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DelegatingHttp2ConnectionHandlerTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DelegatingHttp2ConnectionHandlerTest.java
@@ -442,15 +442,15 @@ public void goAwayShouldReadShouldUpdateConnectionState() throws Exception {
@Test
public void dataWriteAfterGoAwayShouldFail() throws Exception {
when(connection.isGoAway()).thenReturn(true);
- ChannelFuture future = handler.writeData(ctx, promise, STREAM_ID, dummyData(), 0, false, false, false);
+ ChannelFuture future = handler.writeData(ctx, promise, STREAM_ID, dummyData(), 0, false, false);
assertTrue(future.awaitUninterruptibly().cause() instanceof Http2Exception);
}
@Test
public void dataWriteShouldSucceed() throws Exception {
- handler.writeData(ctx, promise, STREAM_ID, dummyData(), 0, false, false, false);
+ handler.writeData(ctx, promise, STREAM_ID, dummyData(), 0, false, false);
verify(outboundFlow).sendFlowControlled(eq(STREAM_ID), eq(dummyData()), eq(0), eq(false),
- eq(false), eq(false), any(Http2OutboundFlowController.FrameWriter.class));
+ eq(false), any(Http2OutboundFlowController.FrameWriter.class));
}
@Test
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionRoundtripTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionRoundtripTest.java
index 7ffd9e5ef99..098495da8d0 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionRoundtripTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionRoundtripTest.java
@@ -121,7 +121,7 @@ public void run() {
http2Client.writePing(ctx(), newPromise(), Unpooled.copiedBuffer(pingMsg.getBytes()));
http2Client.writeData(
ctx(), newPromise(), nextStream,
- Unpooled.copiedBuffer(text.getBytes()), 0, true, true, false);
+ Unpooled.copiedBuffer(text.getBytes()), 0, true, true);
}
}
});
| train | train | 2014-07-22T22:32:01 | 2014-07-22T20:40:40Z | Scottmitch | val |
netty/netty/2718_2725 | netty/netty | netty/netty/2718 | netty/netty/2725 | [
"timestamp(timedelta=17.0, similarity=0.8508248784792203)"
] | 7791fbf0e2c91616c51a4ddc7a0e7e79788df441 | 4986b7600e332e836b89a7f911da7bbef42f3a46 | [
"It's a bug. Key password must be respected. I would be more than happy if you are coming up with a patch!\n",
"And, here's the ICLA form: https://docs.google.com/spreadsheet/viewform?formkey=dHBjc1YzdWhsZERUQnhlSklsbG1KT1E6MQ\n",
"Alright, I filled in the ICLA form and will do my best to help.\n"
] | [
"Why not make this class final and add a private constructor? Seems like it only contains a static method anyway.\n",
"Add empty line \n",
"Also I think we can make it package private\n",
"After thinking a bit more about it maybe we could also just merge it into `JdkSslServerContext`. Or is there any reasons ... | 2014-08-02T10:47:22Z | [
"defect"
] | JdkSslContext ignores key password | I'm using **5.0.0.Alpha2-SNAPSHOT** with java 1.7.0_51 on OS X 10.9.3.
When `JdkSslContext` is initialized, it creates a `java.security.spec.PKCS8EncodedKeySpec` from the `byte[]` read by `PemReader`, which is passed to a `java.security.KeyFactory` in order to obtain a `java.security.PrivateKey`. This only works if the contents of the key file are **not encrypted**. Otherwise this fails with an exception like `java.security.InvalidKeyException: IOException : DER input, Integer tag error`.
I'd expect that if a password is passed, the contents of the key file would be considered encrypted and therefore have to be decrypted first ([see this answer](http://stackoverflow.com/a/3985508)).
Unless this is somehow intended, I'd try to come up with a patch.
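To make the expected flow concrete, here is a minimal, self-contained sketch (not Netty code; the class name and the PBE algorithm choice are illustrative) of the decrypt-then-parse approach the report asks for: parse the encrypted bytes as an `EncryptedPrivateKeyInfo`, derive a PBE key from the password, and only then hand a plain `PKCS8EncodedKeySpec` to `KeyFactory`. To stay runnable without a key file on disk, it first generates and encrypts a throwaway RSA key:

```java
import javax.crypto.Cipher;
import javax.crypto.EncryptedPrivateKeyInfo;
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.PBEParameterSpec;
import java.security.KeyFactory;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.SecureRandom;
import java.security.spec.PKCS8EncodedKeySpec;
import java.util.Arrays;

public class EncryptedPkcs8Demo {
    public static void main(String[] args) throws Exception {
        // Generate a throwaway RSA key so the example needs no key file.
        KeyPair kp = KeyPairGenerator.getInstance("RSA").generateKeyPair();
        byte[] pkcs8 = kp.getPrivate().getEncoded();

        // Encrypt the PKCS#8 bytes with a password-based cipher, roughly as
        // a tool like `openssl pkcs8` would when writing an encrypted key.
        String alg = "PBEWithMD5AndDES";
        char[] password = "12345".toCharArray();
        SecretKey pbeKey = SecretKeyFactory.getInstance(alg)
                .generateSecret(new PBEKeySpec(password));
        byte[] salt = new byte[8];
        new SecureRandom().nextBytes(salt);
        Cipher enc = Cipher.getInstance(alg);
        enc.init(Cipher.ENCRYPT_MODE, pbeKey, new PBEParameterSpec(salt, 2048));
        byte[] encrypted = new EncryptedPrivateKeyInfo(
                enc.getParameters(), enc.doFinal(pkcs8)).getEncoded();

        // Decrypt: parse the EncryptedPrivateKeyInfo structure, derive the
        // PBE key from the password, and recover a plain PKCS8EncodedKeySpec.
        EncryptedPrivateKeyInfo info = new EncryptedPrivateKeyInfo(encrypted);
        SecretKey decKey = SecretKeyFactory.getInstance(info.getAlgName())
                .generateSecret(new PBEKeySpec(password));
        Cipher dec = Cipher.getInstance(info.getAlgName());
        dec.init(Cipher.DECRYPT_MODE, decKey, info.getAlgParameters());
        PKCS8EncodedKeySpec spec = info.getKeySpec(dec);

        // Only now is the spec safe to pass to a KeyFactory.
        PrivateKey recovered = KeyFactory.getInstance("RSA").generatePrivate(spec);
        System.out.println(Arrays.equals(spec.getEncoded(), pkcs8));
    }
}
```

Passing the encrypted bytes straight to `new PKCS8EncodedKeySpec(...)`, as the current code does, is what triggers the `DER input, Integer tag error`.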
| [
"handler/src/main/java/io/netty/handler/ssl/JdkSslServerContext.java"
] | [
"handler/src/main/java/io/netty/handler/ssl/JdkSslServerContext.java"
] | [
"handler/src/test/java/io/netty/handler/ssl/JdkSslServerContextTest.java",
"handler/src/test/resources/io/netty/handler/ssl/netty_test",
"handler/src/test/resources/io/netty/handler/ssl/netty_test.crt",
"handler/src/test/resources/io/netty/handler/ssl/netty_test_unencrypted"
] | diff --git a/handler/src/main/java/io/netty/handler/ssl/JdkSslServerContext.java b/handler/src/main/java/io/netty/handler/ssl/JdkSslServerContext.java
index 6fcd2445215..92ba7fa74b8 100644
--- a/handler/src/main/java/io/netty/handler/ssl/JdkSslServerContext.java
+++ b/handler/src/main/java/io/netty/handler/ssl/JdkSslServerContext.java
@@ -13,19 +13,15 @@
* License for the specific language governing permissions and limitations
* under the License.
*/
-
package io.netty.handler.ssl;
-import io.netty.buffer.ByteBuf;
-import io.netty.buffer.ByteBufInputStream;
-
-import javax.net.ssl.KeyManagerFactory;
-import javax.net.ssl.SSLContext;
-import javax.net.ssl.SSLException;
-import javax.net.ssl.SSLSessionContext;
import java.io.File;
+import java.io.IOException;
+import java.security.InvalidAlgorithmParameterException;
+import java.security.InvalidKeyException;
import java.security.KeyFactory;
import java.security.KeyStore;
+import java.security.NoSuchAlgorithmException;
import java.security.PrivateKey;
import java.security.Security;
import java.security.cert.Certificate;
@@ -36,6 +32,20 @@
import java.util.Collections;
import java.util.List;
+import javax.crypto.Cipher;
+import javax.crypto.EncryptedPrivateKeyInfo;
+import javax.crypto.NoSuchPaddingException;
+import javax.crypto.SecretKey;
+import javax.crypto.SecretKeyFactory;
+import javax.crypto.spec.PBEKeySpec;
+import javax.net.ssl.KeyManagerFactory;
+import javax.net.ssl.SSLContext;
+import javax.net.ssl.SSLException;
+import javax.net.ssl.SSLSessionContext;
+
+import io.netty.buffer.ByteBuf;
+import io.netty.buffer.ByteBufInputStream;
+
/**
* A server-side {@link SslContext} which uses JDK's SSL/TLS implementation.
*/
@@ -59,8 +69,7 @@ public JdkSslServerContext(File certChainFile, File keyFile) throws SSLException
*
* @param certChainFile an X.509 certificate chain file in PEM format
* @param keyFile a PKCS#8 private key file in PEM format
- * @param keyPassword the password of the {@code keyFile}.
- * {@code null} if it's not password-protected.
+ * @param keyPassword the password of the {@code keyFile}. {@code null} if it's not password-protected.
*/
public JdkSslServerContext(File certChainFile, File keyFile, String keyPassword) throws SSLException {
this(certChainFile, keyFile, keyPassword, null, null, 0, 0);
@@ -71,16 +80,15 @@ public JdkSslServerContext(File certChainFile, File keyFile, String keyPassword)
*
* @param certChainFile an X.509 certificate chain file in PEM format
* @param keyFile a PKCS#8 private key file in PEM format
- * @param keyPassword the password of the {@code keyFile}.
- * {@code null} if it's not password-protected.
- * @param ciphers the cipher suites to enable, in the order of preference.
- * {@code null} to use the default cipher suites.
- * @param nextProtocols the application layer protocols to accept, in the order of preference.
- * {@code null} to disable TLS NPN/ALPN extension.
- * @param sessionCacheSize the size of the cache used for storing SSL session objects.
- * {@code 0} to use the default value.
- * @param sessionTimeout the timeout for the cached SSL session objects, in seconds.
- * {@code 0} to use the default value.
+ * @param keyPassword the password of the {@code keyFile}. {@code null} if it's not password-protected.
+ * @param ciphers the cipher suites to enable, in the order of preference. {@code null} to use the default cipher
+ * suites.
+ * @param nextProtocols the application layer protocols to accept, in the order of preference. {@code null} to
+ * disable TLS NPN/ALPN extension.
+ * @param sessionCacheSize the size of the cache used for storing SSL session objects. {@code 0} to use the default
+ * value.
+ * @param sessionTimeout the timeout for the cached SSL session objects, in seconds. {@code 0} to use the default
+ * value.
*/
public JdkSslServerContext(
File certChainFile, File keyFile, String keyPassword,
@@ -106,7 +114,7 @@ public JdkSslServerContext(
}
List<String> list = new ArrayList<String>();
- for (String p: nextProtocols) {
+ for (String p : nextProtocols) {
if (p == null) {
break;
}
@@ -133,7 +141,9 @@ public JdkSslServerContext(
ByteBuf encodedKeyBuf = PemReader.readPrivateKey(keyFile);
byte[] encodedKey = new byte[encodedKeyBuf.readableBytes()];
encodedKeyBuf.readBytes(encodedKey).release();
- PKCS8EncodedKeySpec encodedKeySpec = new PKCS8EncodedKeySpec(encodedKey);
+
+ char[] keyPasswordChars = keyPassword.toCharArray();
+ PKCS8EncodedKeySpec encodedKeySpec = generateKeySpec(keyPasswordChars, encodedKey);
PrivateKey key;
try {
@@ -145,20 +155,20 @@ public JdkSslServerContext(
List<Certificate> certChain = new ArrayList<Certificate>();
ByteBuf[] certs = PemReader.readCertificates(certChainFile);
try {
- for (ByteBuf buf: certs) {
+ for (ByteBuf buf : certs) {
certChain.add(cf.generateCertificate(new ByteBufInputStream(buf)));
}
} finally {
- for (ByteBuf buf: certs) {
+ for (ByteBuf buf : certs) {
buf.release();
}
}
- ks.setKeyEntry("key", key, keyPassword.toCharArray(), certChain.toArray(new Certificate[certChain.size()]));
+ ks.setKeyEntry("key", key, keyPasswordChars, certChain.toArray(new Certificate[certChain.size()]));
// Set up key manager factory to use our key store
KeyManagerFactory kmf = KeyManagerFactory.getInstance(algorithm);
- kmf.init(ks, keyPassword.toCharArray());
+ kmf.init(ks, keyPasswordChars);
// Initialize the SSLContext to work with our key managers.
ctx = SSLContext.getInstance(PROTOCOL);
@@ -190,4 +200,36 @@ public List<String> nextProtocols() {
public SSLContext context() {
return ctx;
}
+
+ /**
+ * Generates a key specification for an (encrypted) private key.
+ *
+ * @param password characters, if {@code null} or empty an unencrypted key is assumed
+ * @param key bytes of the DER encoded private key
+ * @return a key specification
+ * @throws IOException if parsing {@code key} fails
+ * @throws NoSuchAlgorithmException if the algorithm used to encrypt {@code key} is unknown
+ * @throws NoSuchPaddingException if the padding scheme specified in the decryption algorithm is unknown
+ * @throws InvalidKeySpecException if the decryption key based on {@code password} cannot be generated
+ * @throws InvalidKeyException if the decryption key based on {@code password} cannot be used to decrypt {@code key}
+ * @throws InvalidAlgorithmParameterException if decryption algorithm parameters are somehow faulty
+ */
+ private static PKCS8EncodedKeySpec generateKeySpec(char[] password, byte[] key) throws IOException,
+ NoSuchAlgorithmException, NoSuchPaddingException, InvalidKeySpecException, InvalidKeyException,
+ InvalidAlgorithmParameterException {
+
+ if (password == null || password.length == 0) {
+ return new PKCS8EncodedKeySpec(key);
+ }
+
+ EncryptedPrivateKeyInfo encryptedPrivateKeyInfo = new EncryptedPrivateKeyInfo(key);
+ SecretKeyFactory keyFactory = SecretKeyFactory.getInstance(encryptedPrivateKeyInfo.getAlgName());
+ PBEKeySpec pbeKeySpec = new PBEKeySpec(password);
+ SecretKey pbeKey = keyFactory.generateSecret(pbeKeySpec);
+
+ Cipher cipher = Cipher.getInstance(encryptedPrivateKeyInfo.getAlgName());
+ cipher.init(Cipher.DECRYPT_MODE, pbeKey, encryptedPrivateKeyInfo.getAlgParameters());
+
+ return encryptedPrivateKeyInfo.getKeySpec(cipher);
+ }
}
| diff --git a/handler/src/test/java/io/netty/handler/ssl/JdkSslServerContextTest.java b/handler/src/test/java/io/netty/handler/ssl/JdkSslServerContextTest.java
new file mode 100644
index 00000000000..eb0863002ad
--- /dev/null
+++ b/handler/src/test/java/io/netty/handler/ssl/JdkSslServerContextTest.java
@@ -0,0 +1,47 @@
+/*
+ * Copyright 2014 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License,
+ * version 2.0 (the "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.netty.handler.ssl;
+
+import java.io.File;
+
+import javax.net.ssl.SSLException;
+
+import org.junit.Test;
+
+/**
+ * Tests for JDK SSL Server Context.
+ */
+public class JdkSslServerContextTest {
+
+ @Test
+ public void testJdkSslServerWithEncryptedPrivateKey() throws SSLException {
+ File keyFile = new File(getClass().getResource("netty_test").getFile());
+ File crtFile = new File(getClass().getResource("netty_test.crt").getFile());
+
+ new JdkSslServerContext(crtFile, keyFile, "12345");
+ }
+
+ @Test
+ public void testJdkSslServerWithUnencryptedPrivateKey() throws SSLException {
+ File keyFile = new File(getClass().getResource("netty_test_unencrypted").getFile());
+ File crtFile = new File(getClass().getResource("netty_test.crt").getFile());
+
+ new JdkSslServerContext(crtFile, keyFile, "");
+ new JdkSslServerContext(crtFile, keyFile, null);
+ }
+
+}
diff --git a/handler/src/test/resources/io/netty/handler/ssl/netty_test b/handler/src/test/resources/io/netty/handler/ssl/netty_test
new file mode 100644
index 00000000000..58d181e1161
--- /dev/null
+++ b/handler/src/test/resources/io/netty/handler/ssl/netty_test
@@ -0,0 +1,29 @@
+-----BEGIN ENCRYPTED PRIVATE KEY-----
+MIIE9jAoBgoqhkiG9w0BDAEDMBoEFDBlaUwB8TQ9ImbApCmAyVRTTX+kAgIIAASC
+BMhC8QFNyn0VbVp7I+R9Yvmr+Ksl0xZshGg3zaUN8/HRblNSS3gPiP673rmnhcU3
+PfSNFR9hOrTqdtd5i6Qq4HznECs81KBlqRNB9ihgy++ByFkf6GTzdfBA6zJInhNx
+qSWjUwpFtV4or1w/N23bTcpdGmjfdCSFBMQdbkIDgT7GaWxd3mCLxSbfVzF64tev
+x+V22nA/TR0VWnG+aj7aVbReK6VpepiCX7ZmQ5KehXAeB0SDrgT89kcz2VIfDxvE
+hkCymNTcJY/ETdPfTSiR+DSZvVJMgVmfk7j1toZZSnoMwl4IhlXmIPmDOUE465l3
+sNWLygkNKymTmMI5FTT1hChAIdsmeVTfDmVzNPK4HQi5gfEnTCy0uxj9U3HCZWr1
+Zlzmw7/430TRqNYSEJ/XkhFaV5V+6LfeZOyuwf2VJAs+CwNo+UYzEQqkW11JMqhA
+i9fz8bCNoy4/dyWbE/wEK8UPGif1rzCpoodBYeWTt0QtHcIokE3ylXWyTTarz7jV
+u9Rnbq4HAXYYEwPjLmWFQ6NeD/rx/t44oEAyekxS+ZPIHNTVXRLBH5Tl/LDkpK15
+x0FoIZ0vrDiFbmtHCq/TeDyFtudSbmihnn0Of6PtXKZJpXgEADQBnak/P4IE39/d
+1hWd3H635goC6OkqHv9IAAyLlCNZCOVqC5Wa8TvyZdaKi5A2mZfGrpxPrUQDlnqN
+8d3xlysNCaRH1hSMw4hGHu0xxGJaK4DQtklxfZB7IMMw5MkQh6Rim5TOXfopmzmK
+PISJge1atiHbVIBP6sr3Egik3h6v0j7xXVmwj3UUQRaSBznZ43ShlYieLnin9sh8
+x/gLyvQrtJRvScN6skgrXFKVH3Jojxut9if64jjLo4C61UgNrvuka05treRTI+jT
+hHB3GLy7hwSHnbsOvwvYbG3WgyePPq6jIM+LV4Vm3fPX6NPNI/jZMebROGwjTL0C
+2403yvgeIpEOQyZpKsDBqAwgKB91Na53K05qGSbr8AgcZvgFflJdLzai+5Cg7hNg
+YTEff0NKPeYnk4u3xQ8EqxI2jwdqfgzd0RcPcx60CHRBTULaKOU2sAYTSpwQmApj
++TnJNcQnWRAEcZ35b/b+oGlVH/BUmvjSdu2qvvU3g4GoHL7MuVGvzk0Cgo1Esktt
+S6gO/pTQPaKGJ1ztxoHu2zzi7/URaus3sqI5qV9krWMSa35BMG21Eik/y9rou6LC
+yT0EtMLOCxSrfM1I26XTU/7qPIEJlVZg0CJ39niZ7EEm1Hef0cmT8Aq9t5cRTyvR
+BqbqBCJpcsgeIZUMH6RJ1zv616eJvY7wjd13Sl0Tbj9+nNS482D9PIlaXSD8UySh
+mZ0bMPhCeyOsmRmz2qT1X+Zct8XtdXc/NPKBA6rnOtH8vJAHn7S120le5XIn5t9l
+rDiO1Hozhb+0xcTk+SNc/vIORA6KrBoZrNpJpmyL3BzRp+/VLbR+/S3ikTDkYj7J
+sktK2ap6vK7u50Jnrt9C/wynVACzGx1tlDVxiVerDmwjfQWL08qCXHlouEdjh9dD
+L5XyVlT2FxEXXLRgKGHxFaSQw3Fzzug/o4SgizbNjKffJU5xQlC0aq3WX5+/l3Ic
+LWTalgdli3edsR/9RGuu8EsZ11dmNh3csGs=
+-----END ENCRYPTED PRIVATE KEY-----
diff --git a/handler/src/test/resources/io/netty/handler/ssl/netty_test.crt b/handler/src/test/resources/io/netty/handler/ssl/netty_test.crt
new file mode 100644
index 00000000000..7d3437e13e2
--- /dev/null
+++ b/handler/src/test/resources/io/netty/handler/ssl/netty_test.crt
@@ -0,0 +1,19 @@
+-----BEGIN CERTIFICATE-----
+MIIC/jCCAeagAwIBAgIIIMONxElm0AIwDQYJKoZIhvcNAQELBQAwPjE8MDoGA1UE
+AwwzZThhYzAyZmEwZDY1YTg0MjE5MDE2MDQ1ZGI4YjA1YzQ4NWI0ZWNkZi5uZXR0
+eS50ZXN0MCAXDTEzMDgwMjA3NTEzNloYDzk5OTkxMjMxMjM1OTU5WjA+MTwwOgYD
+VQQDDDNlOGFjMDJmYTBkNjVhODQyMTkwMTYwNDVkYjhiMDVjNDg1YjRlY2RmLm5l
+dHR5LnRlc3QwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDb+HBO3C0U
+RBKvDUgJHbhIlBye8X/cbNH3lDq3XOOFBz7L4XZKLDIXS+FeQqSAUMo2otmU+Vkj
+0KorshMjbUXfE1KkTijTMJlaga2M2xVVt21fRIkJNWbIL0dWFLWyRq7OXdygyFkI
+iW9b2/LYaePBgET22kbtHSCAEj+BlSf265+1rNxyAXBGGGccCKzEbcqASBKHOgVp
+6pLqlQAfuSy6g/OzGzces3zXRrGu1N3pBIzAIwCW429n52ZlYfYR0nr+REKDnRrP
+IIDsWASmEHhBezTD+v0qCJRyLz2usFgWY+7agUJE2yHHI2mTu2RAFngBilJXlMCt
+VwT0xGuQxkbHAgMBAAEwDQYJKoZIhvcNAQELBQADggEBAEv8N7Xm8qaY2FgrOc6P
+a1GTgA+AOb3aU33TGwAR86f+nLf6BSPaohcQfOeJid7FkFuYInuXl+oqs+RqM/j8
+R0E5BuGYY2wOKpL/PbFi1yf/Kyvft7KVh8e1IUUec/i1DdYTDB0lNWvXXxjfMKGL
+ct3GMbEHKvLfHx42Iwz/+fva6LUrO4u2TDfv0ycHuR7UZEuC1DJ4xtFhbpq/QRAj
+CyfNx3cDc7L2EtJWnCmivTFA9l8MF1ZPMDSVd4ecQ7B0xZIFQ5cSSFt7WGaJCsGM
+zYkU4Fp4IykQcWxdlNX7wJZRwQ2TZJFFglpTiFZdeq6I6Ad9An1Encpz5W8UJ4tv
+hmw=
+-----END CERTIFICATE-----
diff --git a/handler/src/test/resources/io/netty/handler/ssl/netty_test_unencrypted b/handler/src/test/resources/io/netty/handler/ssl/netty_test_unencrypted
new file mode 100644
index 00000000000..608e7f4da0f
--- /dev/null
+++ b/handler/src/test/resources/io/netty/handler/ssl/netty_test_unencrypted
@@ -0,0 +1,24 @@
+-----BEGIN PRIVATE KEY-----
+MIIEvwIBADANBgkqhkiG9w0BAQEFAASCBKkwggSlAgEAAoIBAQDb+HBO3C0URBKvDUgJHbhIlBye
+8X/cbNH3lDq3XOOFBz7L4XZKLDIXS+FeQqSAUMo2otmU+Vkj0KorshMjbUXfE1KkTijTMJlaga2M
+2xVVt21fRIkJNWbIL0dWFLWyRq7OXdygyFkIiW9b2/LYaePBgET22kbtHSCAEj+BlSf265+1rNxy
+AXBGGGccCKzEbcqASBKHOgVp6pLqlQAfuSy6g/OzGzces3zXRrGu1N3pBIzAIwCW429n52ZlYfYR
+0nr+REKDnRrPIIDsWASmEHhBezTD+v0qCJRyLz2usFgWY+7agUJE2yHHI2mTu2RAFngBilJXlMCt
+VwT0xGuQxkbHAgMBAAECggEBAJJdKaVfXWNptCDkLnVaYB9y5eRgfppVkhQxfiw5023Vl1QjrgjG
+hYH4zHli0IBMwXA/RZWZoFVzZ3dxoshk0iQPgGKxWvrDEJcnSCo8MGL7jPvh52jILp6uzsGZQBji
+bTgFPmOBS7ShdgZiQKD9PD2psrmqHZ1yTwjIm5cGfzQM8Y6tjm0xLBn676ecJNdS1TL10y9vmSUM
+Ofdkmeg9Z9TEK95lP2fF/NIcxCo0LF9JcHUvTuYBDnBH0XMZi0w0ZcRReMSdAZ2lLiXgBeCO53el
+2NIrtkRx+qOvLua9UfwO2h/0rs66ZeV0YuFCjv067nytyZf2zhU/QbCHRypzfrkCgYEA/facuAJs
+6MQKsNvhozoBeDRMkrZPMh8Sb0w50EqzIGz3pdms6UvCiggoMbhxKOwuYWZ689fBPGwm7x0RdwDO
+jyUuEbFnQFe+CpdHy6VK7vIQed1SwAcdTMDwCYbkJNglqHEB7qUYYTFLr8okGyWVdthUoh4IAubU
+TR3TFbGraDUCgYEA3bwJ/UNA5pHtb/nh4/dNL7/bRMwXyPZPpC5z+gjjgUMgsSRBz8+iPNTB4iSQ
+1j9zm+pnXGi35zWZcI4jvIcFusb08eS7xcZDb+7X2r2wenLNmyuTOa1812y233FicU+ah91fa9aD
+yUfTjj3GFawbgNNhMyWa3aEMV+c73t6sKosCgYEA35oQZhsMlOx2lT0jrzlVLeauPMZzeCfPbVrp
+1DDRAg2vBcFf8pCXmjyQVyaTy3oXY/585tDh/DclGIa5Z9O4CmSr6TwPMqGOW3jS58SC81sBkqqB
+Pz2EWJ3POjQgDyiYD3RgRSPrETf78azCmXw/2sGh0pMqbpOZ/MPzpDgoOLkCgYEAsdv4g09kCs75
+Dz34hRzErE2P+8JePdPdlEuyudhRbUlEOvNjWucpMvRSRSyhhUnGWUWP/V7+TRcAanmJjtsbrHOU
+3Udlm0HqrCmAubQ4kC/wXsx4Pua7Yi2RDvBrT4rT4LGgreaXNWhI+Srx7kZslUx5Bkbez3I0bXpM
+2vvwS/sCgYAducNt1KC4W7jzMWUivvuy5hQQmX/G0JHtu1pfv9cmA8agnc1I/r7xoirftuSG25Pm
+r+eP5SKbKb8ZQlp10JeBkNnk8eAG8OkQyBaECYDBadEr1/LK2LmIEjYKzKAjYQ4cX2KMtY271jjX
+WrzzXNqBdThFfMHiJE8k9xYmaLDKhQ==
+-----END PRIVATE KEY-----
| val | train | 2014-08-04T08:01:19 | 2014-07-31T22:06:37Z | schulzp | val |
netty/netty/2693_2729 | netty/netty | netty/netty/2693 | netty/netty/2729 | [
"timestamp(timedelta=62530.0, similarity=0.8478815462737131)"
] | d0e4a8583031e57bbd43700dce0e336d088a2bea | d400afd56b37efdda2254f95a787f2d790675d39 | [
"[This build](https://secure.motd.kr/jenkins/job/netty-4.0/4803/) reduces the initial capacity of `ChannelOutboundBuffer` to 4. (commit: 7e362277b93bbf4d59fd80485f3fe534a3ed8bba)\n",
"@trustin I think we can close this ?\n",
"Nope. The second item was not resolved yet.\n",
"Ah got it\n\n> On 23.07.2014 at 22:... | [
"This line should not be changed.\n",
"If this is not public, a user who extends `AbstractChannel` (and consequently `AbstractUnsafe`) will not be able to access the return value of `AbstractUnsafe.outboundBuffer()`.\n",
"Any reason this is not part of the interface?\n"
] | 2014-08-04T10:17:53Z | [
"improvement"
] | Reduce the memory consumption of ChannelOutboundBuffer | The initial capacity of a `ChannelOutboundBuffer` is 32, and it means each `ChannelOutboundBuffer` will end up with 32 entry objects as a channel lives on, even if the channel's maximum number of pending write requests is always 1.
We could do a few things to fix this:
- Reduce the initial capacity to a smaller value such as 4.
- Recycle entry objects or do something similar (we don't need thread local here obviously.)
http://stackoverflow.com/questions/24726973/netty-4-0-19-final-memory-leak-with-io-netty-channel-channeloutboundbufferentry
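On the second item, here is a sketch of what per-buffer entry recycling could look like (hypothetical names, not the actual Netty implementation): since every `ChannelOutboundBuffer` method runs on the channel's `EventLoop`, a plain free list suffices and no thread-local indirection is needed.

```java
// Hypothetical sketch of per-buffer entry recycling; class and field
// names are invented for illustration.
final class Entry {
    Object msg;
    long pendingSize;
    Entry next; // reused as the free-list link while pooled
}

final class EntryPool {
    private static final int MAX_POOLED = 4; // mirrors the reduced capacity
    private Entry head;
    private int size;

    Entry acquire(Object msg, long pendingSize) {
        Entry e = head;
        if (e == null) {
            e = new Entry();          // pool empty: allocate
        } else {
            head = e.next;            // pop the recycled entry
            size--;
        }
        e.msg = msg;
        e.pendingSize = pendingSize;
        e.next = null;
        return e;
    }

    void release(Entry e) {
        e.msg = null;                 // drop the reference so the message can be GC'd
        if (size < MAX_POOLED) {
            e.next = head;            // push onto the free list
            head = e;
            size++;
        }
    }
}

public class RecycleDemo {
    public static void main(String[] args) {
        EntryPool pool = new EntryPool();
        Entry a = pool.acquire("msg-1", 16);
        pool.release(a);
        Entry b = pool.acquire("msg-2", 32);
        System.out.println(a == b); // prints true: the recycled entry is reused
    }
}
```

`acquire` after `release` hands back the same instance, so a steady-state channel with one pending write at a time allocates a single entry instead of accumulating 32.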
| [
"transport/src/main/java/io/netty/channel/AbstractChannel.java",
"transport/src/main/java/io/netty/channel/ChannelOutboundBuffer.java"
] | [
"transport/src/main/java/io/netty/channel/AbstractChannel.java",
"transport/src/main/java/io/netty/channel/ChannelOutboundBuffer.java"
] | [
"transport/src/test/java/io/netty/channel/ChannelOutboundBufferTest.java"
] | diff --git a/transport/src/main/java/io/netty/channel/AbstractChannel.java b/transport/src/main/java/io/netty/channel/AbstractChannel.java
index 5642f226554..e270a98acb7 100644
--- a/transport/src/main/java/io/netty/channel/AbstractChannel.java
+++ b/transport/src/main/java/io/netty/channel/AbstractChannel.java
@@ -83,7 +83,7 @@ protected AbstractChannel(Channel parent) {
@Override
public boolean isWritable() {
ChannelOutboundBuffer buf = unsafe.outboundBuffer();
- return buf != null && buf.getWritable();
+ return buf != null && buf.isWritable();
}
@Override
@@ -649,7 +649,11 @@ public void write(Object msg, ChannelPromise promise) {
ReferenceCountUtil.release(msg);
return;
}
- outboundBuffer.addMessage(msg, promise);
+ int size = estimatorHandle().size(msg);
+ if (size < 0) {
+ size = 0;
+ }
+ outboundBuffer.addMessage(msg, size, promise);
}
@Override
diff --git a/transport/src/main/java/io/netty/channel/ChannelOutboundBuffer.java b/transport/src/main/java/io/netty/channel/ChannelOutboundBuffer.java
index 899ca3b289c..e0508bdb5d7 100644
--- a/transport/src/main/java/io/netty/channel/ChannelOutboundBuffer.java
+++ b/transport/src/main/java/io/netty/channel/ChannelOutboundBuffer.java
@@ -40,6 +40,8 @@
/**
* (Transport implementors only) an internal data structure used by {@link AbstractChannel} to store its pending
* outbound write requests.
+ *
+ * All the methods should only be called by the {@link EventLoop} of the {@link Channel}.
*/
public final class ChannelOutboundBuffer {
@@ -61,7 +63,7 @@ protected ByteBuffer[] initialValue() throws Exception {
}
};
- private final AbstractChannel channel;
+ private final Channel channel;
// Entry(flushedEntry) --> ... Entry(unflushedEntry) --> ... Entry(tailEntry)
//
@@ -109,12 +111,11 @@ protected ByteBuffer[] initialValue() throws Exception {
this.channel = channel;
}
- void addMessage(Object msg, ChannelPromise promise) {
- int size = channel.estimatorHandle().size(msg);
- if (size < 0) {
- size = 0;
- }
-
+ /**
+ * Add given message to this {@link ChannelOutboundBuffer}. The given {@link ChannelPromise} will be notified once
+ * the message was written.
+ */
+ public void addMessage(Object msg, int size, ChannelPromise promise) {
Entry entry = Entry.newInstance(msg, size, total(msg), promise);
if (tailEntry == null) {
flushedEntry = null;
@@ -133,7 +134,11 @@ void addMessage(Object msg, ChannelPromise promise) {
incrementPendingOutboundBytes(size);
}
- void addFlush() {
+ /**
+ * Add a flush to this {@link ChannelOutboundBuffer}. This means all previous added messages are marked as flushed
+ * and so you will be able to handle them.
+ */
+ public void addFlush() {
// There is no need to process all entries if there was already a flush before and no new messages
// where added in the meantime.
//
@@ -206,10 +211,18 @@ private static long total(Object msg) {
return -1;
}
+ /**
+ * Return the current message to write or {@code null} if nothing was flushed before and so is ready to be written.
+ */
public Object current() {
return current(true);
}
+ /**
+ * Return the current message to write or {@code null} if nothing was flushed before and so is ready to be written.
+ * If {@code true} is specified a direct {@link ByteBuf} or {@link ByteBufHolder} is prefered and
+ * so the current message may be copied into a direct buffer.
+ */
public Object current(boolean preferDirect) {
// TODO: Think of a smart way to handle ByteBufHolder messages
Entry entry = flushedEntry;
@@ -250,7 +263,7 @@ public Object current(boolean preferDirect) {
/**
* Replace the current msg with the given one.
- * The replaced msg will automatically be released
+ * {@link ReferenceCountUtil#release(Object)} will automatically be called on the replaced message.
*/
public void current(Object msg) {
Entry entry = flushedEntry;
@@ -259,6 +272,9 @@ public void current(Object msg) {
entry.msg = msg;
}
+ /**
+ * Notify the {@link ChannelPromise} of the current message about writing progress.
+ */
public void progress(long amount) {
Entry e = flushedEntry;
assert e != null;
@@ -270,6 +286,11 @@ public void progress(long amount) {
}
}
+ /**
+ * Will remove the current message, mark its {@link ChannelPromise} as success and return {@code true}. If no
+ * flushed message exists at the time this method is called it will return {@code false} to signal that no more
+ * messages are ready to be handled.
+ */
public boolean remove() {
Entry e = flushedEntry;
if (e == null) {
@@ -295,6 +316,11 @@ public boolean remove() {
return true;
}
+ /**
+ * Will remove the current message, mark its {@link ChannelPromise} as failure using the given {@link Throwable}
+ * and return {@code true}. If no flushed message exists at the time this method is called it will return
+ * {@code false} to signal that no more messages are ready to be handled.
+ */
public boolean remove(Throwable cause) {
Entry e = flushedEntry;
if (e == null) {
@@ -366,9 +392,8 @@ public void removeBytes(long writtenBytes) {
/**
* Returns an array of direct NIO buffers if the currently pending messages are made of {@link ByteBuf} only.
- * {@code null} is returned otherwise. If this method returns a non-null array, {@link #nioBufferCount()} and
- * {@link #nioBufferSize()} will return the number of NIO buffers in the returned array and the total number
- * of readable bytes of the NIO buffers respectively.
+ * {@link #nioBufferCount()} and {@link #nioBufferSize()} will return the number of NIO buffers in the returned
+ * array and the total number of readable bytes of the NIO buffers respectively.
* <p>
* Note that the returned array is reused and thus should not escape
* {@link AbstractChannel#doWrite(ChannelOutboundBuffer)}.
@@ -479,22 +504,39 @@ private static ByteBuffer[] expandNioBufferArray(ByteBuffer[] array, int neededS
return newArray;
}
+ /**
+ * Returns the number of {@link ByteBuffer} that can be written out of the {@link ByteBuffer} array that was
+ * obtained via {@link #nioBuffers()}. This method <strong>MUST</strong> be called after {@link #nioBuffers()}
+ * was called.
+ */
public int nioBufferCount() {
return nioBufferCount;
}
+ /**
+ * Returns the number of bytes that can be written out of the {@link ByteBuffer} array that was
+ * obtained via {@link #nioBuffers()}. This method <strong>MUST</strong> be called after {@link #nioBuffers()}
+ * was called.
+ */
public long nioBufferSize() {
return nioBufferSize;
}
- boolean getWritable() {
+ boolean isWritable() {
return writable != 0;
}
+ /**
+ * Returns the number of flushed messages in this {@link ChannelOutboundBuffer}.
+ */
public int size() {
return flushed;
}
+ /**
+ * Returns {@code true} if there are flushed messages in this {@link ChannelOutboundBuffer} or {@code false}
+ * otherwise.
+ */
public boolean isEmpty() {
return flushed == 0;
}
| diff --git a/transport/src/test/java/io/netty/channel/ChannelOutboundBufferTest.java b/transport/src/test/java/io/netty/channel/ChannelOutboundBufferTest.java
index 11c0f6a1415..296a0e7d131 100644
--- a/transport/src/test/java/io/netty/channel/ChannelOutboundBufferTest.java
+++ b/transport/src/test/java/io/netty/channel/ChannelOutboundBufferTest.java
@@ -51,7 +51,7 @@ public void testNioBuffersSingleBacked() {
ByteBuf buf = copiedBuffer("buf1", CharsetUtil.US_ASCII);
ByteBuffer nioBuf = buf.internalNioBuffer(0, buf.readableBytes());
- buffer.addMessage(buf, channel.voidPromise());
+ buffer.addMessage(buf, buf.readableBytes(), channel.voidPromise());
assertEquals("Should still be 0 as not flushed yet", 0, buffer.nioBufferCount());
buffer.addFlush();
ByteBuffer[] buffers = buffer.nioBuffers();
@@ -75,7 +75,7 @@ public void testNioBuffersExpand() {
ByteBuf buf = directBuffer().writeBytes("buf1".getBytes(CharsetUtil.US_ASCII));
for (int i = 0; i < 64; i++) {
- buffer.addMessage(buf.copy(), channel.voidPromise());
+ buffer.addMessage(buf.copy(), buf.readableBytes(), channel.voidPromise());
}
assertEquals("Should still be 0 as not flushed yet", 0, buffer.nioBufferCount());
buffer.addFlush();
@@ -99,7 +99,7 @@ public void testNioBuffersExpand2() {
for (int i = 0; i < 65; i++) {
comp.addComponent(buf.copy()).writerIndex(comp.writerIndex() + buf.readableBytes());
}
- buffer.addMessage(comp, channel.voidPromise());
+ buffer.addMessage(comp, comp.readableBytes(), channel.voidPromise());
assertEquals("Should still be 0 as not flushed yet", 0, buffer.nioBufferCount());
buffer.addFlush();
| train | train | 2014-08-04T19:53:39 | 2014-07-22T18:53:12Z | trustin | val |
netty/netty/2709_2729 | netty/netty | netty/netty/2709 | netty/netty/2729 | [
"timestamp(timedelta=89.0, similarity=0.893472531598197)"
] | d0e4a8583031e57bbd43700dce0e336d088a2bea | d400afd56b37efdda2254f95a787f2d790675d39 | [
"Fixed in 3f3e66c31ae3da70c36cc125ca9bcac8215390e4\n"
] | [
"This line should not be changed.\n",
"If this is not public, a user who extends `AbstractChannel` (and consequently `AbstractUnsafe`) will not be able to access the return value of `AbstractUnsafe.outboundBuffer()`.\n",
"Any reason this is not part of the interface?\n"
] | 2014-08-04T10:17:53Z | [
"defect"
] | ChannelOutboundBuffer only be usable via AbstractChannel | We expose `ChannelOutboundBuffer` in `Channel.Unsafe` but make it impossible to create an instance without the usage of `AbstractChannel`. This makes it impossible to write a `Channel` implementation that does not extend `AbstractChannel`.
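The shape of the eventual fix can be sketched in isolation. In the hypothetical `OutboundQueue` below (the class and method names are mine — this is not Netty's `ChannelOutboundBuffer`), the caller supplies the estimated message size, so the queue itself has no dependency on `AbstractChannel.estimatorHandle()` and could back any `Channel` implementation:

```java
import java.util.ArrayDeque;

// Hypothetical, simplified illustration of the decoupling: the caller
// passes the estimated message size in, so this queue never needs to
// reach back into AbstractChannel to compute it.
final class OutboundQueue {
    private final ArrayDeque<Object> messages = new ArrayDeque<Object>();
    private long pendingBytes;

    void addMessage(Object msg, int size) {
        if (size < 0) {
            size = 0; // clamp negative estimates, as the caller does in the patch
        }
        messages.addLast(msg);
        pendingBytes += size;
    }

    long pendingBytes() {
        return pendingBytes;
    }

    int size() {
        return messages.size();
    }
}
```

Passing the size in from the caller is the same trade the gold patch makes: `AbstractChannel` keeps the estimator, and the buffer only ever sees an `int`.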
| [
"transport/src/main/java/io/netty/channel/AbstractChannel.java",
"transport/src/main/java/io/netty/channel/ChannelOutboundBuffer.java"
] | [
"transport/src/main/java/io/netty/channel/AbstractChannel.java",
"transport/src/main/java/io/netty/channel/ChannelOutboundBuffer.java"
] | [
"transport/src/test/java/io/netty/channel/ChannelOutboundBufferTest.java"
] | diff --git a/transport/src/main/java/io/netty/channel/AbstractChannel.java b/transport/src/main/java/io/netty/channel/AbstractChannel.java
index 5642f226554..e270a98acb7 100644
--- a/transport/src/main/java/io/netty/channel/AbstractChannel.java
+++ b/transport/src/main/java/io/netty/channel/AbstractChannel.java
@@ -83,7 +83,7 @@ protected AbstractChannel(Channel parent) {
@Override
public boolean isWritable() {
ChannelOutboundBuffer buf = unsafe.outboundBuffer();
- return buf != null && buf.getWritable();
+ return buf != null && buf.isWritable();
}
@Override
@@ -649,7 +649,11 @@ public void write(Object msg, ChannelPromise promise) {
ReferenceCountUtil.release(msg);
return;
}
- outboundBuffer.addMessage(msg, promise);
+ int size = estimatorHandle().size(msg);
+ if (size < 0) {
+ size = 0;
+ }
+ outboundBuffer.addMessage(msg, size, promise);
}
@Override
diff --git a/transport/src/main/java/io/netty/channel/ChannelOutboundBuffer.java b/transport/src/main/java/io/netty/channel/ChannelOutboundBuffer.java
index 899ca3b289c..e0508bdb5d7 100644
--- a/transport/src/main/java/io/netty/channel/ChannelOutboundBuffer.java
+++ b/transport/src/main/java/io/netty/channel/ChannelOutboundBuffer.java
@@ -40,6 +40,8 @@
/**
* (Transport implementors only) an internal data structure used by {@link AbstractChannel} to store its pending
* outbound write requests.
+ *
+ * All the methods should only be called by the {@link EventLoop} of the {@link Channel}.
*/
public final class ChannelOutboundBuffer {
@@ -61,7 +63,7 @@ protected ByteBuffer[] initialValue() throws Exception {
}
};
- private final AbstractChannel channel;
+ private final Channel channel;
// Entry(flushedEntry) --> ... Entry(unflushedEntry) --> ... Entry(tailEntry)
//
@@ -109,12 +111,11 @@ protected ByteBuffer[] initialValue() throws Exception {
this.channel = channel;
}
- void addMessage(Object msg, ChannelPromise promise) {
- int size = channel.estimatorHandle().size(msg);
- if (size < 0) {
- size = 0;
- }
-
+ /**
+ * Add given message to this {@link ChannelOutboundBuffer}. The given {@link ChannelPromise} will be notified once
+ * the message was written.
+ */
+ public void addMessage(Object msg, int size, ChannelPromise promise) {
Entry entry = Entry.newInstance(msg, size, total(msg), promise);
if (tailEntry == null) {
flushedEntry = null;
@@ -133,7 +134,11 @@ void addMessage(Object msg, ChannelPromise promise) {
incrementPendingOutboundBytes(size);
}
- void addFlush() {
+ /**
+ * Add a flush to this {@link ChannelOutboundBuffer}. This means all previous added messages are marked as flushed
+ * and so you will be able to handle them.
+ */
+ public void addFlush() {
// There is no need to process all entries if there was already a flush before and no new messages
// where added in the meantime.
//
@@ -206,10 +211,18 @@ private static long total(Object msg) {
return -1;
}
+ /**
+ * Return the current message to write or {@code null} if nothing was flushed before and so is ready to be written.
+ */
public Object current() {
return current(true);
}
+ /**
+ * Return the current message to write or {@code null} if nothing was flushed before and so is ready to be written.
+ * If {@code true} is specified a direct {@link ByteBuf} or {@link ByteBufHolder} is prefered and
+ * so the current message may be copied into a direct buffer.
+ */
public Object current(boolean preferDirect) {
// TODO: Think of a smart way to handle ByteBufHolder messages
Entry entry = flushedEntry;
@@ -250,7 +263,7 @@ public Object current(boolean preferDirect) {
/**
* Replace the current msg with the given one.
- * The replaced msg will automatically be released
+ * {@link ReferenceCountUtil#release(Object)} will automatically be called on the replaced message.
*/
public void current(Object msg) {
Entry entry = flushedEntry;
@@ -259,6 +272,9 @@ public void current(Object msg) {
entry.msg = msg;
}
+ /**
+ * Notify the {@link ChannelPromise} of the current message about writing progress.
+ */
public void progress(long amount) {
Entry e = flushedEntry;
assert e != null;
@@ -270,6 +286,11 @@ public void progress(long amount) {
}
}
+ /**
+ * Will remove the current message, mark its {@link ChannelPromise} as success and return {@code true}. If no
+ * flushed message exists at the time this method is called it will return {@code false} to signal that no more
+ * messages are ready to be handled.
+ */
public boolean remove() {
Entry e = flushedEntry;
if (e == null) {
@@ -295,6 +316,11 @@ public boolean remove() {
return true;
}
+ /**
+ * Will remove the current message, mark its {@link ChannelPromise} as failure using the given {@link Throwable}
+ * and return {@code true}. If no flushed message exists at the time this method is called it will return
+ * {@code false} to signal that no more messages are ready to be handled.
+ */
public boolean remove(Throwable cause) {
Entry e = flushedEntry;
if (e == null) {
@@ -366,9 +392,8 @@ public void removeBytes(long writtenBytes) {
/**
* Returns an array of direct NIO buffers if the currently pending messages are made of {@link ByteBuf} only.
- * {@code null} is returned otherwise. If this method returns a non-null array, {@link #nioBufferCount()} and
- * {@link #nioBufferSize()} will return the number of NIO buffers in the returned array and the total number
- * of readable bytes of the NIO buffers respectively.
+ * {@link #nioBufferCount()} and {@link #nioBufferSize()} will return the number of NIO buffers in the returned
+ * array and the total number of readable bytes of the NIO buffers respectively.
* <p>
* Note that the returned array is reused and thus should not escape
* {@link AbstractChannel#doWrite(ChannelOutboundBuffer)}.
@@ -479,22 +504,39 @@ private static ByteBuffer[] expandNioBufferArray(ByteBuffer[] array, int neededS
return newArray;
}
+ /**
+ * Returns the number of {@link ByteBuffer} that can be written out of the {@link ByteBuffer} array that was
+ * obtained via {@link #nioBuffers()}. This method <strong>MUST</strong> be called after {@link #nioBuffers()}
+ * was called.
+ */
public int nioBufferCount() {
return nioBufferCount;
}
+ /**
+ * Returns the number of bytes that can be written out of the {@link ByteBuffer} array that was
+ * obtained via {@link #nioBuffers()}. This method <strong>MUST</strong> be called after {@link #nioBuffers()}
+ * was called.
+ */
public long nioBufferSize() {
return nioBufferSize;
}
- boolean getWritable() {
+ boolean isWritable() {
return writable != 0;
}
+ /**
+ * Returns the number of flushed messages in this {@link ChannelOutboundBuffer}.
+ */
public int size() {
return flushed;
}
+ /**
+ * Returns {@code true} if there are flushed messages in this {@link ChannelOutboundBuffer} or {@code false}
+ * otherwise.
+ */
public boolean isEmpty() {
return flushed == 0;
}
| diff --git a/transport/src/test/java/io/netty/channel/ChannelOutboundBufferTest.java b/transport/src/test/java/io/netty/channel/ChannelOutboundBufferTest.java
index 11c0f6a1415..296a0e7d131 100644
--- a/transport/src/test/java/io/netty/channel/ChannelOutboundBufferTest.java
+++ b/transport/src/test/java/io/netty/channel/ChannelOutboundBufferTest.java
@@ -51,7 +51,7 @@ public void testNioBuffersSingleBacked() {
ByteBuf buf = copiedBuffer("buf1", CharsetUtil.US_ASCII);
ByteBuffer nioBuf = buf.internalNioBuffer(0, buf.readableBytes());
- buffer.addMessage(buf, channel.voidPromise());
+ buffer.addMessage(buf, buf.readableBytes(), channel.voidPromise());
assertEquals("Should still be 0 as not flushed yet", 0, buffer.nioBufferCount());
buffer.addFlush();
ByteBuffer[] buffers = buffer.nioBuffers();
@@ -75,7 +75,7 @@ public void testNioBuffersExpand() {
ByteBuf buf = directBuffer().writeBytes("buf1".getBytes(CharsetUtil.US_ASCII));
for (int i = 0; i < 64; i++) {
- buffer.addMessage(buf.copy(), channel.voidPromise());
+ buffer.addMessage(buf.copy(), buf.readableBytes(), channel.voidPromise());
}
assertEquals("Should still be 0 as not flushed yet", 0, buffer.nioBufferCount());
buffer.addFlush();
@@ -99,7 +99,7 @@ public void testNioBuffersExpand2() {
for (int i = 0; i < 65; i++) {
comp.addComponent(buf.copy()).writerIndex(comp.writerIndex() + buf.readableBytes());
}
- buffer.addMessage(comp, channel.voidPromise());
+ buffer.addMessage(comp, comp.readableBytes(), channel.voidPromise());
assertEquals("Should still be 0 as not flushed yet", 0, buffer.nioBufferCount());
buffer.addFlush();
| val | train | 2014-08-04T19:53:39 | 2014-07-28T17:49:22Z | normanmaurer | val |
netty/netty/2649_2745 | netty/netty | netty/netty/2649 | netty/netty/2745 | [
"timestamp(timedelta=39985.0, similarity=0.9999999999999999)"
] | d5fd57262b9877a0f5b035a7c019d576ee1c60f3 | a6e3e455f410a35e217c982d0c11d2d13332d4b2 | [
"Do we also need something like getIntAndRemove(...) ?\n",
"@plucury thanks for all the work. This can be closed now. \n"
] | [
"maybe start with 2 ? Seems more likely then 4.\n",
"why is this even needed ?\n",
"Again I think we should use 2\n",
"Same as above\n",
"I don't think an exception is the right thing here. I would even say that we should better change the return type to Long everywhere and return null in the case that it i... | 2014-08-06T09:15:56Z | [
"feature"
] | Add TextHeaders.getAndRemove(...) | At the moment there is no way to get and remove a header with one call. This means you need to search the headers twice. We should add getAndRemove(...) to allow doing so with one call.
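The requested semantics are easy to show with a toy multimap (`SimpleTextHeaders` below is an illustration only, not Netty's `TextHeaders`): a single lookup both returns the first value and drops every value stored under the name.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy headers container demonstrating get-and-remove in one pass.
final class SimpleTextHeaders {
    private final Map<String, List<String>> map = new LinkedHashMap<String, List<String>>();

    void add(String name, String value) {
        List<String> values = map.get(name);
        if (values == null) {
            values = new ArrayList<String>(2);
            map.put(name, values);
        }
        values.add(value);
    }

    // One lookup: removes all values for the name, returns the first one.
    String getAndRemove(String name) {
        List<String> values = map.remove(name);
        return values == null || values.isEmpty() ? null : values.get(0);
    }
}
```

A separate `get(name)` followed by `remove(name)` would hash and traverse the bucket twice; folding both into one call does the search once, which is the point of the feature request.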
| [
"codec/src/main/java/io/netty/handler/codec/DefaultTextHeaders.java",
"codec/src/main/java/io/netty/handler/codec/EmptyTextHeaders.java",
"codec/src/main/java/io/netty/handler/codec/TextHeaders.java"
] | [
"codec/src/main/java/io/netty/handler/codec/DefaultTextHeaders.java",
"codec/src/main/java/io/netty/handler/codec/EmptyTextHeaders.java",
"codec/src/main/java/io/netty/handler/codec/TextHeaders.java"
] | [] | diff --git a/codec/src/main/java/io/netty/handler/codec/DefaultTextHeaders.java b/codec/src/main/java/io/netty/handler/codec/DefaultTextHeaders.java
index 9d36a957a01..a6d3504d777 100644
--- a/codec/src/main/java/io/netty/handler/codec/DefaultTextHeaders.java
+++ b/codec/src/main/java/io/netty/handler/codec/DefaultTextHeaders.java
@@ -459,6 +459,158 @@ public long getTimeMillis(CharSequence name, long defaultValue) {
return HttpHeaderDateFormat.get().parse(v.toString(), defaultValue);
}
+ @Override
+ public CharSequence getUnconvertedAndRemove(CharSequence name) {
+ if (name == null) {
+ throw new NullPointerException("name");
+ }
+ int h = hashCode(name);
+ int i = index(h);
+ HeaderEntry e = entries[i];
+ if (e == null) {
+ return null;
+ }
+
+ CharSequence value = null;
+ for (;;) {
+ if (e.hash == h && nameEquals(e.name, name)) {
+ value = e.value;
+ e.remove();
+ HeaderEntry next = e.next;
+ if (next != null) {
+ entries[i] = next;
+ e = next;
+ } else {
+ entries[i] = null;
+ return value;
+ }
+ } else {
+ break;
+ }
+ }
+
+ for (;;) {
+ HeaderEntry next = e.next;
+ if (next == null) {
+ break;
+ }
+ if (next.hash == h && nameEquals(next.name, name)) {
+ value = next.value;
+ e.next = next.next;
+ next.remove();
+ } else {
+ e = next;
+ }
+ }
+
+ if (value != null) {
+ return value;
+ }
+ return null;
+ }
+
+ @Override
+ public String getAndRemove(CharSequence name) {
+ CharSequence v = getUnconvertedAndRemove(name);
+ if (v == null) {
+ return null;
+ }
+ return v.toString();
+ }
+
+ @Override
+ public String getAndRemove(CharSequence name, String defaultValue) {
+ CharSequence v = getUnconvertedAndRemove(name);
+ if (v == null) {
+ return defaultValue;
+ }
+ return v.toString();
+ }
+
+ @Override
+ public int getIntAndRemove(CharSequence name) {
+ CharSequence v = getUnconvertedAndRemove(name);
+ if (v == null) {
+ throw new NoSuchElementException(String.valueOf(name));
+ }
+
+ if (v instanceof AsciiString) {
+ return ((AsciiString) v).parseInt();
+ } else {
+ return Integer.parseInt(v.toString());
+ }
+ }
+
+ @Override
+ public int getIntAndRemove(CharSequence name, int defaultValue) {
+ CharSequence v = getUnconvertedAndRemove(name);
+ if (v == null) {
+ return defaultValue;
+ }
+
+ try {
+ if (v instanceof AsciiString) {
+ return ((AsciiString) v).parseInt();
+ } else {
+ return Integer.parseInt(v.toString());
+ }
+ } catch (NumberFormatException ignored) {
+ return defaultValue;
+ }
+ }
+
+ @Override
+ public long getLongAndRemove(CharSequence name) {
+ CharSequence v = getUnconvertedAndRemove(name);
+ if (v == null) {
+ throw new NoSuchElementException(String.valueOf(name));
+ }
+
+ if (v instanceof AsciiString) {
+ return ((AsciiString) v).parseLong();
+ } else {
+ return Long.parseLong(v.toString());
+ }
+ }
+
+ @Override
+ public long getLongAndRemove(CharSequence name, long defaultValue) {
+ CharSequence v = getUnconvertedAndRemove(name);
+ if (v == null) {
+ return defaultValue;
+ }
+
+ try {
+ if (v instanceof AsciiString) {
+ return ((AsciiString) v).parseLong();
+ } else {
+ return Long.parseLong(v.toString());
+ }
+ } catch (NumberFormatException ignored) {
+ return defaultValue;
+ }
+ }
+
+ @Override
+ public long getTimeMillisAndRemove(CharSequence name) {
+ CharSequence v = getUnconvertedAndRemove(name);
+ if (v == null) {
+ throw new NoSuchElementException(String.valueOf(name));
+ }
+
+ return HttpHeaderDateFormat.get().parse(v.toString());
+ }
+
+ @Override
+ public long getTimeMillisAndRemove(CharSequence name, long defaultValue) {
+ CharSequence v = getUnconvertedAndRemove(name);
+ if (v == null) {
+ return defaultValue;
+ }
+
+ return HttpHeaderDateFormat.get().parse(v.toString(), defaultValue);
+ }
+
@Override
public List<CharSequence> getAllUnconverted(CharSequence name) {
if (name == null) {
@@ -501,6 +653,104 @@ public List<String> getAll(CharSequence name) {
return values;
}
+ @Override
+ public List<String> getAllAndRemove(CharSequence name) {
+ if (name == null) {
+ throw new NullPointerException("name");
+ }
+ int h = hashCode(name);
+ int i = index(h);
+ HeaderEntry e = entries[i];
+ if (e == null) {
+ return null;
+ }
+
+ List<String> values = new ArrayList<String>(4);
+ for (;;) {
+ if (e.hash == h && nameEquals(e.name, name)) {
+ values.add(e.getValue().toString());
+ e.remove();
+ HeaderEntry next = e.next;
+ if (next != null) {
+ entries[i] = next;
+ e = next;
+ } else {
+ entries[i] = null;
+ Collections.reverse(values);
+ return values;
+ }
+ } else {
+ break;
+ }
+ }
+
+ for (;;) {
+ HeaderEntry next = e.next;
+ if (next == null) {
+ break;
+ }
+ if (next.hash == h && nameEquals(next.name, name)) {
+ values.add(next.getValue().toString());
+ e.next = next.next;
+ next.remove();
+ } else {
+ e = next;
+ }
+ }
+
+ Collections.reverse(values);
+ return values;
+ }
+
+ @Override
+ public List<CharSequence> getAllUnconvertedAndRemove(CharSequence name) {
+ if (name == null) {
+ throw new NullPointerException("name");
+ }
+ int h = hashCode(name);
+ int i = index(h);
+ HeaderEntry e = entries[i];
+ if (e == null) {
+ return null;
+ }
+
+ List<CharSequence> values = new ArrayList<CharSequence>(4);
+ for (;;) {
+ if (e.hash == h && nameEquals(e.name, name)) {
+ values.add(e.getValue());
+ e.remove();
+ HeaderEntry next = e.next;
+ if (next != null) {
+ entries[i] = next;
+ e = next;
+ } else {
+ entries[i] = null;
+ Collections.reverse(values);
+ return values;
+ }
+ } else {
+ break;
+ }
+ }
+
+ for (;;) {
+ HeaderEntry next = e.next;
+ if (next == null) {
+ break;
+ }
+ if (next.hash == h && nameEquals(next.name, name)) {
+ values.add(next.getValue());
+ e.next = next.next;
+ next.remove();
+ } else {
+ e = next;
+ }
+ }
+
+ Collections.reverse(values);
+ return values;
+ }
+
@Override
public List<Map.Entry<String, String>> entries() {
int cnt = 0;
diff --git a/codec/src/main/java/io/netty/handler/codec/EmptyTextHeaders.java b/codec/src/main/java/io/netty/handler/codec/EmptyTextHeaders.java
index e2e5ebf8fd1..b0249083d03 100644
--- a/codec/src/main/java/io/netty/handler/codec/EmptyTextHeaders.java
+++ b/codec/src/main/java/io/netty/handler/codec/EmptyTextHeaders.java
@@ -67,11 +67,56 @@ public long getTimeMillis(CharSequence name, long defaultValue) {
return defaultValue;
}
+ @Override
+ public String getAndRemove(CharSequence name) {
+ return null;
+ }
+
+ @Override
+ public String getAndRemove(CharSequence name, String defaultValue) {
+ return defaultValue;
+ }
+
+ @Override
+ public int getIntAndRemove(CharSequence name) {
+ throw new NoSuchElementException(String.valueOf(name));
+ }
+
+ @Override
+ public int getIntAndRemove(CharSequence name, int defaultValue) {
+ return defaultValue;
+ }
+
+ @Override
+ public long getLongAndRemove(CharSequence name) {
+ throw new NoSuchElementException(String.valueOf(name));
+ }
+
+ @Override
+ public long getLongAndRemove(CharSequence name, long defaultValue) {
+ return defaultValue;
+ }
+
+ @Override
+ public long getTimeMillisAndRemove(CharSequence name) {
+ throw new NoSuchElementException(String.valueOf(name));
+ }
+
+ @Override
+ public long getTimeMillisAndRemove(CharSequence name, long defaultValue) {
+ return defaultValue;
+ }
+
@Override
public CharSequence getUnconverted(CharSequence name) {
return null;
}
+ @Override
+ public CharSequence getUnconvertedAndRemove(CharSequence name) {
+ return null;
+ }
+
@Override
public List<String> getAll(CharSequence name) {
return Collections.emptyList();
@@ -82,6 +127,16 @@ public List<CharSequence> getAllUnconverted(CharSequence name) {
return Collections.emptyList();
}
+ @Override
+ public List<String> getAllAndRemove(CharSequence name) {
+ return Collections.emptyList();
+ }
+
+ @Override
+ public List<CharSequence> getAllUnconvertedAndRemove(CharSequence name) {
+ return Collections.emptyList();
+ }
+
@Override
public List<Entry<String, String>> entries() {
return Collections.emptyList();
diff --git a/codec/src/main/java/io/netty/handler/codec/TextHeaders.java b/codec/src/main/java/io/netty/handler/codec/TextHeaders.java
index d84b2501820..97d9740f5e8 100644
--- a/codec/src/main/java/io/netty/handler/codec/TextHeaders.java
+++ b/codec/src/main/java/io/netty/handler/codec/TextHeaders.java
@@ -48,6 +48,94 @@ public interface TextHeaders extends Iterable<Map.Entry<String, String>> {
long getTimeMillis(CharSequence name);
long getTimeMillis(CharSequence name, long defaultValue);
+ /**
+ * Returns and Removes the value of a header with the specified name. If there are
+ * more than one values for the specified name, the first value is returned.
+ *
+ * @param name The name of the header to search
+ * @return The first header value or {@code null} if there is no such header
+ */
+ String getAndRemove(CharSequence name);
+
+ /**
+ * Returns and Removes the value of a header with the specified name. If there are
+ * more than one values for the specified name, the first value is returned.
+ *
+ * @param name The name of the header to search
+ * @param defaultValue default value
+ * @return The first header value or {@code defaultValue} if there is no such header
+ */
+ String getAndRemove(CharSequence name, String defaultValue);
+
+ /**
+ * Returns and Removes the integer value of a header with the specified name. If there are
+ * more than one values for the specified name, the first value is returned.
+ *
+ * @param name The name of the header to search
+ * @return The first header value
+ * @throws java.util.NoSuchElementException
+ * if no such header
+ * @throws NumberFormatException
+ * if the value of header is not number
+ */
+ int getIntAndRemove(CharSequence name);
+
+ /**
+ * Returns and Removes the integer value of a header with the specified name. If there are more than one values for
+ * the specified name, the first value is returned.
+ *
+ * @param name The name of the header to search
+ * @param defaultValue default value
+ * @return The first header value or {@code defaultValue} if there is no such header or the value of header is not
+ * number
+ */
+ int getIntAndRemove(CharSequence name, int defaultValue);
+
+ /**
+ * Returns and Removes the long value of a header with the specified name. If there are
+ * more than one values for the specified name, the first value is returned.
+ *
+ * @param name The name of the header to search
+ * @return The first header value
+ * @throws java.util.NoSuchElementException
+ * if no such header
+ * @throws NumberFormatException
+ * if the value of header is not number
+ */
+ long getLongAndRemove(CharSequence name);
+
+ /**
+ * Returns and Removes the long value of a header with the specified name. If there are more than one values for
+ * the specified name, the first value is returned.
+ *
+ * @param name The name of the header to search
+ * @param defaultValue default value
+ * @return The first header value or {@code defaultValue} if there is no such header or the value of header is not
+ * number
+ */
+ long getLongAndRemove(CharSequence name, long defaultValue);
+
+ /**
+ * Returns and Removes the millisecond value of a header with the specified name. If there are
+ * more than one values for the specified name, the first value is returned.
+ *
+ * @param name The name of the header to search
+ * @return The first header value
+ * @throws java.util.NoSuchElementException
+ * if no such header
+ */
+ long getTimeMillisAndRemove(CharSequence name);
+
+ /**
+ * Returns and Removes the millisecond value of a header with the specified name. If there are more than one values
+ * for the specified name, the first value is returned.
+ *
+ * @param name The name of the header to search
+ * @param defaultValue default value
+ * @return The first header value or {@code defaultValue} if there is no such header
+ */
+ long getTimeMillisAndRemove(CharSequence name, long defaultValue);
+
/**
* Returns the value of a header with the specified name. If there are
* more than one values for the specified name, the first value is returned.
@@ -57,6 +145,15 @@ public interface TextHeaders extends Iterable<Map.Entry<String, String>> {
*/
CharSequence getUnconverted(CharSequence name);
+ /**
+ * Returns and Removes the value of a header with the specified name. If there are
+ * more than one values for the specified name, the first value is returned.
+ *
+ * @param name The name of the header to search
+ * @return The first header value or {@code null} if there is no such header
+ */
+ CharSequence getUnconvertedAndRemove(CharSequence name);
+
/**
* Returns the values of headers with the specified name
*
@@ -73,6 +170,22 @@ public interface TextHeaders extends Iterable<Map.Entry<String, String>> {
*/
List<CharSequence> getAllUnconverted(CharSequence name);
+ /**
+ * Returns and Removes the values of headers with the specified name
+ *
+ * @param name The name of the headers to search
+ * @return A {@link List} of header values which will be empty if no values are found
+ */
+ List<String> getAllAndRemove(CharSequence name);
+
+ /**
+ * Returns and Removes the values of headers with the specified name
+ *
+ * @param name The name of the headers to search
+ * @return A {@link List} of header values which will be empty if no values are found
+ */
+ List<CharSequence> getAllUnconvertedAndRemove(CharSequence name);
+
/**
* Returns a new {@link List} that contains all headers in this object. Note that modifying the
* returned {@link List} will not affect the state of this object. If you intend to enumerate over the header
| null | train | train | 2014-08-06T07:03:31 | 2014-07-11T05:35:52Z | normanmaurer | val |
netty/netty/2750_2755 | netty/netty | netty/netty/2750 | netty/netty/2755 | [
"timestamp(timedelta=96494.0, similarity=0.8982404311961517)"
] | 220660e351b2a22112b19c4af45e403eab1f73ab | ac36de3b1c7c8ba28bb269685adf8b299a59769e | [
"Do `LZF` and `FastLZ` employ different compression algorithms? Also, could you explain why 6PACK is not suitable for Netty? For example, we could use an arbitrary file name like `data`?\n",
"I guess I know why:\n- too much overhead\n- 6pack is only provided as an example to show how to use FastLZ.\n\nHere's my f... | [] | 2014-08-11T23:14:17Z | [
"feature",
"won't fix"
] | FastLZ format | I'm trying to implement FastLZ compression codec for Netty.
Official web site: http://fastlz.org/
Official repo: https://code.google.com/p/fastlz/
Java port: https://code.google.com/p/jfastlz/
These resources don't have a format description, so I read the source code.
FastLZ compressor (fastlz.c / JFastLZ.java) only compresses the data. So at the end of compression you will have a compressed array without any additional information (chunk length / original length / etc.). All additional information for decompression is added by 6PACK (6pack.c / JFastLZPack.java). 6PACK is only a file compressor, and if you compress a file with 6PACK it will have this format:
| Length (bytes) | What |
| :-: | :-: |
| 8 | Stream magic |
| | |
| _2_ | _Chunk ID (1 - chunk of metadata)_ |
| _2_ | _Options (0 - not compressed)_ |
| _4_ | _Chunk length (length of file name + 11)_ |
| _4_ | _Checksum_ |
| _4_ | _0 (zeros)_ |
| 4 | File length |
| 4 | 0 (zeros) |
| 2 | Length of file name |
| Length of file name + 1 | File name |
| | |
| _2_ | _Chunk ID (17 - chunk of data)_ |
| _2_ | _Options (1 - compressed / 0 - not compressed, if chunkLength < 32 bytes)_ |
| _4_ | _Chunk length_ |
| _4_ | _Checksum_ |
| _4_ | _Original length_ |
| Chunk length | Data (compressed or not) |
| | |
| | ... |
**Short description:**
Stream magic + Chunk of metadata + N \* Chunk of data
Each chunk contains header (16 bytes, see italic rows) + data (0+ bytes) / metadata (length of file name + 11 bytes)
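The 16-byte chunk header above can be read straight from the table; here is a hedged sketch of parsing it (little-endian, as 6PACK writes it). The class and field names are mine, not part of jfastlz:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Illustrative parse of one 6PACK chunk header (16 bytes, little-endian),
// following the field layout in the table above; names are invented here.
final class SixPackHeader {
    final int id, options;
    final long length, checksum, extra;

    SixPackHeader(byte[] raw) {
        ByteBuffer buf = ByteBuffer.wrap(raw).order(ByteOrder.LITTLE_ENDIAN);
        id       = buf.getShort() & 0xFFFF;       // 1 = metadata chunk, 17 = data chunk
        options  = buf.getShort() & 0xFFFF;       // 1 = compressed, 0 = stored
        length   = buf.getInt()   & 0xFFFFFFFFL;  // chunk length
        checksum = buf.getInt()   & 0xFFFFFFFFL;
        extra    = buf.getInt()   & 0xFFFFFFFFL;  // original length for data chunks
    }
}
```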
This format is unsuitable for Netty. Also, FastLZ is not a very popular compression algorithm, and it doesn't have a good Java library that provides `Input/OutputStream`s (jfastlz can only compress files from your file system and uses the command line for this). So I think we may create our own custom header format and use only the compressor and decompressor from jfastlz. It may be something like the [LZF format](https://github.com/ning/compress/wiki/LZFFormat):
```
0-2: Magic 'FLZ' (like extension of 6PACK's compressed files)
3: Type
0x00: Non-compressed chunk
0x01: Compressed chunk
4-7: 'Checksum' (uint32)
8-9: 'ChunkLength' (uint16)
```
In addition, compressed chunks (type 0x01) will contain 2 additional header bytes:
```
10-11: 'OriginalLength' (uint16)
```
Also, it will be more convenient to use big-endian notation (6PACK uses little-endian notation for chunk headers).
@trustin @normanmaurer WDYT?
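To make the proposal concrete, here is a hedged sketch of building the proposed chunk header in big-endian order; `FlzHeaderSketch` and its method are hypothetical names, not part of any existing API:

```java
import java.io.ByteArrayOutputStream;

// Sketch of the proposed 'FLZ' chunk header (big-endian), assuming the
// byte layout suggested above: 3-byte magic, 1-byte type, 4-byte checksum,
// 2-byte chunk length, and 2 extra length bytes only for compressed chunks.
final class FlzHeaderSketch {
    static final int MAGIC = 'F' << 16 | 'L' << 8 | 'Z';

    static byte[] header(boolean compressed, int checksum, int chunkLength, int originalLength) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(MAGIC >>> 16); out.write(MAGIC >>> 8); out.write(MAGIC); // bytes 0-2
        out.write(compressed ? 0x01 : 0x00);                               // byte 3: type
        out.write(checksum >>> 24); out.write(checksum >>> 16);            // bytes 4-7
        out.write(checksum >>> 8);  out.write(checksum);
        out.write(chunkLength >>> 8); out.write(chunkLength);              // bytes 8-9
        if (compressed) {                                                  // bytes 10-11
            out.write(originalLength >>> 8); out.write(originalLength);
        }
        return out.toByteArray();
    }
}
```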
| [
"NOTICE.txt"
] | [
"NOTICE.txt",
"codec/src/main/java/io/netty/handler/codec/compression/FastLz.java",
"codec/src/main/java/io/netty/handler/codec/compression/FastLzFramedDecoder.java",
"codec/src/main/java/io/netty/handler/codec/compression/FastLzFramedEncoder.java",
"license/LICENSE.jfastlz.txt"
] | [
"codec/src/test/java/io/netty/handler/codec/compression/FastLzIntegrationTest.java"
] | diff --git a/NOTICE.txt b/NOTICE.txt
index 74637c84a77..356556498d0 100644
--- a/NOTICE.txt
+++ b/NOTICE.txt
@@ -114,6 +114,14 @@ decoding data in LZF format, written by Tatu Saloranta. It can be obtained at:
* HOMEPAGE:
* https://github.com/ning/compress
+This product contains a modified portion of 'jfastlz', a Java port of FastLZ compression
+and decompression library written by William Kinney. It can be obtained at:
+
+ * LICENSE:
+ * license/LICENSE.jfastlz.txt (MIT License)
+ * HOMEPAGE:
+ * https://code.google.com/p/jfastlz/
+
This product optionally depends on 'Protocol Buffers', Google's data
interchange format, which can be obtained at:
diff --git a/codec/src/main/java/io/netty/handler/codec/compression/FastLz.java b/codec/src/main/java/io/netty/handler/codec/compression/FastLz.java
new file mode 100644
index 00000000000..754ddfb0d9c
--- /dev/null
+++ b/codec/src/main/java/io/netty/handler/codec/compression/FastLz.java
@@ -0,0 +1,575 @@
+/*
+ * Copyright 2014 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License,
+ * version 2.0 (the "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+package io.netty.handler.codec.compression;
+
+/**
+ * Core of FastLZ compression algorithm.
+ *
+ * This class provides methods for compression and decompression of buffers and holds
+ * constants used by {@link FastLzFramedEncoder} and {@link FastLzFramedDecoder}.
+ *
+ * This is refactored code of <a href="https://code.google.com/p/jfastlz/">jfastlz</a>
+ * library written by William Kinney.
+ */
+final class FastLz {
+
+ private static final int MAX_DISTANCE = 8191;
+ private static final int MAX_FARDISTANCE = 65535 + MAX_DISTANCE - 1;
+
+ private static final int HASH_LOG = 13;
+ private static final int HASH_SIZE = 1 << HASH_LOG; // 8192
+ private static final int HASH_MASK = HASH_SIZE - 1;
+
+ private static final int MAX_COPY = 32;
+ private static final int MAX_LEN = 256 + 8;
+
+ private static final int MIN_RECOMENDED_LENGTH_FOR_LEVEL_2 = 1024 * 64;
+
+ static final int MAGIC_NUMBER = 'F' << 16 | 'L' << 8 | 'Z';
+
+ static final byte BLOCK_TYPE_NON_COMPRESSED = 0x00;
+ static final byte BLOCK_TYPE_COMPRESSED = 0x01;
+ static final byte BLOCK_WITHOUT_CHECKSUM = 0x00;
+ static final byte BLOCK_WITH_CHECKSUM = 0x10;
+
+ static final int OPTIONS_OFFSET = 3;
+ static final int CHECKSUM_OFFSET = 4;
+
+ static final int MAX_CHUNK_LENGTH = 0xFFFF;
+
+ /**
+ * Do not call {@link #compress(byte[], int, int, byte[], int, int)} for input buffers
+ * which length less than this value.
+ */
+ static final int MIN_LENGTH_TO_COMPRESSION = 32;
+
+ /**
+ * In this case {@link #compress(byte[], int, int, byte[], int, int)} will choose the level
+ * automatically depending on the length of the input buffer. If the length is less than
+ * {@link #MIN_RECOMENDED_LENGTH_FOR_LEVEL_2}, {@link #LEVEL_1} will be chosen,
+ * otherwise {@link #LEVEL_2}.
+ */
+ static final int LEVEL_AUTO = 0;
+
+ /**
+ * Level 1 is the fastest compression and generally useful for short data.
+ */
+ static final int LEVEL_1 = 1;
+
+ /**
+ * Level 2 is slightly slower but it gives better compression ratio.
+ */
+ static final int LEVEL_2 = 2;
+
+ /**
+ * The output buffer must be at least 6% larger than the input buffer and cannot be smaller than 66 bytes.
+ * @param inputLength length of input buffer
+ * @return Maximum output buffer length
+ */
+ static int calculateOutputBufferLength(int inputLength) {
+ final int outputLength = (int) (inputLength * 1.06);
+ return Math.max(outputLength, 66);
+ }
+
+ /**
+ * Compresses a block of data in the input buffer and returns the size of the compressed block.
+ * The size of the input buffer is specified by length. The minimum input buffer size is 32.
+ *
+ * If the input is not compressible, the return value might be larger than length (input buffer size).
+ */
+ static int compress(final byte[] input, final int inOffset, final int inLength,
+ final byte[] output, final int outOffset, final int proposedLevel) {
+ final int level;
+ if (proposedLevel == LEVEL_AUTO) {
+ level = inLength < MIN_RECOMENDED_LENGTH_FOR_LEVEL_2 ? LEVEL_1 : LEVEL_2;
+ } else {
+ level = proposedLevel;
+ }
+
+ int ip = 0;
+ int ipBound = ip + inLength - 2;
+ int ipLimit = ip + inLength - 12;
+
+ int op = 0;
+
+ // const flzuint8* htab[HASH_SIZE];
+ int[] htab = new int[HASH_SIZE];
+ // const flzuint8** hslot;
+ int hslot;
+ // flzuint32 hval;
+ // int OK b/c address starting from 0
+ int hval;
+ // flzuint32 copy;
+ // int OK b/c address starting from 0
+ int copy;
+
+ /* sanity check */
+ if (inLength < 4) {
+ if (inLength != 0) {
+ // *op++ = length-1;
+ output[outOffset + op++] = (byte) (inLength - 1);
+ ipBound++;
+ while (ip <= ipBound) {
+ output[outOffset + op++] = input[inOffset + ip++];
+ }
+ return inLength + 1;
+ }
+ // else
+ return 0;
+ }
+
+ /* initializes hash table */
+ // for (hslot = htab; hslot < htab + HASH_SIZE; hslot++)
+ for (hslot = 0; hslot < HASH_SIZE; hslot++) {
+ //*hslot = ip;
+ htab[hslot] = ip;
+ }
+
+ /* we start with literal copy */
+ copy = 2;
+ output[outOffset + op++] = MAX_COPY - 1;
+ output[outOffset + op++] = input[inOffset + ip++];
+ output[outOffset + op++] = input[inOffset + ip++];
+
+ /* main loop */
+ while (ip < ipLimit) {
+ int ref = 0;
+
+ long distance = 0;
+
+ /* minimum match length */
+ // flzuint32 len = 3;
+ // int OK b/c len is 0 and octal based
+ int len = 3;
+
+ /* comparison starting-point */
+ int anchor = ip;
+
+ boolean matchLabel = false;
+
+ /* check for a run */
+ if (level == LEVEL_2) {
+ //if(ip[0] == ip[-1] && FASTLZ_READU16(ip-1)==FASTLZ_READU16(ip+1))
+ if (input[inOffset + ip] == input[inOffset + ip - 1] &&
+ readU16(input, inOffset + ip - 1) == readU16(input, inOffset + ip + 1)) {
+ distance = 1;
+ ip += 3;
+ ref = anchor - 1 + 3;
+
+ /*
+ * goto match;
+ */
+ matchLabel = true;
+ }
+ }
+ if (!matchLabel) {
+ /* find potential match */
+ // HASH_FUNCTION(hval,ip);
+ hval = hashFunction(input, inOffset + ip);
+ // hslot = htab + hval;
+ hslot = hval;
+ // ref = htab[hval];
+ ref = htab[hval];
+
+ /* calculate distance to the match */
+ distance = anchor - ref;
+
+ /* update hash table */
+ //*hslot = anchor;
+ htab[hslot] = anchor;
+
+ /* is this a match? check the first 3 bytes */
+ if (distance == 0
+ || (level == LEVEL_1 ? distance >= MAX_DISTANCE : distance >= MAX_FARDISTANCE)
+ || input[inOffset + ref++] != input[inOffset + ip++]
+ || input[inOffset + ref++] != input[inOffset + ip++]
+ || input[inOffset + ref++] != input[inOffset + ip++]) {
+ /*
+ * goto literal;
+ */
+ output[outOffset + op++] = input[inOffset + anchor++];
+ ip = anchor;
+ copy++;
+ if (copy == MAX_COPY) {
+ copy = 0;
+ output[outOffset + op++] = MAX_COPY - 1;
+ }
+ continue;
+ }
+
+ if (level == LEVEL_2) {
+ /* far, needs at least 5-byte match */
+ if (distance >= MAX_DISTANCE) {
+ if (input[inOffset + ip++] != input[inOffset + ref++]
+ || input[inOffset + ip++] != input[inOffset + ref++]) {
+ /*
+ * goto literal;
+ */
+ output[outOffset + op++] = input[inOffset + anchor++];
+ ip = anchor;
+ copy++;
+ if (copy == MAX_COPY) {
+ copy = 0;
+ output[outOffset + op++] = MAX_COPY - 1;
+ }
+ continue;
+ }
+ len += 2;
+ }
+ }
+ } // end if(!matchLabel)
+ /*
+ * match:
+ */
+ /* last matched byte */
+ ip = anchor + len;
+
+ /* distance is biased */
+ distance--;
+
+ if (distance == 0) {
+ /* zero distance means a run */
+ //flzuint8 x = ip[-1];
+ byte x = input[inOffset + ip - 1];
+ while (ip < ipBound) {
+ if (input[inOffset + ref++] != x) {
+ break;
+ } else {
+ ip++;
+ }
+ }
+ } else {
+ for (;;) {
+ /* safe because the outer check against ip limit */
+ if (input[inOffset + ref++] != input[inOffset + ip++]) {
+ break;
+ }
+ if (input[inOffset + ref++] != input[inOffset + ip++]) {
+ break;
+ }
+ if (input[inOffset + ref++] != input[inOffset + ip++]) {
+ break;
+ }
+ if (input[inOffset + ref++] != input[inOffset + ip++]) {
+ break;
+ }
+ if (input[inOffset + ref++] != input[inOffset + ip++]) {
+ break;
+ }
+ if (input[inOffset + ref++] != input[inOffset + ip++]) {
+ break;
+ }
+ if (input[inOffset + ref++] != input[inOffset + ip++]) {
+ break;
+ }
+ if (input[inOffset + ref++] != input[inOffset + ip++]) {
+ break;
+ }
+ while (ip < ipBound) {
+ if (input[inOffset + ref++] != input[inOffset + ip++]) {
+ break;
+ }
+ }
+ break;
+ }
+ }
+
+ /* if we have copied something, adjust the copy count */
+ if (copy != 0) {
+ /* copy is biased, '0' means 1 byte copy */
+ // *(op-copy-1) = copy-1;
+ output[outOffset + op - copy - 1] = (byte) (copy - 1);
+ } else {
+ /* back, to overwrite the copy count */
+ op--;
+ }
+
+ /* reset literal counter */
+ copy = 0;
+
+ /* length is biased, '1' means a match of 3 bytes */
+ ip -= 3;
+ len = ip - anchor;
+
+ /* encode the match */
+ if (level == LEVEL_2) {
+ if (distance < MAX_DISTANCE) {
+ if (len < 7) {
+ output[outOffset + op++] = (byte) ((len << 5) + (distance >>> 8));
+ output[outOffset + op++] = (byte) (distance & 255);
+ } else {
+ output[outOffset + op++] = (byte) ((7 << 5) + (distance >>> 8));
+ for (len -= 7; len >= 255; len -= 255) {
+ output[outOffset + op++] = (byte) 255;
+ }
+ output[outOffset + op++] = (byte) len;
+ output[outOffset + op++] = (byte) (distance & 255);
+ }
+ } else {
+ /* far away, but not yet in the another galaxy... */
+ if (len < 7) {
+ distance -= MAX_DISTANCE;
+ output[outOffset + op++] = (byte) ((len << 5) + 31);
+ output[outOffset + op++] = (byte) 255;
+ output[outOffset + op++] = (byte) (distance >>> 8);
+ output[outOffset + op++] = (byte) (distance & 255);
+ } else {
+ distance -= MAX_DISTANCE;
+ output[outOffset + op++] = (byte) ((7 << 5) + 31);
+ for (len -= 7; len >= 255; len -= 255) {
+ output[outOffset + op++] = (byte) 255;
+ }
+ output[outOffset + op++] = (byte) len;
+ output[outOffset + op++] = (byte) 255;
+ output[outOffset + op++] = (byte) (distance >>> 8);
+ output[outOffset + op++] = (byte) (distance & 255);
+ }
+ }
+ } else {
+ if (len > MAX_LEN - 2) {
+ while (len > MAX_LEN - 2) {
+ output[outOffset + op++] = (byte) ((7 << 5) + (distance >>> 8));
+ output[outOffset + op++] = (byte) (MAX_LEN - 2 - 7 - 2);
+ output[outOffset + op++] = (byte) (distance & 255);
+ len -= MAX_LEN - 2;
+ }
+ }
+
+ if (len < 7) {
+ output[outOffset + op++] = (byte) ((len << 5) + (distance >>> 8));
+ output[outOffset + op++] = (byte) (distance & 255);
+ } else {
+ output[outOffset + op++] = (byte) ((7 << 5) + (distance >>> 8));
+ output[outOffset + op++] = (byte) (len - 7);
+ output[outOffset + op++] = (byte) (distance & 255);
+ }
+ }
+
+ /* update the hash at match boundary */
+ //HASH_FUNCTION(hval,ip);
+ hval = hashFunction(input, inOffset + ip);
+ htab[hval] = ip++;
+
+ //HASH_FUNCTION(hval,ip);
+ hval = hashFunction(input, inOffset + ip);
+ htab[hval] = ip++;
+
+ /* assuming literal copy */
+ output[outOffset + op++] = MAX_COPY - 1;
+
+ continue;
+
+ // Moved to be inline, with a 'continue'
+ /*
+ * literal:
+ *
+ output[outOffset + op++] = input[inOffset + anchor++];
+ ip = anchor;
+ copy++;
+ if(copy == MAX_COPY){
+ copy = 0;
+ output[outOffset + op++] = MAX_COPY-1;
+ }
+ */
+ }
+
+ /* left-over as literal copy */
+ ipBound++;
+ while (ip <= ipBound) {
+ output[outOffset + op++] = input[inOffset + ip++];
+ copy++;
+ if (copy == MAX_COPY) {
+ copy = 0;
+ output[outOffset + op++] = MAX_COPY - 1;
+ }
+ }
+
+ /* if we have copied something, adjust the copy length */
+ if (copy != 0) {
+ //*(op-copy-1) = copy-1;
+ output[outOffset + op - copy - 1] = (byte) (copy - 1);
+ } else {
+ op--;
+ }
+
+ if (level == LEVEL_2) {
+ /* marker for fastlz2 */
+ output[outOffset] |= 1 << 5;
+ }
+
+ return op;
+ }
+
+ /**
+ * Decompresses a block of compressed data and returns the size of the decompressed block.
+ * If an error occurs, e.g. the compressed data is corrupted or the output buffer is not large
+ * enough, then 0 (zero) will be returned instead.
+ *
+ * Decompression is memory safe and guaranteed not to write the output buffer
+ * more than what is specified in outLength.
+ */
+ static int decompress(final byte[] input, final int inOffset, final int inLength,
+ final byte[] output, final int outOffset, final int outLength) {
+ //int level = ((*(const flzuint8*)input) >> 5) + 1;
+ final int level = (input[inOffset] >> 5) + 1;
+ if (level != LEVEL_1 && level != LEVEL_2) {
+ throw new DecompressionException(String.format(
+ "invalid level: %d (expected: %d or %d)", level, LEVEL_1, LEVEL_2
+ ));
+ }
+
+ // const flzuint8* ip = (const flzuint8*) input;
+ int ip = 0;
+ // flzuint8* op = (flzuint8*) output;
+ int op = 0;
+ // flzuint32 ctrl = (*ip++) & 31;
+ long ctrl = input[inOffset + ip++] & 31;
+
+ int loop = 1;
+ do {
+ // const flzuint8* ref = op;
+ int ref = op;
+ // flzuint32 len = ctrl >> 5;
+ long len = ctrl >> 5;
+ // flzuint32 ofs = (ctrl & 31) << 8;
+ long ofs = (ctrl & 31) << 8;
+
+ if (ctrl >= 32) {
+ len--;
+ // ref -= ofs;
+ ref -= ofs;
+
+ int code;
+ if (len == 6) {
+ if (level == LEVEL_1) {
+ // len += *ip++;
+ len += input[inOffset + ip++] & 0xFF;
+ } else {
+ do {
+ code = input[inOffset + ip++] & 0xFF;
+ len += code;
+ } while (code == 255);
+ }
+ }
+ if (level == LEVEL_1) {
+ // ref -= *ip++;
+ ref -= input[inOffset + ip++] & 0xFF;
+ } else {
+ code = input[inOffset + ip++] & 0xFF;
+ ref -= code;
+
+ /* match from 16-bit distance */
+ // if(FASTLZ_UNEXPECT_CONDITIONAL(code==255))
+ // if(FASTLZ_EXPECT_CONDITIONAL(ofs==(31 << 8)))
+ if (code == 255 && ofs == 31 << 8) {
+ ofs = (input[inOffset + ip++] & 0xFF) << 8;
+ ofs += input[inOffset + ip++] & 0xFF;
+
+ ref = (int) (op - ofs - MAX_DISTANCE);
+ }
+ }
+
+ // if the output index + length of block(?) + 3(?) is over the output limit?
+ if (op + len + 3 > outLength) {
+ return 0;
+ }
+
+ // if (FASTLZ_UNEXPECT_CONDITIONAL(ref-1 < (flzuint8 *)output))
+ // if the address space of ref-1 is < the address of output?
+ // if we are still at the beginning of the output address?
+ if (ref - 1 < 0) {
+ return 0;
+ }
+
+ if (ip < inLength) {
+ ctrl = input[inOffset + ip++] & 0xFF;
+ } else {
+ loop = 0;
+ }
+
+ if (ref == op) {
+ /* optimize copy for a run */
+ // flzuint8 b = ref[-1];
+ byte b = output[outOffset + ref - 1];
+ output[outOffset + op++] = b;
+ output[outOffset + op++] = b;
+ output[outOffset + op++] = b;
+ while (len != 0) {
+ output[outOffset + op++] = b;
+ --len;
+ }
+ } else {
+ /* copy from reference */
+ ref--;
+
+ // *op++ = *ref++;
+ output[outOffset + op++] = output[outOffset + ref++];
+ output[outOffset + op++] = output[outOffset + ref++];
+ output[outOffset + op++] = output[outOffset + ref++];
+
+ while (len != 0) {
+ output[outOffset + op++] = output[outOffset + ref++];
+ --len;
+ }
+ }
+ } else {
+ ctrl++;
+
+ if (op + ctrl > outLength) {
+ return 0;
+ }
+ if (ip + ctrl > inLength) {
+ return 0;
+ }
+
+ //*op++ = *ip++;
+ output[outOffset + op++] = input[inOffset + ip++];
+
+ for (--ctrl; ctrl != 0; ctrl--) {
+ // *op++ = *ip++;
+ output[outOffset + op++] = input[inOffset + ip++];
+ }
+
+ loop = ip < inLength ? 1 : 0;
+ if (loop != 0) {
+ // ctrl = *ip++;
+ ctrl = input[inOffset + ip++] & 0xFF;
+ }
+ }
+
+ // while(FASTLZ_EXPECT_CONDITIONAL(loop));
+ } while (loop != 0);
+
+ // return op - (flzuint8*)output;
+ return op;
+ }
+
+ private static int hashFunction(byte[] p, int offset) {
+ int v = readU16(p, offset);
+ v ^= readU16(p, offset + 1) ^ v >> 16 - HASH_LOG;
+ v &= HASH_MASK;
+ return v;
+ }
+
+ private static int readU16(byte[] data, int offset) {
+ if (offset + 1 >= data.length) {
+ return data[offset] & 0xff;
+ }
+ return (data[offset + 1] & 0xff) << 8 | data[offset] & 0xff;
+ }
+
+ private FastLz() { }
+}
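The worst-case sizing rule in `calculateOutputBufferLength` above is easy to exercise on its own; a standalone restatement (the class name here is mine) for checking the 6%-plus-floor behaviour:

```java
// Mirrors the sizing rule used by the FastLz helper above: the output
// buffer must be at least 6% larger than the input and never smaller
// than 66 bytes, covering incompressible input plus framing overhead.
final class OutputSizing {
    static int calculateOutputBufferLength(int inputLength) {
        final int outputLength = (int) (inputLength * 1.06);
        return Math.max(outputLength, 66);
    }
}
```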
diff --git a/codec/src/main/java/io/netty/handler/codec/compression/FastLzFramedDecoder.java b/codec/src/main/java/io/netty/handler/codec/compression/FastLzFramedDecoder.java
new file mode 100644
index 00000000000..1b7379ce266
--- /dev/null
+++ b/codec/src/main/java/io/netty/handler/codec/compression/FastLzFramedDecoder.java
@@ -0,0 +1,211 @@
+/*
+ * Copyright 2014 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License,
+ * version 2.0 (the "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+package io.netty.handler.codec.compression;
+
+import io.netty.buffer.ByteBuf;
+import io.netty.channel.ChannelHandlerContext;
+import io.netty.handler.codec.ByteToMessageDecoder;
+
+import java.util.List;
+import java.util.zip.Adler32;
+import java.util.zip.Checksum;
+
+import static io.netty.handler.codec.compression.FastLz.*;
+
+/**
+ * Uncompresses a {@link ByteBuf} encoded with the FastLZ format.
+ *
+ * See <a href="https://github.com/netty/netty/issues/2750">FastLZ format</a>.
+ */
+public class FastLzFramedDecoder extends ByteToMessageDecoder {
+ /**
+ * Current state of decompression.
+ */
+ private enum State {
+ INIT_BLOCK,
+ INIT_BLOCK_PARAMS,
+ DECOMPRESS_DATA,
+ CORRUPTED
+ }
+
+ private State currentState = State.INIT_BLOCK;
+
+ /**
+ * Underlying checksum calculator in use.
+ */
+ private final Checksum checksum;
+
+ /**
+ * Length of the current received chunk of data.
+ */
+ private int chunkLength;
+
+ /**
+ * Original length of the current received chunk of data.
+ * It is equal to {@link #chunkLength} for non-compressed chunks.
+ */
+ private int originalLength;
+
+ /**
+ * Indicates whether this chunk is compressed or not.
+ */
+ private boolean isCompressed;
+
+ /**
+ * Indicates whether this chunk has a checksum or not.
+ */
+ private boolean hasChecksum;
+
+ /**
+ * Checksum value of the current received chunk of data, if it has one.
+ */
+ private int currentChecksum;
+
+ /**
+ * Creates the fastest FastLZ decoder without checksum calculation.
+ */
+ public FastLzFramedDecoder() {
+ this(false);
+ }
+
+ /**
+ * Creates a FastLZ decoder with calculation of checksums as specified.
+ *
+ * @param validateChecksums
+ * If true, the checksum field will be validated against the actual
+ * uncompressed data, and if the checksums do not match, a suitable
+ * {@link DecompressionException} will be thrown.
+ * Note, that in this case decoder will use {@link java.util.zip.Adler32}
+ * as a default checksum calculator.
+ */
+ public FastLzFramedDecoder(boolean validateChecksums) {
+ this(validateChecksums ? new Adler32() : null);
+ }
+
+ /**
+ * Creates a FastLZ decoder with specified checksum calculator.
+ *
+ * @param checksum
+ * the {@link Checksum} instance to use to check data for integrity.
+ * You may set {@code null} if you do not want to validate the checksum of each block.
+ */
+ public FastLzFramedDecoder(Checksum checksum) {
+ this.checksum = checksum;
+ }
+
+ @Override
+ protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) throws Exception {
+ for (;;) {
+ try {
+ switch (currentState) {
+ case INIT_BLOCK:
+ if (in.readableBytes() < 4) {
+ return;
+ }
+
+ final int magic = in.readUnsignedMedium();
+ if (magic != MAGIC_NUMBER) {
+ throw new DecompressionException("unexpected block identifier");
+ }
+
+ final byte options = in.readByte();
+ isCompressed = (options & 0x01) == BLOCK_TYPE_COMPRESSED;
+ hasChecksum = (options & 0x10) == BLOCK_WITH_CHECKSUM;
+
+ currentState = State.INIT_BLOCK_PARAMS;
+ case INIT_BLOCK_PARAMS:
+ if (in.readableBytes() < 2 + (isCompressed ? 2 : 0) + (hasChecksum ? 4 : 0)) {
+ return;
+ }
+ currentChecksum = hasChecksum ? in.readInt() : 0;
+ chunkLength = in.readUnsignedShort();
+ originalLength = isCompressed ? in.readUnsignedShort() : chunkLength;
+
+ currentState = State.DECOMPRESS_DATA;
+ case DECOMPRESS_DATA:
+ final int chunkLength = this.chunkLength;
+ if (in.readableBytes() < chunkLength) {
+ return;
+ }
+
+ final int idx = in.readerIndex();
+ final int originalLength = this.originalLength;
+
+ ByteBuf uncompressed = ctx.alloc().heapBuffer(originalLength, originalLength);
+ final byte[] output = uncompressed.array();
+ final int outputPtr = uncompressed.arrayOffset() + uncompressed.writerIndex();
+
+ boolean success = false;
+ try {
+ if (isCompressed) {
+ final byte[] input;
+ final int inputPtr;
+ if (in.hasArray()) {
+ input = in.array();
+ inputPtr = in.arrayOffset() + idx;
+ } else {
+ input = new byte[chunkLength];
+ in.getBytes(idx, input);
+ inputPtr = 0;
+ }
+
+ final int decompressedBytes = decompress(input, inputPtr, chunkLength,
+ output, outputPtr, originalLength);
+ if (originalLength != decompressedBytes) {
+ throw new DecompressionException(String.format(
+ "stream corrupted: originalLength(%d) and actual length(%d) mismatch",
+ originalLength, decompressedBytes));
+ }
+ } else {
+ in.getBytes(idx, output, outputPtr, chunkLength);
+ }
+
+ final Checksum checksum = this.checksum;
+ if (hasChecksum && checksum != null) {
+ checksum.reset();
+ checksum.update(output, outputPtr, originalLength);
+ final int checksumResult = (int) checksum.getValue();
+ if (checksumResult != currentChecksum) {
+ throw new DecompressionException(String.format(
+ "stream corrupted: mismatching checksum: %d (expected: %d)",
+ checksumResult, currentChecksum));
+ }
+ }
+ uncompressed.writerIndex(uncompressed.writerIndex() + originalLength);
+ out.add(uncompressed);
+ in.skipBytes(chunkLength);
+
+ currentState = State.INIT_BLOCK;
+ success = true;
+ } finally {
+ if (!success) {
+ uncompressed.release();
+ }
+ }
+ break;
+ case CORRUPTED:
+ in.skipBytes(in.readableBytes());
+ return;
+ default:
+ throw new IllegalStateException();
+ }
+ } catch (Exception e) {
+ currentState = State.CORRUPTED;
+ throw e;
+ }
+ }
+ }
+}
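The decoder above recovers two independent flags from the single options byte written by the encoder (`BLOCK_TYPE_COMPRESSED` is bit 0x01, `BLOCK_WITH_CHECKSUM` is bit 0x10). A minimal round-trip sketch of that bookkeeping, with names of my own choosing:

```java
// Round-trip sketch of the options byte shared by the FastLZ framed
// encoder and decoder above: bit 0x01 marks a compressed block,
// bit 0x10 marks a block that carries a 4-byte checksum.
final class OptionsByte {
    static byte pack(boolean compressed, boolean withChecksum) {
        return (byte) ((compressed ? 0x01 : 0x00) | (withChecksum ? 0x10 : 0x00));
    }
    static boolean isCompressed(byte options) { return (options & 0x01) == 0x01; }
    static boolean hasChecksum(byte options)  { return (options & 0x10) == 0x10; }
}
```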
diff --git a/codec/src/main/java/io/netty/handler/codec/compression/FastLzFramedEncoder.java b/codec/src/main/java/io/netty/handler/codec/compression/FastLzFramedEncoder.java
new file mode 100644
index 00000000000..a750d28bf2f
--- /dev/null
+++ b/codec/src/main/java/io/netty/handler/codec/compression/FastLzFramedEncoder.java
@@ -0,0 +1,186 @@
+/*
+ * Copyright 2014 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License,
+ * version 2.0 (the "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+package io.netty.handler.codec.compression;
+
+import io.netty.buffer.ByteBuf;
+import io.netty.channel.ChannelHandlerContext;
+import io.netty.handler.codec.MessageToByteEncoder;
+
+import java.util.zip.Adler32;
+import java.util.zip.Checksum;
+
+import static io.netty.handler.codec.compression.FastLz.*;
+
+/**
+ * Compresses a {@link ByteBuf} using the FastLZ algorithm.
+ *
+ * See <a href="https://github.com/netty/netty/issues/2750">FastLZ format</a>.
+ */
+public class FastLzFramedEncoder extends MessageToByteEncoder<ByteBuf> {
+ /**
+ * Compression level.
+ */
+ private final int level;
+
+ /**
+ * Underlying checksum calculator in use.
+ */
+ private final Checksum checksum;
+
+ /**
+ * Creates a FastLZ encoder without checksum calculator and with auto detection of compression level.
+ */
+ public FastLzFramedEncoder() {
+ this(LEVEL_AUTO, null);
+ }
+
+ /**
+ * Creates a FastLZ encoder with specified compression level and without checksum calculator.
+ *
+ * @param level supports only these values:
+ * 0 - Encoder will choose level automatically depending on the length of the input buffer.
+ * 1 - Level 1 is the fastest compression and generally useful for short data.
+ * 2 - Level 2 is slightly slower but it gives better compression ratio.
+ */
+ public FastLzFramedEncoder(int level) {
+ this(level, null);
+ }
+
+ /**
+ * Creates a FastLZ encoder with auto detection of compression
+ * level and calculation of checksums as specified.
+ *
+ * @param validateChecksums
+ * If true, the checksum of each block will be calculated and this value
+ * will be added to the header of each block.
+ * By default {@link FastLzFramedEncoder} uses {@link java.util.zip.Adler32}
+ * for checksum calculation.
+ */
+ public FastLzFramedEncoder(boolean validateChecksums) {
+ this(LEVEL_AUTO, validateChecksums ? new Adler32() : null);
+ }
+
+ /**
+ * Creates a FastLZ encoder with specified compression level and checksum calculator.
+ *
+ * @param level supports only these values:
+ * 0 - Encoder will choose level automatically depending on the length of the input buffer.
+ * 1 - Level 1 is the fastest compression and generally useful for short data.
+ * 2 - Level 2 is slightly slower but it gives better compression ratio.
+ * @param checksum
+ * the {@link Checksum} instance to use to check data for integrity.
+ * You may set {@code null} if you don't want to validate the checksum of each block.
+ */
+ public FastLzFramedEncoder(int level, Checksum checksum) {
+ super(false);
+ if (level != LEVEL_AUTO && level != LEVEL_1 && level != LEVEL_2) {
+ throw new IllegalArgumentException(String.format(
+ "level: %d (expected: %d or %d or %d)", level, LEVEL_AUTO, LEVEL_1, LEVEL_2));
+ }
+ this.level = level;
+ this.checksum = checksum;
+ }
+
+ @Override
+ protected void encode(ChannelHandlerContext ctx, ByteBuf in, ByteBuf out) throws Exception {
+ final Checksum checksum = this.checksum;
+
+ for (;;) {
+ if (!in.isReadable()) {
+ return;
+ }
+ final int idx = in.readerIndex();
+ final int length = Math.min(in.readableBytes(), MAX_CHUNK_LENGTH);
+
+ final int outputIdx = out.writerIndex();
+ out.setMedium(outputIdx, MAGIC_NUMBER);
+ int outputOffset = outputIdx + CHECKSUM_OFFSET + (checksum != null ? 4 : 0);
+
+ final byte blockType;
+ final int chunkLength;
+ if (length < MIN_LENGTH_TO_COMPRESSION) {
+ blockType = BLOCK_TYPE_NON_COMPRESSED;
+
+ out.ensureWritable(outputOffset + 2 + length);
+ final byte[] output = out.array();
+ final int outputPtr = out.arrayOffset() + outputOffset + 2;
+
+ if (checksum != null) {
+ final byte[] input;
+ final int inputPtr;
+ if (in.hasArray()) {
+ input = in.array();
+ inputPtr = in.arrayOffset() + idx;
+ } else {
+ input = new byte[length];
+ in.getBytes(idx, input);
+ inputPtr = 0;
+ }
+
+ checksum.reset();
+ checksum.update(input, inputPtr, length);
+ out.setInt(outputIdx + CHECKSUM_OFFSET, (int) checksum.getValue());
+
+ System.arraycopy(input, inputPtr, output, outputPtr, length);
+ } else {
+ in.getBytes(idx, output, outputPtr, length);
+ }
+ chunkLength = length;
+ } else {
+ // try to compress
+ final byte[] input;
+ final int inputPtr;
+ if (in.hasArray()) {
+ input = in.array();
+ inputPtr = in.arrayOffset() + idx;
+ } else {
+ input = new byte[length];
+ in.getBytes(idx, input);
+ inputPtr = 0;
+ }
+
+ if (checksum != null) {
+ checksum.reset();
+ checksum.update(input, inputPtr, length);
+ out.setInt(outputIdx + CHECKSUM_OFFSET, (int) checksum.getValue());
+ }
+
+ final int maxOutputLength = calculateOutputBufferLength(length);
+ out.ensureWritable(outputOffset + 4 + maxOutputLength);
+ final byte[] output = out.array();
+ final int outputPtr = out.arrayOffset() + outputOffset + 4;
+ final int compressedLength = compress(input, inputPtr, length, output, outputPtr, level);
+ if (compressedLength < length) {
+ blockType = BLOCK_TYPE_COMPRESSED;
+ chunkLength = compressedLength;
+
+ out.setShort(outputOffset, chunkLength);
+ outputOffset += 2;
+ } else {
+ blockType = BLOCK_TYPE_NON_COMPRESSED;
+ System.arraycopy(input, inputPtr, output, outputPtr - 2, length);
+ chunkLength = length;
+ }
+ }
+ out.setShort(outputOffset, length);
+
+ out.setByte(outputIdx + OPTIONS_OFFSET,
+ blockType | (checksum != null ? BLOCK_WITH_CHECKSUM : BLOCK_WITHOUT_CHECKSUM));
+ out.writerIndex(outputOffset + 2 + chunkLength);
+ in.skipBytes(length);
+ }
+ }
+}
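The header the encoder above emits is variable-length: 3 magic bytes, 1 options byte, an optional 4-byte checksum, an optional 2-byte compressed length, and a 2-byte (original) length that is always present. A hedged restatement of that arithmetic (class name is mine), which the decoder's minimum-readable checks mirror:

```java
// Bookkeeping mirrored from the framed encoder/decoder above: computes
// how many header bytes precede the chunk payload for each flag combo.
final class HeaderSize {
    static int headerLength(boolean compressed, boolean withChecksum) {
        return 3 /* magic */ + 1 /* options */
                + (withChecksum ? 4 : 0)   // checksum field
                + (compressed ? 2 : 0)     // compressed-length field
                + 2;                       // (original) length field
    }
}
```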
diff --git a/license/LICENSE.jfastlz.txt b/license/LICENSE.jfastlz.txt
new file mode 100644
index 00000000000..6f27e141f6b
--- /dev/null
+++ b/license/LICENSE.jfastlz.txt
@@ -0,0 +1,24 @@
+The MIT License
+
+Copyright (c) 2009 William Kinney
+
+Permission is hereby granted, free of charge, to any person
+obtaining a copy of this software and associated documentation
+files (the "Software"), to deal in the Software without
+restriction, including without limitation the rights to use,
+copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the
+Software is furnished to do so, subject to the following
+conditions:
+
+The above copyright notice and this permission notice shall be
+included in all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+OTHER DEALINGS IN THE SOFTWARE.
| diff --git a/codec/src/test/java/io/netty/handler/codec/compression/FastLzIntegrationTest.java b/codec/src/test/java/io/netty/handler/codec/compression/FastLzIntegrationTest.java
new file mode 100644
index 00000000000..0df309372ac
--- /dev/null
+++ b/codec/src/test/java/io/netty/handler/codec/compression/FastLzIntegrationTest.java
@@ -0,0 +1,136 @@
+/*
+ * Copyright 2014 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License,
+ * version 2.0 (the "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+package io.netty.handler.codec.compression;
+
+import io.netty.buffer.ByteBuf;
+import io.netty.buffer.CompositeByteBuf;
+import io.netty.buffer.Unpooled;
+import io.netty.channel.embedded.EmbeddedChannel;
+import io.netty.util.ReferenceCountUtil;
+
+import static org.hamcrest.Matchers.*;
+import static org.junit.Assert.*;
+
+public class FastLzIntegrationTest extends IntegrationTest {
+
+ public static class TestWithChecksum extends IntegrationTest {
+
+ @Override
+ protected EmbeddedChannel createEncoderEmbeddedChannel() {
+ return new EmbeddedChannel(new FastLzFramedEncoder(true));
+ }
+
+ @Override
+ protected EmbeddedChannel createDecoderEmbeddedChannel() {
+ return new EmbeddedChannel(new FastLzFramedDecoder(true));
+ }
+ }
+
+ public static class TestRandomChecksum extends IntegrationTest {
+
+ @Override
+ protected EmbeddedChannel createEncoderEmbeddedChannel() {
+ return new EmbeddedChannel(new FastLzFramedEncoder(rand.nextBoolean()));
+ }
+
+ @Override
+ protected EmbeddedChannel createDecoderEmbeddedChannel() {
+ return new EmbeddedChannel(new FastLzFramedDecoder(rand.nextBoolean()));
+ }
+ }
+
+ @Override
+ protected EmbeddedChannel createEncoderEmbeddedChannel() {
+ return new EmbeddedChannel(new FastLzFramedEncoder(rand.nextBoolean()));
+ }
+
+ @Override
+ protected EmbeddedChannel createDecoderEmbeddedChannel() {
+ return new EmbeddedChannel(new FastLzFramedDecoder(rand.nextBoolean()));
+ }
+
+ @Override // test batched flow of data
+ protected void testIdentity(final byte[] data) {
+ final ByteBuf original = Unpooled.wrappedBuffer(data);
+ final EmbeddedChannel encoder = createEncoderEmbeddedChannel();
+ final EmbeddedChannel decoder = createDecoderEmbeddedChannel();
+
+ try {
+ int written = 0, length = rand.nextInt(100);
+ while (written + length < data.length) {
+ ByteBuf in = Unpooled.wrappedBuffer(data, written, length);
+ encoder.writeOutbound(in);
+ written += length;
+ length = rand.nextInt(100);
+ }
+ ByteBuf in = Unpooled.wrappedBuffer(data, written, data.length - written);
+ encoder.writeOutbound(in);
+ encoder.finish();
+
+ ByteBuf msg;
+ final CompositeByteBuf compressed = Unpooled.compositeBuffer();
+ while ((msg = encoder.readOutbound()) != null) {
+ compressed.addComponent(msg);
+ compressed.writerIndex(compressed.writerIndex() + msg.readableBytes());
+ }
+ assertThat(compressed, is(notNullValue()));
+
+ final byte[] compressedArray = new byte[compressed.readableBytes()];
+ compressed.readBytes(compressedArray);
+ written = 0;
+ length = rand.nextInt(100);
+ while (written + length < compressedArray.length) {
+ in = Unpooled.wrappedBuffer(compressedArray, written, length);
+ decoder.writeInbound(in);
+ written += length;
+ length = rand.nextInt(100);
+ }
+ in = Unpooled.wrappedBuffer(compressedArray, written, compressedArray.length - written);
+ decoder.writeInbound(in);
+
+ assertFalse(compressed.isReadable());
+ final CompositeByteBuf decompressed = Unpooled.compositeBuffer();
+ while ((msg = decoder.readInbound()) != null) {
+ decompressed.addComponent(msg);
+ decompressed.writerIndex(decompressed.writerIndex() + msg.readableBytes());
+ }
+ assertEquals(original, decompressed);
+
+ compressed.release();
+ decompressed.release();
+ original.release();
+ } finally {
+ encoder.close();
+ decoder.close();
+
+ for (;;) {
+ Object msg = encoder.readOutbound();
+ if (msg == null) {
+ break;
+ }
+ ReferenceCountUtil.release(msg);
+ }
+
+ for (;;) {
+ Object msg = decoder.readInbound();
+ if (msg == null) {
+ break;
+ }
+ ReferenceCountUtil.release(msg);
+ }
+ }
+ }
+}
| test | train | 2014-08-12T00:28:46 | 2014-08-07T21:59:54Z | idelpivnitskiy | val |
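The compress-or-store decision visible in the encoder diff above — a chunk is emitted as a `BLOCK_TYPE_COMPRESSED` block only when compression actually made it smaller, and is otherwise stored verbatim as a `BLOCK_TYPE_NON_COMPRESSED` block — can be sketched in isolation. This is a hypothetical stand-alone illustration, not Netty's `FastLzFramedEncoder`; the class name and constant values are illustrative only:

```java
// Hypothetical sketch of the block-type decision from the diff above:
// compression is only worth using when it actually shrinks the chunk.
class BlockTypeDemo {
    static final int BLOCK_TYPE_NON_COMPRESSED = 0x00;
    static final int BLOCK_TYPE_COMPRESSED = 0x01;

    /**
     * Returns the block type to use for a chunk of {@code length} input
     * bytes that compressed down to {@code compressedLength} bytes.
     */
    static int chooseBlockType(int length, int compressedLength) {
        // Equal size gains nothing, so fall back to the stored block too.
        return compressedLength < length ? BLOCK_TYPE_COMPRESSED
                                         : BLOCK_TYPE_NON_COMPRESSED;
    }

    public static void main(String[] args) {
        System.out.println(chooseBlockType(100, 60));  // compressible data
        System.out.println(chooseBlockType(100, 104)); // incompressible data
    }
}
```

Storing the raw bytes on incompressible input keeps the framed output bounded by the input size plus a small header, which is why the diff copies the input with `System.arraycopy` in the non-compressed branch.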
netty/netty/2741_2763 | netty/netty | netty/netty/2741 | netty/netty/2763 | [
"timestamp(timedelta=14.0, similarity=0.8792972262052686)"
] | f89907dba54bcd8d6d5d2c8f9badbec24b7e4ee0 | 45f407add67510ddda2dc7298da6505b13af1309 | [] | [
"no need for this.\n",
"just remove the @return and merge it with the previous line. Also please use {@code -1} to highlight it. \n",
"When there are two cases for return values, I find `@return` useful.\n\n```\nReturns the length of the input.\n@return the length of the input if the length of the input is know... | 2014-08-13T15:00:24Z | [
"feature"
] | Allow ChunkedInput to provide the progress information | Related issue: #2151
There is no way for `ChunkedWriteHandler` to know the progress of the transfer of a `ChunkedInput`. Therefore, `ChannelProgressiveFutureListener` cannot get exact information about the progress of the transfer.
If you add a few methods that optionally provide the transfer progress to `ChunkedInput`, it becomes possible for `ChunkedWriteHandler` to notify `ChannelProgressiveFutureListener`s.
If the input has no definite length, we can still use the progress so far, and consider the length of the input as 'undefined'.
| [
"codec-http/src/main/java/io/netty/handler/codec/http/HttpChunkedInput.java",
"codec-http/src/main/java/io/netty/handler/codec/http/multipart/HttpPostRequestEncoder.java",
"handler/src/main/java/io/netty/handler/stream/ChunkedFile.java",
"handler/src/main/java/io/netty/handler/stream/ChunkedInput.java",
"ha... | [
"codec-http/src/main/java/io/netty/handler/codec/http/HttpChunkedInput.java",
"codec-http/src/main/java/io/netty/handler/codec/http/multipart/HttpPostRequestEncoder.java",
"handler/src/main/java/io/netty/handler/stream/ChunkedFile.java",
"handler/src/main/java/io/netty/handler/stream/ChunkedInput.java",
"ha... | [
"handler/src/test/java/io/netty/handler/stream/ChunkedWriteHandlerTest.java"
] | diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/HttpChunkedInput.java b/codec-http/src/main/java/io/netty/handler/codec/http/HttpChunkedInput.java
index a6d317a1ff7..652ba6ae6fa 100644
--- a/codec-http/src/main/java/io/netty/handler/codec/http/HttpChunkedInput.java
+++ b/codec-http/src/main/java/io/netty/handler/codec/http/HttpChunkedInput.java
@@ -96,4 +96,14 @@ public HttpContent readChunk(ChannelHandlerContext ctx) throws Exception {
return new DefaultHttpContent(buf);
}
}
+
+ @Override
+ public long length() {
+ return input.length();
+ }
+
+ @Override
+ public long progress() {
+ return input.progress();
+ }
}
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/multipart/HttpPostRequestEncoder.java b/codec-http/src/main/java/io/netty/handler/codec/http/multipart/HttpPostRequestEncoder.java
index e984b5f5979..044acf449d4 100644
--- a/codec-http/src/main/java/io/netty/handler/codec/http/multipart/HttpPostRequestEncoder.java
+++ b/codec-http/src/main/java/io/netty/handler/codec/http/multipart/HttpPostRequestEncoder.java
@@ -244,11 +244,14 @@ public void cleanFiles() {
* While adding a FileUpload, is the multipart currently in Mixed Mode
*/
private boolean duringMixedMode;
-
/**
* Global Body size
*/
private long globalBodySize;
+ /**
+ * Global Transfer progress
+ */
+ private long globalProgress;
/**
* True if this request is a Multipart request
@@ -997,7 +1000,9 @@ public HttpContent readChunk(ChannelHandlerContext ctx) throws Exception {
if (isLastChunkSent) {
return null;
} else {
- return nextChunk();
+ HttpContent nextChunk = nextChunk();
+ globalProgress += nextChunk.content().readableBytes();
+ return nextChunk;
}
}
@@ -1083,6 +1088,16 @@ public boolean isEndOfInput() throws Exception {
return isLastChunkSent;
}
+ @Override
+ public long length() {
+ return isMultipart? globalBodySize : globalBodySize - 1;
+ }
+
+ @Override
+ public long progress() {
+ return globalProgress;
+ }
+
/**
* Exception when an error occurs while encoding
*/
diff --git a/handler/src/main/java/io/netty/handler/stream/ChunkedFile.java b/handler/src/main/java/io/netty/handler/stream/ChunkedFile.java
index aaa7c7bf960..69d1efb1419 100644
--- a/handler/src/main/java/io/netty/handler/stream/ChunkedFile.java
+++ b/handler/src/main/java/io/netty/handler/stream/ChunkedFile.java
@@ -161,4 +161,14 @@ public ByteBuf readChunk(ChannelHandlerContext ctx) throws Exception {
}
}
}
+
+ @Override
+ public long length() {
+ return endOffset - startOffset;
+ }
+
+ @Override
+ public long progress() {
+ return offset - startOffset;
+ }
}
diff --git a/handler/src/main/java/io/netty/handler/stream/ChunkedInput.java b/handler/src/main/java/io/netty/handler/stream/ChunkedInput.java
index fa6fd85322c..4c44bf9efc5 100644
--- a/handler/src/main/java/io/netty/handler/stream/ChunkedInput.java
+++ b/handler/src/main/java/io/netty/handler/stream/ChunkedInput.java
@@ -47,4 +47,16 @@ public interface ChunkedInput<B> {
*/
B readChunk(ChannelHandlerContext ctx) throws Exception;
+ /**
+ * Returns the length of the input.
+ * @return the length of the input if the length of the input is known.
+ * a negative value if the length of the input is unknown.
+ */
+ long length();
+
+ /**
+ * Returns current transfer progress.
+ */
+ long progress();
+
}
diff --git a/handler/src/main/java/io/netty/handler/stream/ChunkedNioFile.java b/handler/src/main/java/io/netty/handler/stream/ChunkedNioFile.java
index 6032644c090..dbb0521d4dc 100644
--- a/handler/src/main/java/io/netty/handler/stream/ChunkedNioFile.java
+++ b/handler/src/main/java/io/netty/handler/stream/ChunkedNioFile.java
@@ -172,4 +172,14 @@ public ByteBuf readChunk(ChannelHandlerContext ctx) throws Exception {
}
}
}
+
+ @Override
+ public long length() {
+ return endOffset - startOffset;
+ }
+
+ @Override
+ public long progress() {
+ return offset - startOffset;
+ }
}
diff --git a/handler/src/main/java/io/netty/handler/stream/ChunkedNioStream.java b/handler/src/main/java/io/netty/handler/stream/ChunkedNioStream.java
index f6dcc754baf..fd59e74477b 100644
--- a/handler/src/main/java/io/netty/handler/stream/ChunkedNioStream.java
+++ b/handler/src/main/java/io/netty/handler/stream/ChunkedNioStream.java
@@ -128,4 +128,14 @@ public ByteBuf readChunk(ChannelHandlerContext ctx) throws Exception {
}
}
}
+
+ @Override
+ public long length() {
+ return -1;
+ }
+
+ @Override
+ public long progress() {
+ return offset;
+ }
}
diff --git a/handler/src/main/java/io/netty/handler/stream/ChunkedStream.java b/handler/src/main/java/io/netty/handler/stream/ChunkedStream.java
index e50d4fbc575..bcaa6334537 100644
--- a/handler/src/main/java/io/netty/handler/stream/ChunkedStream.java
+++ b/handler/src/main/java/io/netty/handler/stream/ChunkedStream.java
@@ -120,4 +120,14 @@ public ByteBuf readChunk(ChannelHandlerContext ctx) throws Exception {
}
}
}
+
+ @Override
+ public long length() {
+ return -1;
+ }
+
+ @Override
+ public long progress() {
+ return offset;
+ }
}
diff --git a/handler/src/main/java/io/netty/handler/stream/ChunkedWriteHandler.java b/handler/src/main/java/io/netty/handler/stream/ChunkedWriteHandler.java
index 9f8ba6b1768..39bd22b109c 100644
--- a/handler/src/main/java/io/netty/handler/stream/ChunkedWriteHandler.java
+++ b/handler/src/main/java/io/netty/handler/stream/ChunkedWriteHandler.java
@@ -179,7 +179,7 @@ private void discard(Throwable cause) {
}
currentWrite.fail(cause);
} else {
- currentWrite.success();
+ currentWrite.success(in.length());
}
closeInput(in);
} catch (Exception e) {
@@ -253,7 +253,6 @@ private void doFlush(final ChannelHandlerContext ctx) throws Exception {
message = Unpooled.EMPTY_BUFFER;
}
- final int amount = amount(message);
ChannelFuture f = ctx.write(message);
if (endOfInput) {
this.currentWrite = null;
@@ -266,8 +265,8 @@ private void doFlush(final ChannelHandlerContext ctx) throws Exception {
f.addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) throws Exception {
- currentWrite.progress(amount);
- currentWrite.success();
+ currentWrite.progress(chunks.progress(), chunks.length());
+ currentWrite.success(chunks.length());
closeInput(chunks);
}
});
@@ -279,7 +278,7 @@ public void operationComplete(ChannelFuture future) throws Exception {
closeInput((ChunkedInput<?>) pendingMessage);
currentWrite.fail(future.cause());
} else {
- currentWrite.progress(amount);
+ currentWrite.progress(chunks.progress(), chunks.length());
}
}
});
@@ -291,7 +290,7 @@ public void operationComplete(ChannelFuture future) throws Exception {
closeInput((ChunkedInput<?>) pendingMessage);
currentWrite.fail(future.cause());
} else {
- currentWrite.progress(amount);
+ currentWrite.progress(chunks.progress(), chunks.length());
if (channel.isWritable()) {
resumeTransfer();
}
@@ -327,7 +326,6 @@ static void closeInput(ChunkedInput<?> chunks) {
private static final class PendingWrite {
final Object msg;
final ChannelPromise promise;
- private long progress;
PendingWrite(Object msg, ChannelPromise promise) {
this.msg = msg;
@@ -339,7 +337,7 @@ void fail(Throwable cause) {
promise.tryFailure(cause);
}
- void success() {
+ void success(long total) {
if (promise.isDone()) {
// No need to notify the progress or fulfill the promise because it's done already.
return;
@@ -347,27 +345,16 @@ void success() {
if (promise instanceof ChannelProgressivePromise) {
// Now we know what the total is.
- ((ChannelProgressivePromise) promise).tryProgress(progress, progress);
+ ((ChannelProgressivePromise) promise).tryProgress(total, total);
}
promise.trySuccess();
}
- void progress(int amount) {
- progress += amount;
+ void progress(long progress, long total) {
if (promise instanceof ChannelProgressivePromise) {
- ((ChannelProgressivePromise) promise).tryProgress(progress, -1);
+ ((ChannelProgressivePromise) promise).tryProgress(progress, total);
}
}
}
-
- private static int amount(Object msg) {
- if (msg instanceof ByteBuf) {
- return ((ByteBuf) msg).readableBytes();
- }
- if (msg instanceof ByteBufHolder) {
- return ((ByteBufHolder) msg).content().readableBytes();
- }
- return 1;
- }
}
| diff --git a/handler/src/test/java/io/netty/handler/stream/ChunkedWriteHandlerTest.java b/handler/src/test/java/io/netty/handler/stream/ChunkedWriteHandlerTest.java
index 54879929384..204e140f4b5 100644
--- a/handler/src/test/java/io/netty/handler/stream/ChunkedWriteHandlerTest.java
+++ b/handler/src/test/java/io/netty/handler/stream/ChunkedWriteHandlerTest.java
@@ -124,6 +124,16 @@ public ByteBuf readChunk(ChannelHandlerContext ctx) throws Exception {
done = true;
return buffer.duplicate().retain();
}
+
+ @Override
+ public long length() {
+ return -1;
+ }
+
+ @Override
+ public long progress() {
+ return 1;
+ }
};
final AtomicBoolean listenerNotified = new AtomicBoolean(false);
@@ -171,6 +181,16 @@ public Object readChunk(ChannelHandlerContext ctx) throws Exception {
done = true;
return 0;
}
+
+ @Override
+ public long length() {
+ return -1;
+ }
+
+ @Override
+ public long progress() {
+ return 1;
+ }
};
EmbeddedChannel ch = new EmbeddedChannel(new ChunkedWriteHandler());
| test | train | 2014-08-13T16:40:34 | 2014-08-05T20:46:21Z | trustin | val |
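The `length()`/`progress()` contract that the gold patch above adds to `ChunkedInput` — `length()` returns the total number of bytes, or a negative value when unknown, and `progress()` returns how many bytes have been transferred so far — can be illustrated with a self-contained sketch. The interface and class below are hypothetical stand-ins with no Netty dependency; the offset bookkeeping mirrors what the patch does in `ChunkedFile`/`ChunkedNioFile`:

```java
// Hypothetical, self-contained sketch of the length()/progress() contract.
import java.util.Arrays;

class ChunkedProgressDemo {

    interface SimpleChunkedInput {
        boolean isEndOfInput();
        byte[] readChunk();   // next chunk, or null when exhausted
        long length();        // total length, or -1 if unknown
        long progress();      // bytes transferred so far
    }

    /** Byte-array-backed input; mirrors ChunkedFile's offset bookkeeping. */
    static final class ByteArrayChunkedInput implements SimpleChunkedInput {
        private final byte[] data;
        private final int chunkSize;
        private int offset;

        ByteArrayChunkedInput(byte[] data, int chunkSize) {
            this.data = data;
            this.chunkSize = chunkSize;
        }

        @Override public boolean isEndOfInput() { return offset >= data.length; }

        @Override public byte[] readChunk() {
            if (isEndOfInput()) {
                return null;
            }
            int end = Math.min(offset + chunkSize, data.length);
            byte[] chunk = Arrays.copyOfRange(data, offset, end);
            offset = end; // advancing the offset is what drives progress()
            return chunk;
        }

        @Override public long length() { return data.length; }
        @Override public long progress() { return offset; }
    }

    public static void main(String[] args) {
        SimpleChunkedInput in = new ByteArrayChunkedInput(new byte[10], 4);
        while (!in.isEndOfInput()) {
            in.readChunk();
            System.out.println(in.progress() + "/" + in.length());
        }
    }
}
```

With both values available, a writer can report `(progress, total)` pairs to a progress listener after every chunk, which is exactly what the patched `ChunkedWriteHandler` does via `ChannelProgressivePromise.tryProgress(...)`. Inputs of unknown size (such as the stream-backed implementations in the patch) simply return `-1` from `length()`.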
netty/netty/2022_2790 | netty/netty | netty/netty/2022 | netty/netty/2790 | [
"timestamp(timedelta=16.0, similarity=0.9867336731736773)"
] | 075190f7114c734b515db7302b84b48952d748f5 | acbb12c257df2df732ec653ff678ea39bf85f3c5 | [
"@normanmaurer Maybe it is already done? It seems can be assigned by `ChannelConfig`. \n",
"@plucury this is used for all. I may need this per ChannelHandlerContext.\n",
"@normanmaurer Ah, I see. So we need keep a `ByteBufAllocator` instance in `AbstractChannelHandlerContext`, right?\n",
"I don't think we ... | [] | 2014-08-19T10:00:37Z | [
"feature"
] | Allow to set a specific ByteBufAllocator for a ChannelHandlerContext | Sometimes it would be useful to use a different ByteBufAllocator for different ChannelHandlerContexts (and thus for the handlers assigned to them).
Maybe we can make that possible.
| [
"transport/src/main/java/io/netty/channel/AbstractChannelHandlerContext.java",
"transport/src/main/java/io/netty/channel/ChannelHandlerContext.java"
] | [
"transport/src/main/java/io/netty/channel/AbstractChannelHandlerContext.java",
"transport/src/main/java/io/netty/channel/ChannelHandlerContext.java"
] | [] | diff --git a/transport/src/main/java/io/netty/channel/AbstractChannelHandlerContext.java b/transport/src/main/java/io/netty/channel/AbstractChannelHandlerContext.java
index ad914d05eed..9d5ddd437e0 100644
--- a/transport/src/main/java/io/netty/channel/AbstractChannelHandlerContext.java
+++ b/transport/src/main/java/io/netty/channel/AbstractChannelHandlerContext.java
@@ -210,6 +210,7 @@ private static boolean isSkippable(
private final DefaultChannelPipeline pipeline;
private final String name;
private boolean removed;
+ private volatile ByteBufAllocator allocator;
final int skipFlags;
@@ -228,6 +229,12 @@ private static boolean isSkippable(
AbstractChannelHandlerContext(
DefaultChannelPipeline pipeline, ChannelHandlerInvoker invoker, String name, int skipFlags) {
+ this(pipeline, invoker, name, skipFlags, null);
+ }
+
+ AbstractChannelHandlerContext(
+ DefaultChannelPipeline pipeline, ChannelHandlerInvoker invoker, String name, int skipFlags,
+ ByteBufAllocator allocator) {
if (name == null) {
throw new NullPointerException("name");
@@ -238,6 +245,7 @@ private static boolean isSkippable(
this.name = name;
this.invoker = invoker;
this.skipFlags = skipFlags;
+ this.allocator = allocator;
}
/** Invocation initiated by {@link DefaultChannelPipeline#teardownAll()}}. */
@@ -281,7 +289,7 @@ public ChannelPipeline pipeline() {
@Override
public ByteBufAllocator alloc() {
- return channel().config().getAllocator();
+ return allocator == null? channel().config().getAllocator() : allocator;
}
@Override
@@ -585,4 +593,13 @@ ChannelHandlerInvoker unwrapInvoker() {
}
return invoker;
}
+
+ @Override
+ public void setAllocator(ByteBufAllocator allocator) {
+ if (allocator == null) {
+ throw new NullPointerException("allocator");
+ }
+
+ this.allocator = allocator;
+ }
}
diff --git a/transport/src/main/java/io/netty/channel/ChannelHandlerContext.java b/transport/src/main/java/io/netty/channel/ChannelHandlerContext.java
index 6f204603250..ad98197a032 100644
--- a/transport/src/main/java/io/netty/channel/ChannelHandlerContext.java
+++ b/transport/src/main/java/io/netty/channel/ChannelHandlerContext.java
@@ -497,4 +497,10 @@ public interface ChannelHandlerContext extends AttributeMap {
*/
ChannelPromise voidPromise();
+ /**
+ * Assign {@link ByteBufAllocator}
+ * @param allocator
+ */
+ void setAllocator(ByteBufAllocator allocator);
+
}
| null | val | train | 2014-08-18T19:07:49 | 2013-11-30T12:57:37Z | normanmaurer | val |
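The fallback logic the patch above adds to `AbstractChannelHandlerContext` — `alloc()` prefers a context-specific allocator when one has been set and otherwise falls back to the channel-wide default — can be sketched without Netty. Everything below is a hypothetical stand-in: the `Allocator` interface plays the role of `ByteBufAllocator`, and `CHANNEL_DEFAULT` plays the role of `channel().config().getAllocator()`:

```java
// Hypothetical sketch of the per-context allocator fallback.
class ContextAllocatorDemo {
    interface Allocator { String name(); }

    // Stand-in for the channel-wide allocator from the ChannelConfig.
    static final Allocator CHANNEL_DEFAULT = () -> "channel-default";

    static final class Context {
        // null means "no per-context override"; volatile because, as in the
        // patch, the field may be set and read from different threads.
        private volatile Allocator allocator;

        Allocator alloc() {
            Allocator a = allocator;
            return a == null ? CHANNEL_DEFAULT : a;
        }

        void setAllocator(Allocator allocator) {
            if (allocator == null) {
                throw new NullPointerException("allocator");
            }
            this.allocator = allocator;
        }
    }

    public static void main(String[] args) {
        Context ctx = new Context();
        System.out.println(ctx.alloc().name()); // falls back to the default
        ctx.setAllocator(() -> "pooled-per-context");
        System.out.println(ctx.alloc().name()); // now uses the override
    }
}
```

Note that, like the patch, the setter rejects `null` rather than treating it as "clear the override", so once a context-specific allocator is installed there is no way back to the channel default through this API.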
netty/netty/2779_2802 | netty/netty | netty/netty/2779 | netty/netty/2802 | [
"timestamp(timedelta=2429.0, similarity=0.9154252787991187)"
] | 21bc279700928a568f1c75940f88fb27c43b9421 | 226bc9d49bba81c3f0866530f7ffd8960f870f55 | [
"The HTTP/1.x translation layer needs to be notified when ever a node's parent changes. The interface for `Http2Connection.Listener.streamPriorityChanged` lends itself to this but is not invoked for all parent change events. Should this be invoked in a different spot(s)? For example is the `DefaultHttp2OutboundF... | [
"@nmittler - I could use some feedback/advice from you here.\n",
"I think we might need to notify the listener in 2 passes: once before we make the changes and once after the changes are complete.\n\nNotifying the listener before the changes allows the flow controller to properly decrement pendingBytes for a node... | 2014-08-21T05:34:35Z | [] | HTTP/2 Priority Tree Event Notification | Through investigation and discussion in PR https://github.com/netty/netty/pull/2775 there needs to be more investigation into the current implementation of the `Http2Connection.Listener` interface with respect to tree restructuring and listener notification.
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowController.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Connection.java",
"codec-http2/src/main/java/io/netty/handler/cod... | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowController.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Connection.java",
"codec-http2/src/main/java/io/netty/handler/cod... | [
"codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionTest.java",
"codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowControllerTest.java"
] | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
index 8b4ec6f1feb..58dc6fe4285 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
@@ -15,22 +15,34 @@
package io.netty.handler.codec.http2;
+import static io.netty.handler.codec.http2.Http2CodecUtil.CONNECTION_STREAM_ID;
+import static io.netty.handler.codec.http2.Http2CodecUtil.DEFAULT_PRIORITY_WEIGHT;
+import static io.netty.handler.codec.http2.Http2CodecUtil.MAX_WEIGHT;
+import static io.netty.handler.codec.http2.Http2CodecUtil.MIN_WEIGHT;
+import static io.netty.handler.codec.http2.Http2CodecUtil.immediateRemovalPolicy;
+import static io.netty.handler.codec.http2.Http2Exception.format;
+import static io.netty.handler.codec.http2.Http2Exception.protocolError;
+import static io.netty.handler.codec.http2.Http2Stream.State.CLOSED;
+import static io.netty.handler.codec.http2.Http2Stream.State.HALF_CLOSED_LOCAL;
+import static io.netty.handler.codec.http2.Http2Stream.State.HALF_CLOSED_REMOTE;
+import static io.netty.handler.codec.http2.Http2Stream.State.IDLE;
+import static io.netty.handler.codec.http2.Http2Stream.State.OPEN;
+import static io.netty.handler.codec.http2.Http2Stream.State.RESERVED_LOCAL;
+import static io.netty.handler.codec.http2.Http2Stream.State.RESERVED_REMOTE;
import io.netty.handler.codec.http2.Http2StreamRemovalPolicy.Action;
import io.netty.util.collection.IntObjectHashMap;
import io.netty.util.collection.IntObjectMap;
import io.netty.util.collection.PrimitiveCollections;
+import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.HashSet;
import java.util.LinkedHashSet;
+import java.util.List;
import java.util.Set;
-import static io.netty.handler.codec.http2.Http2CodecUtil.*;
-import static io.netty.handler.codec.http2.Http2Exception.*;
-import static io.netty.handler.codec.http2.Http2Stream.State.*;
-
/**
* Simple implementation of {@link Http2Connection}.
*/
@@ -47,7 +59,8 @@ public class DefaultHttp2Connection implements Http2Connection {
/**
* Creates a connection with an immediate stream removal policy.
*
- * @param server whether or not this end-point is the server-side of the HTTP/2 connection.
+ * @param server
+ * whether or not this end-point is the server-side of the HTTP/2 connection.
*/
public DefaultHttp2Connection(boolean server) {
this(server, immediateRemovalPolicy());
@@ -56,11 +69,12 @@ public DefaultHttp2Connection(boolean server) {
/**
* Creates a new connection with the given settings.
*
- * @param server whether or not this end-point is the server-side of the HTTP/2 connection.
- * @param removalPolicy the policy to be used for removal of closed stream.
+ * @param server
+ * whether or not this end-point is the server-side of the HTTP/2 connection.
+ * @param removalPolicy
+ * the policy to be used for removal of closed stream.
*/
- public DefaultHttp2Connection(boolean server,
- Http2StreamRemovalPolicy removalPolicy) {
+ public DefaultHttp2Connection(boolean server, Http2StreamRemovalPolicy removalPolicy) {
if (removalPolicy == null) {
throw new NullPointerException("removalPolicy");
}
@@ -147,7 +161,7 @@ private void removeStream(DefaultStream stream) {
// Remove it from the map and priority tree.
streamMap.remove(stream.id());
- ((DefaultStream) stream.parent()).removeChild(stream);
+ stream.parent().removeChild(stream);
}
private void activate(DefaultStream stream) {
@@ -264,7 +278,7 @@ public final int totalChildWeights() {
}
@Override
- public final Http2Stream parent() {
+ public final DefaultStream parent() {
return parent;
}
@@ -307,12 +321,10 @@ public final Http2Stream child(int streamId) {
}
@Override
- public Http2Stream setPriority(int parentStreamId, short weight, boolean exclusive)
- throws Http2Exception {
+ public Http2Stream setPriority(int parentStreamId, short weight, boolean exclusive) throws Http2Exception {
if (weight < MIN_WEIGHT || weight > MAX_WEIGHT) {
throw new IllegalArgumentException(String.format(
- "Invalid weight: %d. Must be between %d and %d (inclusive).", weight,
- MIN_WEIGHT, MAX_WEIGHT));
+ "Invalid weight: %d. Must be between %d and %d (inclusive).", weight, MIN_WEIGHT, MAX_WEIGHT));
}
// Get the parent stream.
@@ -324,52 +336,23 @@ public Http2Stream setPriority(int parentStreamId, short weight, boolean exclusi
// Already have a priority. Re-prioritize the stream.
weight(weight);
- boolean needToRestructure = newParent.isDescendantOf(this);
- DefaultStream oldParent = (DefaultStream) parent();
- try {
- if (newParent == oldParent && !exclusive) {
- // No changes were made to the tree structure.
- return this;
- }
-
- // Break off the priority branch from it's current parent.
- oldParent.removeChildBranch(this);
-
- if (needToRestructure) {
- // Adding a circular dependency (priority<->newParent). Break off the new
- // parent's branch and add it above this priority.
- ((DefaultStream) newParent.parent()).removeChildBranch(newParent);
- oldParent.addChild(newParent, false);
- }
-
- // Add the priority under the new parent.
- newParent.addChild(this, exclusive);
- return this;
- } finally {
- // Notify observers.
- if (needToRestructure) {
- notifyPrioritySubtreeChanged(this, newParent);
+ if (newParent != parent() || exclusive) {
+ List<ParentChangedEvent> events = null;
+ if (newParent.isDescendantOf(this)) {
+ events = new ArrayList<ParentChangedEvent>(2 + (exclusive ? newParent.children().size() : 0));
+ parent.takeChild(newParent, false, events);
} else {
- notifyPriorityChanged(this, oldParent);
+ events = new ArrayList<ParentChangedEvent>(1 + (exclusive ? newParent.children().size() : 0));
}
+ newParent.takeChild(this, exclusive, events);
+ notifyParentChanged(events);
}
- }
-
- private void notifyPriorityChanged(Http2Stream stream, Http2Stream previousParent) {
- for (Listener listener : listeners) {
- listener.streamPriorityChanged(stream, previousParent);
- }
- }
- private void notifyPrioritySubtreeChanged(Http2Stream stream, Http2Stream subtreeRoot) {
- for (Listener listener : listeners) {
- listener.streamPrioritySubtreeChanged(stream, subtreeRoot);
- }
+ return this;
}
@Override
- public Http2Stream verifyState(Http2Error error, State... allowedStates)
- throws Http2Exception {
+ public Http2Stream verifyState(Http2Error error, State... allowedStates) throws Http2Exception {
for (State allowedState : allowedStates) {
if (state == allowedState) {
return this;
@@ -381,14 +364,14 @@ public Http2Stream verifyState(Http2Error error, State... allowedStates)
@Override
public Http2Stream openForPush() throws Http2Exception {
switch (state) {
- case RESERVED_LOCAL:
- state = HALF_CLOSED_REMOTE;
- break;
- case RESERVED_REMOTE:
- state = HALF_CLOSED_LOCAL;
- break;
- default:
- throw protocolError("Attempting to open non-reserved stream for push");
+ case RESERVED_LOCAL:
+ state = HALF_CLOSED_REMOTE;
+ break;
+ case RESERVED_REMOTE:
+ state = HALF_CLOSED_LOCAL;
+ break;
+ default:
+ throw protocolError("Attempting to open non-reserved stream for push");
}
activate(this);
return this;
@@ -423,15 +406,15 @@ private void deactivate(DefaultStream stream) {
@Override
public Http2Stream closeLocalSide() {
switch (state) {
- case OPEN:
- state = HALF_CLOSED_LOCAL;
- notifyHalfClosed(this);
- break;
- case HALF_CLOSED_LOCAL:
- break;
- default:
- close();
- break;
+ case OPEN:
+ state = HALF_CLOSED_LOCAL;
+ notifyHalfClosed(this);
+ break;
+ case HALF_CLOSED_LOCAL:
+ break;
+ default:
+ close();
+ break;
}
return this;
}
@@ -439,15 +422,15 @@ public Http2Stream closeLocalSide() {
@Override
public Http2Stream closeRemoteSide() {
switch (state) {
- case OPEN:
- state = HALF_CLOSED_REMOTE;
- notifyHalfClosed(this);
- break;
- case HALF_CLOSED_REMOTE:
- break;
- default:
- close();
- break;
+ case OPEN:
+ state = HALF_CLOSED_REMOTE;
+ notifyHalfClosed(this);
+ break;
+ case HALF_CLOSED_REMOTE:
+ break;
+ default:
+ close();
+ break;
}
return this;
}
@@ -469,15 +452,21 @@ public final boolean localSideOpen() {
}
final DefaultEndpoint createdBy() {
- return localEndpoint.createdStreamId(id)? localEndpoint : remoteEndpoint;
+ return localEndpoint.createdStreamId(id) ? localEndpoint : remoteEndpoint;
}
final void weight(short weight) {
- if (parent != null && weight != this.weight) {
- int delta = weight - this.weight;
- parent.totalChildWeights += delta;
+ if (weight != this.weight) {
+ if (parent != null) {
+ int delta = weight - this.weight;
+ parent.totalChildWeights += delta;
+ }
+ final short oldWeight = this.weight;
+ this.weight = weight;
+ for (Listener l : listeners) {
+ l.onWeightChanged(this, oldWeight);
+ }
}
- this.weight = weight;
}
final IntObjectMap<DefaultStream> removeAllChildren() {
@@ -492,54 +481,101 @@ final IntObjectMap<DefaultStream> removeAllChildren() {
}
/**
- * Adds a child to this priority. If exclusive is set, any children of this node are moved
- * to being dependent on the child.
+ * Adds a child to this priority. If exclusive is set, any children of this node are moved to being dependent on
+ * the child.
*/
- final void addChild(DefaultStream child, boolean exclusive) {
+ final void takeChild(DefaultStream child, boolean exclusive, List<ParentChangedEvent> events) {
+ DefaultStream oldParent = child.parent();
+ events.add(new ParentChangedEvent(child, oldParent));
+ notifyParentChanging(child, this);
+ child.parent = this;
+
if (exclusive) {
// If it was requested that this child be the exclusive dependency of this node,
// move any previous children to the child node, becoming grand children
// of this node.
for (DefaultStream grandchild : removeAllChildren().values(DefaultStream.class)) {
- child.addChild(grandchild, false);
+ child.takeChild(grandchild, false, events);
}
}
- child.parent = this;
if (children.put(child.id(), child) == null) {
totalChildWeights += child.weight();
}
+
+ if (oldParent != null && oldParent.children.remove(child.id()) != null) {
+ oldParent.totalChildWeights -= child.weight();
+ }
}
/**
- * Removes the child priority and moves any of its dependencies to being direct dependencies
- * on this node.
+ * Removes the child priority and moves any of its dependencies to being direct dependencies on this node.
*/
final void removeChild(DefaultStream child) {
if (children.remove(child.id()) != null) {
+ List<ParentChangedEvent> events = new ArrayList<ParentChangedEvent>(1 + child.children.size());
+ events.add(new ParentChangedEvent(child, child.parent()));
+ notifyParentChanging(child, null);
child.parent = null;
totalChildWeights -= child.weight();
// Move up any grand children to be directly dependent on this node.
for (DefaultStream grandchild : child.children.values(DefaultStream.class)) {
- addChild(grandchild, false);
+ takeChild(grandchild, false, events);
}
+
+ notifyParentChanged(events);
}
}
+ }
+
+ private static IntObjectMap<DefaultStream> newChildMap() {
+ return new IntObjectHashMap<DefaultStream>(4);
+ }
+
+ /**
+ * Allows a correlation to be made between a stream and its old parent before a parent change occurs
+ */
+ private static final class ParentChangedEvent {
+ private final Http2Stream stream;
+ private final Http2Stream oldParent;
/**
- * Removes the child priority but unlike {@link #removeChild}, leaves its branch unaffected.
+ * Create a new instance
+ * @param stream The stream which has had a parent change
+ * @param oldParent The previous parent
*/
- final void removeChildBranch(DefaultStream child) {
- if (children.remove(child.id()) != null) {
- child.parent = null;
- totalChildWeights -= child.weight();
+ public ParentChangedEvent(Http2Stream stream, Http2Stream oldParent) {
+ this.stream = stream;
+ this.oldParent = oldParent;
+ }
+
+ /**
+ * Notify the given listener of the tree change event
+ * @param l The listener to notify
+ */
+ public void notifyListener(Listener l) {
+ l.priorityTreeParentChanged(stream, oldParent);
+ }
+ }
+
+ /**
+ * Notify all listeners of the priority tree change events (in top-down order)
+ * @param events The events (in top-down order) which have changed
+ */
+ private void notifyParentChanged(List<ParentChangedEvent> events) {
+ for (int i = 0; i < events.size(); ++i) {
+ ParentChangedEvent event = events.get(i);
+ for (Listener l : listeners) {
+ event.notifyListener(l);
}
}
}
- private static IntObjectMap<DefaultStream> newChildMap() {
- return new IntObjectHashMap<DefaultStream>(4);
+ private void notifyParentChanging(Http2Stream stream, Http2Stream newParent) {
+ for (Listener l : listeners) {
+ l.priorityTreeParentChanging(stream, newParent);
+ }
}
/**
@@ -619,7 +655,7 @@ private final class DefaultEndpoint implements Endpoint {
public int nextStreamId() {
// For manually created client-side streams, 1 is reserved for HTTP upgrade, so
// start at 3.
- return nextStreamId > 1? nextStreamId : nextStreamId + 2;
+ return nextStreamId > 1 ? nextStreamId : nextStreamId + 2;
}
@Override
@@ -661,8 +697,7 @@ public boolean isServer() {
}
@Override
- public DefaultStream reservePushStream(int streamId, Http2Stream parent)
- throws Http2Exception {
+ public DefaultStream reservePushStream(int streamId, Http2Stream parent) throws Http2Exception {
if (parent == null) {
throw protocolError("Parent stream missing");
}
@@ -689,12 +724,15 @@ public DefaultStream reservePushStream(int streamId, Http2Stream parent)
private void addStream(DefaultStream stream) {
// Add the stream to the map and priority tree.
streamMap.put(stream.id(), stream);
- connectionStream.addChild(stream, false);
+ List<ParentChangedEvent> events = new ArrayList<ParentChangedEvent>(1);
+ connectionStream.takeChild(stream, false, events);
// Notify the observers of the event.
for (Listener listener : listeners) {
listener.streamAdded(stream);
}
+
+ notifyParentChanged(events);
}
@Override
@@ -732,7 +770,7 @@ public int lastStreamCreated() {
@Override
public int lastKnownStream() {
- return isGoAwayReceived()? lastKnownStream : lastStreamCreated;
+ return isGoAwayReceived() ? lastKnownStream : lastStreamCreated;
}
@Override
@@ -775,12 +813,11 @@ private void verifyStreamId(int streamId) throws Http2Exception {
throw protocolError("No more streams can be created on this connection");
}
if (streamId < nextStreamId) {
- throw protocolError("Request stream %d is behind the next expected stream %d",
- streamId, nextStreamId);
+ throw protocolError("Request stream %d is behind the next expected stream %d", streamId, nextStreamId);
}
if (!createdStreamId(streamId)) {
- throw protocolError("Request stream %d is not correct for %s connection", streamId,
- server ? "server" : "client");
+ throw protocolError("Request stream %d is not correct for %s connection", streamId, server ? "server"
+ : "client");
}
}
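The `takeChild`/`removeChild` reparenting logic added above can be sketched with a minimal standalone tree. This is illustrative only, not Netty's API: the `Node` class and event strings are stand-ins. It shows the two key behaviors of an exclusive take: the new parent's existing children are moved under the incoming child, and every reparenting is recorded as an event so listeners can be notified top-down once the tree is in steady state.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public final class PriorityTreeSketch {
    static final class Node {
        final int id;
        Node parent;
        final Map<Integer, Node> children = new LinkedHashMap<>();
        Node(int id) { this.id = id; }

        void takeChild(Node child, boolean exclusive, List<String> events) {
            Node oldParent = child.parent;
            // Record the event first; notification happens after the tree settles.
            events.add("parentChanged: " + child.id + " from "
                    + (oldParent == null ? "none" : oldParent.id));
            child.parent = this;
            if (exclusive) {
                // Existing children of this node become grandchildren via the new child.
                List<Node> grandchildren = new ArrayList<>(children.values());
                children.clear();
                for (Node g : grandchildren) {
                    child.takeChild(g, false, events);
                }
            }
            children.put(child.id, child);
            if (oldParent != null) {
                oldParent.children.remove(child.id);
            }
        }
    }

    public static List<String> demo() {
        Node root = new Node(0);
        Node a = new Node(1);
        Node b = new Node(3);
        List<String> events = new ArrayList<>();
        root.takeChild(a, false, events);
        root.takeChild(b, false, events);
        Node c = new Node(5);
        // Exclusive take: a and b are moved under c, producing two more events.
        root.takeChild(c, true, events);
        return events;
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

Deferring notification until after the structural changes (as the real `notifyParentChanged(events)` does) is what lets listeners observe a consistent tree.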
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowController.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowController.java
index 7fe9527f42d..815b7cafab4 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowController.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowController.java
@@ -39,8 +39,8 @@
public class DefaultHttp2OutboundFlowController implements Http2OutboundFlowController {
/**
- * A comparators that sorts priority nodes in ascending order by the amount of priority data
- * available for its subtree.
+ * A comparator that sorts priority nodes in ascending order by the amount of priority data available for its
+ * subtree.
*/
private static final Comparator<Http2Stream> DATA_WEIGHT = new Comparator<Http2Stream>() {
private static final int MAX_DATA_THRESHOLD = Integer.MAX_VALUE / 256;
@@ -76,8 +76,7 @@ public DefaultHttp2OutboundFlowController(Http2Connection connection, Http2Frame
this.frameWriter = frameWriter;
// Add a flow state for the connection.
- connection.connectionStream().outboundFlow(
- new OutboundFlowState(connection.connectionStream()));
+ connection.connectionStream().outboundFlow(new OutboundFlowState(connection.connectionStream()));
// Register for notification of new streams.
connection.addListener(new Http2ConnectionAdapter() {
@@ -104,19 +103,19 @@ public void streamInactive(Http2Stream stream) {
}
@Override
- public void streamPriorityChanged(Http2Stream stream, Http2Stream previousParent) {
- if (stream.parent() != previousParent) {
- // The parent changed, move the priority bytes to the new parent.
- int priorityBytes = state(stream).priorityBytes();
- state(previousParent).incrementPriorityBytes(-priorityBytes);
- state(stream.parent()).incrementPriorityBytes(priorityBytes);
+ public void priorityTreeParentChanged(Http2Stream stream, Http2Stream oldParent) {
+ Http2Stream parent = stream.parent();
+ if (parent != null) {
+ state(parent).incrementPriorityBytes(state(stream).priorityBytes());
}
}
@Override
- public void streamPrioritySubtreeChanged(Http2Stream stream, Http2Stream subtreeRoot) {
- // Reset the priority bytes for the entire subtree.
- resetSubtree(subtreeRoot);
+ public void priorityTreeParentChanging(Http2Stream stream, Http2Stream newParent) {
+ Http2Stream parent = stream.parent();
+ if (parent != null) {
+ state(parent).incrementPriorityBytes(-state(stream).priorityBytes());
+ }
}
});
}
@@ -216,8 +215,8 @@ private OutboundFlowState state(int streamId) {
}
/**
- * Attempts to get the {@link OutboundFlowState} for the given stream. If not available, raises
- * a {@code PROTOCOL_ERROR}.
+ * Attempts to get the {@link OutboundFlowState} for the given stream. If not available, raises a
+ * {@code PROTOCOL_ERROR}.
*/
private OutboundFlowState stateOrFail(int streamId) throws Http2Exception {
OutboundFlowState state = state(streamId);
@@ -243,26 +242,6 @@ private void flush() {
}
}
- /**
- * Resets the priority bytes for the given subtree following a restructuring of the priority
- * tree.
- */
- private void resetSubtree(Http2Stream subtree) {
- // Reset the state priority bytes for this node to its pending bytes and propagate the
- // delta required for this change up the tree. It's important to note that the total number
- // of priority bytes for this subtree hasn't changed. As we traverse the subtree we will
- // subtract off values from the parent of this tree, but we'll add them back later as we
- // traverse the rest of the subtree.
- OutboundFlowState state = state(subtree);
- int delta = state.pendingBytes - state.priorityBytes;
- state.incrementPriorityBytes(delta);
-
- // Now recurse this operation for each child.
- for (Http2Stream child : subtree.children()) {
- resetSubtree(child);
- }
- }
-
/**
* Writes as many pending bytes as possible, according to stream priority.
*/
@@ -279,10 +258,11 @@ private void writePendingBytes() throws Http2Exception {
}
/**
- * Recursively traverses the priority tree rooted at the given node. Attempts to write the
- * allowed bytes for the streams in this sub tree based on their weighted priorities.
+ * Recursively traverses the priority tree rooted at the given node. Attempts to write the allowed bytes for the
+ * streams in this sub tree based on their weighted priorities.
*
- * @param allowance an allowed number of bytes that may be written to the streams in this subtree
+ * @param allowance
+ * an allowed number of bytes that may be written to the streams in this subtree
*/
private void writeAllowedBytes(Http2Stream stream, int allowance) throws Http2Exception {
// Write the allowed bytes for this node. If not all of the allowance was used,
@@ -353,8 +333,7 @@ private void writeAllowedBytes(Http2Stream stream, int allowance) throws Http2Ex
int weight = next.weight();
// Determine the value (in bytes) of a single unit of weight.
- double dataToWeightRatio =
- min(unallocatedBytes, remainingWindow) / (double) remainingWeight;
+ double dataToWeightRatio = min(unallocatedBytes, remainingWindow) / (double) remainingWeight;
unallocatedBytes -= nextState.unallocatedPriorityBytes();
remainingWeight -= weight;
@@ -397,7 +376,7 @@ private void writeAllowedBytes(Http2Stream stream, int allowance) throws Http2Ex
/**
* The outbound flow control state for a single stream.
*/
- private final class OutboundFlowState implements FlowState {
+ final class OutboundFlowState implements FlowState {
private final Queue<Frame> pendingWriteQueue;
private final Http2Stream stream;
private int window = initialWindowSize;
@@ -405,7 +384,7 @@ private final class OutboundFlowState implements FlowState {
private int priorityBytes;
private int allocatedPriorityBytes;
- OutboundFlowState(Http2Stream stream) {
+ private OutboundFlowState(Http2Stream stream) {
this.stream = stream;
pendingWriteQueue = new ArrayDeque<Frame>(2);
}
@@ -416,13 +395,12 @@ public int window() {
}
/**
- * Increments the flow control window for this stream by the given delta and returns the new
- * value.
+ * Increments the flow control window for this stream by the given delta and returns the new value.
*/
- int incrementStreamWindow(int delta) throws Http2Exception {
+ private int incrementStreamWindow(int delta) throws Http2Exception {
if (delta > 0 && Integer.MAX_VALUE - delta < window) {
- throw new Http2StreamException(stream.id(), FLOW_CONTROL_ERROR,
- "Window size overflow for stream: " + stream.id());
+ throw new Http2StreamException(stream.id(), FLOW_CONTROL_ERROR, "Window size overflow for stream: "
+ + stream.id());
}
int previouslyStreamable = streamableBytes();
window += delta;
@@ -441,11 +419,10 @@ int writableWindow() {
}
/**
- * Returns the number of pending bytes for this node that will fit within the
- * {@link #window}. This is used for the priority algorithm to determine the aggregate total
- * for {@link #priorityBytes} at each node. Each node only takes into account it's stream
- * window so that when a change occurs to the connection window, these values need not
- * change (i.e. no tree traversal is required).
+ * Returns the number of pending bytes for this node that will fit within the {@link #window}. This is used for
+ * the priority algorithm to determine the aggregate total for {@link #priorityBytes} at each node. Each node
+ * only takes into account its stream window so that when a change occurs to the connection window, these
+ * values need not change (i.e. no tree traversal is required).
*/
int streamableBytes() {
return max(0, min(pendingBytes, window));
@@ -461,30 +438,28 @@ int priorityBytes() {
/**
* Used by the priority algorithm to allocate bytes to this stream.
*/
- void allocatePriorityBytes(int bytes) {
+ private void allocatePriorityBytes(int bytes) {
allocatedPriorityBytes += bytes;
}
/**
- * Used by the priority algorithm to get the intermediate allocation of bytes to this
- * stream.
+ * Used by the priority algorithm to get the intermediate allocation of bytes to this stream.
*/
int allocatedPriorityBytes() {
return allocatedPriorityBytes;
}
/**
- * Used by the priority algorithm to determine the number of writable bytes that have not
- * yet been allocated.
+ * Used by the priority algorithm to determine the number of writable bytes that have not yet been allocated.
*/
- int unallocatedPriorityBytes() {
+ private int unallocatedPriorityBytes() {
return priorityBytes - allocatedPriorityBytes;
}
/**
* Creates a new frame with the given values but does not add it to the pending queue.
*/
- Frame newFrame(ChannelHandlerContext ctx, ChannelPromise promise, ByteBuf data,
+ private Frame newFrame(ChannelHandlerContext ctx, ChannelPromise promise, ByteBuf data,
int padding, boolean endStream) {
return new Frame(ctx, new ChannelPromiseAggregator(promise), data, padding, endStream);
}
@@ -497,8 +472,7 @@ boolean hasFrame() {
}
/**
- * Returns the the head of the pending queue, or {@code null} if empty or the current window
- * size is zero.
+ * Returns the head of the pending queue, or {@code null} if empty or the current window size is zero.
*/
Frame peek() {
if (window > 0) {
@@ -510,23 +484,22 @@ Frame peek() {
/**
* Clears the pending queue and writes errors for each remaining frame.
*/
- void clear() {
+ private void clear() {
for (;;) {
Frame frame = pendingWriteQueue.poll();
if (frame == null) {
break;
}
- frame.writeError(format(STREAM_CLOSED,
- "Stream closed before write could take place"));
+ frame.writeError(format(STREAM_CLOSED, "Stream closed before write could take place"));
}
}
/**
- * Writes up to the number of bytes from the pending queue. May write less if limited by the
- * writable window, by the number of pending writes available, or because a frame does not
- * support splitting on arbitrary boundaries.
+ * Writes up to the number of bytes from the pending queue. May write less if limited by the writable window, by
+ * the number of pending writes available, or because a frame does not support splitting on arbitrary
+ * boundaries.
*/
- int writeBytes(int bytes) throws Http2Exception {
+ private int writeBytes(int bytes) throws Http2Exception {
int bytesWritten = 0;
if (!stream.localSideOpen()) {
return bytesWritten;
@@ -553,8 +526,7 @@ int writeBytes(int bytes) throws Http2Exception {
}
/**
- * Recursively increments the priority bytes for this branch in the priority tree starting
- * at the current node.
+ * Recursively increments the priority bytes for this branch in the priority tree starting at the current node.
*/
private void incrementPriorityBytes(int numBytes) {
if (numBytes != 0) {
@@ -604,9 +576,9 @@ void enqueue() {
}
/**
- * Increments the number of pending bytes for this node. If there was any change to the
- * number of bytes that fit into the stream window, then {@link #incrementPriorityBytes} to
- * recursively update this branch of the priority tree.
+ * Increments the number of pending bytes for this node. If there was any change to the number of bytes that
+ * fit into the stream window, then {@link #incrementPriorityBytes} is called to recursively update this
+ * branch of the priority tree.
*/
private void incrementPendingBytes(int numBytes) {
int previouslyStreamable = streamableBytes();
@@ -646,8 +618,8 @@ void write() throws Http2Exception {
}
/**
- * Discards this frame, writing an error. If this frame is in the pending queue, the
- * unwritten bytes are removed from this branch of the priority tree.
+ * Discards this frame, writing an error. If this frame is in the pending queue, the unwritten bytes are
+ * removed from this branch of the priority tree.
*/
void writeError(Http2Exception cause) {
decrementPendingBytes(data.readableBytes());
@@ -656,12 +628,13 @@ void writeError(Http2Exception cause) {
}
/**
- * Creates a new frame that is a view of this frame's data buffer starting at the
- * current read index with the given number of bytes. The reader index on the input
- * frame is then advanced by the number of bytes. The returned frame will not have
- * end-of-stream set and it will not be automatically placed in the pending queue.
+ * Creates a new frame that is a view of this frame's data buffer starting at the current read index with
+ * the given number of bytes. The reader index on the input frame is then advanced by the number of bytes.
+ * The returned frame will not have end-of-stream set and it will not be automatically placed in the pending
+ * queue.
*
- * @param maxBytes the maximum number of bytes that is allowed in the created frame.
+ * @param maxBytes
+ * the maximum number of bytes that is allowed in the created frame.
* @return the partial frame.
*/
Frame split(int maxBytes) {
@@ -673,8 +646,7 @@ Frame split(int maxBytes) {
}
/**
- * If this frame is in the pending queue, decrements the number of pending bytes for the
- * stream.
+ * If this frame is in the pending queue, decrements the number of pending bytes for the stream.
*/
void decrementPendingBytes(int bytes) {
if (enqueued) {
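The `writeAllowedBytes` loop in this file divides the remaining window among sibling streams in proportion to weight, visiting streams in ascending order of demand so that any share a lightly loaded stream cannot use spills over to the remaining peers. The following is a simplified standalone sketch of that allocation step, not Netty's actual code; the ceiling rounding and array-based API here are assumptions for illustration.

```java
import java.util.Arrays;
import java.util.Comparator;

public final class WeightedAllocSketch {
    // Allocate `window` bytes across streams in proportion to weight,
    // never giving a stream more than it has pending. Streams are visited
    // in ascending order of pending data so unused share is redistributed.
    public static int[] allocate(int[] pending, int[] weights, int window) {
        Integer[] order = new Integer[pending.length];
        for (int i = 0; i < order.length; i++) {
            order[i] = i;
        }
        Arrays.sort(order, Comparator.comparingInt(i -> pending[i]));

        int[] alloc = new int[pending.length];
        int remainingWeight = 0;
        for (int w : weights) {
            remainingWeight += w;
        }
        int remainingWindow = window;
        for (int idx : order) {
            // Value (in bytes) of a single unit of weight among remaining peers.
            double ratio = remainingWindow / (double) remainingWeight;
            int share = Math.min(pending[idx], (int) Math.ceil(ratio * weights[idx]));
            share = Math.min(share, remainingWindow);
            alloc[idx] = share;
            remainingWindow -= share;
            remainingWeight -= weights[idx];
        }
        return alloc;
    }

    public static void main(String[] args) {
        // Two equal-weight streams, one nearly idle: the busy one gets the slack.
        System.out.println(Arrays.toString(
                allocate(new int[]{5, 100}, new int[]{16, 16}, 60)));  // prints [5, 55]
    }
}
```

Recomputing the ratio each iteration is what makes the redistribution automatic: once the small stream takes its 5 bytes, the remaining 55 bytes are divided over the remaining weight.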
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Connection.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Connection.java
index 48c2167c27e..f8d9933164e 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Connection.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Connection.java
@@ -57,33 +57,30 @@ interface Listener {
void streamRemoved(Http2Stream stream);
/**
- * Notifies the listener that the priority for the stream has changed. The parent of the
- * stream may have changed, so the previous parent is also provided.
- * <p>
- * Either this method or {@link #streamPrioritySubtreeChanged} will be called, but not both
- * for a single change. This method is called for simple priority changes. If a priority
- * change causes a circular dependency between the stream and one of its descendants, the
- * subtree must be restructured causing {@link #streamPrioritySubtreeChanged} instead.
- *
- * @param stream the stream for which the priority has changed.
- * @param previousParent the previous parent of the stream. May be the same as its current
- * parent if unchanged.
+ * Notifies the listener that a priority tree parent change has occurred. Listeners are invoked in
+ * top-down order relative to the priority tree, and only after all tree structure changes have been
+ * made and the tree is in steady state relative to the priority change that caused the restructuring.
+ * @param stream The stream which had a parent change (new parent and children will be in steady state)
+ * @param oldParent The old parent which {@code stream} used to be a child of (may be {@code null})
*/
- void streamPriorityChanged(Http2Stream stream, Http2Stream previousParent);
+ void priorityTreeParentChanged(Http2Stream stream, Http2Stream oldParent);
/**
- * Called when a priority change for a stream creates a circular dependency between the
- * stream and one of its descendants. This requires a restructuring of the priority tree.
- * <p>
- * Either this method or {@link #streamPriorityChanged} will be called, but not both for a
- * single change. For simple changes that do not cause the tree to be restructured,
- * {@link #streamPriorityChanged} will be called instead.
- *
- * @param stream the stream for which the priority has changed, causing the tree to be
- * restructured.
- * @param subtreeRoot the new root of the subtree that has changed.
+ * Notifies the listener that a parent dependency is about to change. This is called while the tree
+ * is being restructured, so the tree structure is not necessarily in steady state.
+ * @param stream The stream whose parent is about to change to {@code newParent}
+ * @param newParent The stream which will be the parent of {@code stream}
+ */
+ void priorityTreeParentChanging(Http2Stream stream, Http2Stream newParent);
+
+ /**
+ * Notifies the listener that the weight has changed for {@code stream}
+ * @param stream The stream whose weight has changed
+ * @param oldWeight The old weight for {@code stream}
*/
- void streamPrioritySubtreeChanged(Http2Stream stream, Http2Stream subtreeRoot);
+ void onWeightChanged(Http2Stream stream, short oldWeight);
/**
* Called when a GO_AWAY frame has either been sent or received for the connection.
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionAdapter.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionAdapter.java
index d349d2847da..5ac64ee2e09 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionAdapter.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionAdapter.java
@@ -40,14 +40,18 @@ public void streamRemoved(Http2Stream stream) {
}
@Override
- public void streamPriorityChanged(Http2Stream stream, Http2Stream previousParent) {
+ public void goingAway() {
}
@Override
- public void streamPrioritySubtreeChanged(Http2Stream stream, Http2Stream subtreeRoot) {
+ public void priorityTreeParentChanged(Http2Stream stream, Http2Stream oldParent) {
}
@Override
- public void goingAway() {
+ public void priorityTreeParentChanging(Http2Stream stream, Http2Stream newParent) {
+ }
+
+ @Override
+ public void onWeightChanged(Http2Stream stream, short oldWeight) {
}
}
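The new listener pair is used by the outbound flow controller to keep per-node aggregate priority-byte counts consistent across a reparent: `priorityTreeParentChanging` subtracts the moving subtree's bytes from the old parent chain while the move is in flight, and `priorityTreeParentChanged` adds them to the new chain once the tree is steady, avoiding any full-tree traversal. A minimal standalone sketch of that bookkeeping follows; the string node ids and map-based tree are illustrative only, not Netty's types.

```java
import java.util.HashMap;
import java.util.Map;

public final class PriorityBytesSketch {
    final Map<String, String> parent = new HashMap<>();     // child -> parent
    final Map<String, Integer> priorityBytes = new HashMap<>();

    // Propagate a delta from a node up to the root of the tree.
    void addUpChain(String node, int delta) {
        for (String n = node; n != null; n = parent.get(n)) {
            priorityBytes.merge(n, delta, Integer::sum);
        }
    }

    // Analogue of priorityTreeParentChanging: remove the subtree's bytes
    // from the old parent chain before the move.
    void parentChanging(String stream) {
        String p = parent.get(stream);
        if (p != null) {
            addUpChain(p, -priorityBytes.getOrDefault(stream, 0));
        }
    }

    // Analogue of priorityTreeParentChanged: add the subtree's bytes
    // to the new parent chain after the move.
    void parentChanged(String stream, String newParent) {
        parent.put(stream, newParent);
        if (newParent != null) {
            addUpChain(newParent, priorityBytes.getOrDefault(stream, 0));
        }
    }

    public static int demo() {
        PriorityBytesSketch t = new PriorityBytesSketch();
        t.parent.put("A", "root");
        t.parent.put("B", "root");
        t.priorityBytes.put("A", 10);
        t.priorityBytes.put("root", 10);
        // Move A under B: root's total is unchanged, B's grows by A's bytes.
        t.parentChanging("A");
        t.parentChanged("A", "B");
        return t.priorityBytes.getOrDefault("B", 0);
    }

    public static void main(String[] args) {
        System.out.println(demo());  // prints 10
    }
}
```

Because the subtract-then-add happens per moved node, the old `resetSubtree` recursion in the flow controller becomes unnecessary, which is why this diff removes it.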
| diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionTest.java
index 827f947a1dc..46596cd3f96 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionTest.java
@@ -20,10 +20,24 @@
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertNull;
import static org.junit.Assert.assertTrue;
+import static org.mockito.Matchers.any;
+import static org.mockito.Matchers.anyShort;
+import static org.mockito.Matchers.eq;
+import static org.mockito.Matchers.isNull;
+import static org.mockito.Mockito.never;
+import static org.mockito.Mockito.reset;
+import static org.mockito.Mockito.times;
+import static org.mockito.Mockito.verify;
import io.netty.handler.codec.http2.Http2Stream.State;
+import java.util.Arrays;
+import java.util.List;
+
+import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;
+import org.mockito.ArgumentCaptor;
+import org.mockito.Mock;
import org.mockito.MockitoAnnotations;
/**
@@ -34,12 +48,16 @@ public class DefaultHttp2ConnectionTest {
private DefaultHttp2Connection server;
private DefaultHttp2Connection client;
+ @Mock
+ private Http2Connection.Listener clientListener;
+
@Before
public void setup() {
MockitoAnnotations.initMocks(this);
server = new DefaultHttp2Connection(true);
client = new DefaultHttp2Connection(false);
+ client.addListener(clientListener);
}
@Test(expected = Http2Exception.class)
@@ -258,6 +276,31 @@ public void insertExclusiveShouldAddNewLevel() throws Exception {
assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
}
+ @Test
+ public void weightChangeWithNoTreeChangeShouldNotifyListeners() throws Http2Exception {
+ Http2Stream streamA = client.local().createStream(1, false);
+ Http2Stream streamB = client.local().createStream(3, false);
+ Http2Stream streamC = client.local().createStream(5, false);
+ Http2Stream streamD = client.local().createStream(7, false);
+
+ streamB.setPriority(streamA.id(), DEFAULT_PRIORITY_WEIGHT, false);
+ streamC.setPriority(streamA.id(), DEFAULT_PRIORITY_WEIGHT, false);
+ streamD.setPriority(streamA.id(), DEFAULT_PRIORITY_WEIGHT, true);
+
+ assertEquals(4, client.numActiveStreams());
+
+ short oldWeight = streamD.weight();
+ short newWeight = (short) (oldWeight + 1);
+ reset(clientListener);
+ streamD.setPriority(streamD.parent().id(), newWeight, false);
+ verify(clientListener).onWeightChanged(eq(streamD), eq(oldWeight));
+ Assert.assertEquals(newWeight, streamD.weight());
+ verify(clientListener, never()).priorityTreeParentChanging(any(Http2Stream.class),
+ any(Http2Stream.class));
+ verify(clientListener, never()).priorityTreeParentChanged(any(Http2Stream.class),
+ any(Http2Stream.class));
+ }
+
@Test
public void removeShouldRestructureTree() throws Exception {
Http2Stream streamA = client.local().createStream(1, false);
@@ -299,24 +342,55 @@ public void removeShouldRestructureTree() throws Exception {
@Test
public void circularDependencyShouldRestructureTree() throws Exception {
- // Using example from http://tools.ietf.org/html/draft-ietf-httpbis-http2-12#section-5.3.3
+ // Using example from http://tools.ietf.org/html/draft-ietf-httpbis-http2-14#section-5.3.3
+ // Initialize all the nodes
Http2Stream streamA = client.local().createStream(1, false);
+ verifyParentChanged(streamA, null);
Http2Stream streamB = client.local().createStream(3, false);
+ verifyParentChanged(streamB, null);
Http2Stream streamC = client.local().createStream(5, false);
+ verifyParentChanged(streamC, null);
Http2Stream streamD = client.local().createStream(7, false);
+ verifyParentChanged(streamD, null);
Http2Stream streamE = client.local().createStream(9, false);
+ verifyParentChanged(streamE, null);
Http2Stream streamF = client.local().createStream(11, false);
+ verifyParentChanged(streamF, null);
+ // Build the tree
streamB.setPriority(streamA.id(), DEFAULT_PRIORITY_WEIGHT, false);
+ verify(clientListener, never()).onWeightChanged(eq(streamB), anyShort());
+ verifyParentChanged(streamB, client.connectionStream());
+ verifyParentChanging(streamB, client.connectionStream());
+
streamC.setPriority(streamA.id(), DEFAULT_PRIORITY_WEIGHT, false);
+ verify(clientListener, never()).onWeightChanged(eq(streamC), anyShort());
+ verifyParentChanged(streamC, client.connectionStream());
+ verifyParentChanging(streamC, client.connectionStream());
+
streamD.setPriority(streamC.id(), DEFAULT_PRIORITY_WEIGHT, false);
+ verify(clientListener, never()).onWeightChanged(eq(streamD), anyShort());
+ verifyParentChanged(streamD, client.connectionStream());
+ verifyParentChanging(streamD, client.connectionStream());
+
streamE.setPriority(streamC.id(), DEFAULT_PRIORITY_WEIGHT, false);
+ verify(clientListener, never()).onWeightChanged(eq(streamE), anyShort());
+ verifyParentChanged(streamE, client.connectionStream());
+ verifyParentChanging(streamE, client.connectionStream());
+
streamF.setPriority(streamD.id(), DEFAULT_PRIORITY_WEIGHT, false);
+ verify(clientListener, never()).onWeightChanged(eq(streamF), anyShort());
+ verifyParentChanged(streamF, client.connectionStream());
+ verifyParentChanging(streamF, client.connectionStream());
assertEquals(6, client.numActiveStreams());
// Non-exclusive re-prioritization of a->d.
+ reset(clientListener);
streamA.setPriority(streamD.id(), DEFAULT_PRIORITY_WEIGHT, false);
+ verify(clientListener, never()).onWeightChanged(eq(streamA), anyShort());
+ verifyParentChanging(Arrays.asList(streamD, streamA), Arrays.asList(client.connectionStream(), streamD));
+ verifyParentsChanged(Arrays.asList(streamD, streamA), Arrays.asList(streamC, client.connectionStream()));
// Level 0
Http2Stream p = client.connectionStream();
@@ -358,26 +432,57 @@ public void circularDependencyShouldRestructureTree() throws Exception {
@Test
public void circularDependencyWithExclusiveShouldRestructureTree() throws Exception {
- // Using example from http://tools.ietf.org/html/draft-ietf-httpbis-http2-12#section-5.3.3
- // Although the expected output for the exclusive case has an error in the document. The
- // final dependency of C should be E (not F). This is fixed here.
+ // Using example from http://tools.ietf.org/html/draft-ietf-httpbis-http2-14#section-5.3.3
+ // Initialize all the nodes
Http2Stream streamA = client.local().createStream(1, false);
+ verifyParentChanged(streamA, null);
Http2Stream streamB = client.local().createStream(3, false);
+ verifyParentChanged(streamB, null);
Http2Stream streamC = client.local().createStream(5, false);
+ verifyParentChanged(streamC, null);
Http2Stream streamD = client.local().createStream(7, false);
+ verifyParentChanged(streamD, null);
Http2Stream streamE = client.local().createStream(9, false);
+ verifyParentChanged(streamE, null);
Http2Stream streamF = client.local().createStream(11, false);
+ verifyParentChanged(streamF, null);
+ // Build the tree
streamB.setPriority(streamA.id(), DEFAULT_PRIORITY_WEIGHT, false);
+ verify(clientListener, never()).onWeightChanged(eq(streamB), anyShort());
+ verifyParentChanged(streamB, client.connectionStream());
+ verifyParentChanging(streamB, client.connectionStream());
+
streamC.setPriority(streamA.id(), DEFAULT_PRIORITY_WEIGHT, false);
+ verify(clientListener, never()).onWeightChanged(eq(streamC), anyShort());
+ verifyParentChanged(streamC, client.connectionStream());
+ verifyParentChanging(streamC, client.connectionStream());
+
streamD.setPriority(streamC.id(), DEFAULT_PRIORITY_WEIGHT, false);
+ verify(clientListener, never()).onWeightChanged(eq(streamD), anyShort());
+ verifyParentChanged(streamD, client.connectionStream());
+ verifyParentChanging(streamD, client.connectionStream());
+
streamE.setPriority(streamC.id(), DEFAULT_PRIORITY_WEIGHT, false);
+ verify(clientListener, never()).onWeightChanged(eq(streamE), anyShort());
+ verifyParentChanged(streamE, client.connectionStream());
+ verifyParentChanging(streamE, client.connectionStream());
+
streamF.setPriority(streamD.id(), DEFAULT_PRIORITY_WEIGHT, false);
+ verify(clientListener, never()).onWeightChanged(eq(streamF), anyShort());
+ verifyParentChanged(streamF, client.connectionStream());
+ verifyParentChanging(streamF, client.connectionStream());
assertEquals(6, client.numActiveStreams());
// Exclusive re-prioritization of a->d.
+ reset(clientListener);
streamA.setPriority(streamD.id(), DEFAULT_PRIORITY_WEIGHT, true);
+ verify(clientListener, never()).onWeightChanged(eq(streamA), anyShort());
+ verifyParentChanging(Arrays.asList(streamD, streamA, streamF),
+ Arrays.asList(client.connectionStream(), streamD, streamA));
+ verifyParentsChanged(Arrays.asList(streamD, streamA, streamF),
+ Arrays.asList(streamC, client.connectionStream(), streamD));
// Level 0
Http2Stream p = client.connectionStream();
@@ -416,4 +521,49 @@ public void circularDependencyWithExclusiveShouldRestructureTree() throws Except
assertEquals(0, p.numChildren());
assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
}
+
+ private void verifyParentChanging(List<Http2Stream> expectedArg1, List<Http2Stream> expectedArg2) {
+ Assert.assertTrue(expectedArg1.size() == expectedArg2.size());
+ ArgumentCaptor<Http2Stream> arg1Captor = ArgumentCaptor.forClass(Http2Stream.class);
+ ArgumentCaptor<Http2Stream> arg2Captor = ArgumentCaptor.forClass(Http2Stream.class);
+ verify(clientListener, times(expectedArg1.size())).priorityTreeParentChanging(arg1Captor.capture(),
+ arg2Captor.capture());
+ List<Http2Stream> capturedArg1 = arg1Captor.getAllValues();
+ List<Http2Stream> capturedArg2 = arg2Captor.getAllValues();
+ Assert.assertTrue(capturedArg1.size() == capturedArg2.size());
+ Assert.assertTrue(capturedArg1.size() == expectedArg1.size());
+ for (int i = 0; i < capturedArg1.size(); ++i) {
+ Assert.assertEquals(expectedArg1.get(i), capturedArg1.get(i));
+ Assert.assertEquals(expectedArg2.get(i), capturedArg2.get(i));
+ }
+ }
+
+ private void verifyParentsChanged(List<Http2Stream> expectedArg1, List<Http2Stream> expectedArg2) {
+ Assert.assertTrue(expectedArg1.size() == expectedArg2.size());
+ ArgumentCaptor<Http2Stream> arg1Captor = ArgumentCaptor.forClass(Http2Stream.class);
+ ArgumentCaptor<Http2Stream> arg2Captor = ArgumentCaptor.forClass(Http2Stream.class);
+ verify(clientListener, times(expectedArg1.size())).priorityTreeParentChanged(arg1Captor.capture(),
+ arg2Captor.capture());
+ List<Http2Stream> capturedArg1 = arg1Captor.getAllValues();
+ List<Http2Stream> capturedArg2 = arg2Captor.getAllValues();
+ Assert.assertTrue(capturedArg1.size() == capturedArg2.size());
+ Assert.assertTrue(capturedArg1.size() == expectedArg1.size());
+ for (int i = 0; i < capturedArg1.size(); ++i) {
+ Assert.assertEquals(expectedArg1.get(i), capturedArg1.get(i));
+ Assert.assertEquals(expectedArg2.get(i), capturedArg2.get(i));
+ }
+ }
+
+ @SuppressWarnings("unchecked")
+ private static <T> T streamEq(T stream) {
+ return (T) (stream == null ? isNull(Http2Stream.class) : eq(stream));
+ }
+
+ private void verifyParentChanging(Http2Stream stream, Http2Stream newParent) {
+ verify(clientListener).priorityTreeParentChanging(streamEq(stream), streamEq(newParent));
+ }
+
+ private void verifyParentChanged(Http2Stream stream, Http2Stream oldParent) {
+ verify(clientListener).priorityTreeParentChanged(streamEq(stream), streamEq(oldParent));
+ }
}
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowControllerTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowControllerTest.java
index efa647443e8..30930507844 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowControllerTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowControllerTest.java
@@ -31,6 +31,12 @@
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelPromise;
+import io.netty.handler.codec.http2.DefaultHttp2OutboundFlowController.OutboundFlowState;
+import io.netty.util.collection.IntObjectHashMap;
+import io.netty.util.collection.IntObjectMap;
+
+import java.util.Arrays;
+import java.util.List;
import org.junit.Before;
import org.junit.Test;
@@ -46,6 +52,7 @@ public class DefaultHttp2OutboundFlowControllerTest {
private static final int STREAM_B = 3;
private static final int STREAM_C = 5;
private static final int STREAM_D = 7;
+ private static final int STREAM_E = 9;
private DefaultHttp2OutboundFlowController controller;
@@ -193,8 +200,7 @@ public void connectionWindowUpdateShouldSendFrame() throws Http2Exception {
@Test
public void connectionWindowUpdateShouldSendPartialFrame() throws Http2Exception {
// Set the connection window size to zero.
- controller
- .updateOutboundWindowSize(CONNECTION_STREAM_ID, -DEFAULT_WINDOW_SIZE);
+ controller.updateOutboundWindowSize(CONNECTION_STREAM_ID, -DEFAULT_WINDOW_SIZE);
ByteBuf data = dummyData(10);
send(STREAM_A, data);
@@ -252,8 +258,8 @@ public void streamWindowUpdateShouldSendPartialFrame() throws Http2Exception {
}
/**
- * In this test, we block A which allows bytes to be written by C and D. Here's a view of the
- * tree (stream A is blocked).
+ * In this test, we block A which allows bytes to be written by C and D. Here's a view of the tree (stream A is
+ * blocked).
*
* <pre>
* 0
@@ -303,8 +309,8 @@ public void blockedStreamShouldSpreadDataToChildren() throws Http2Exception {
}
/**
- * In this test, we block B which allows all bytes to be written by A. A should not share the
- * data with its children since it's not blocked.
+ * In this test, we block B which allows all bytes to be written by A. A should not share the data with its children
+ * since it's not blocked.
*
* <pre>
* 0
@@ -345,8 +351,8 @@ public void childrenShouldNotSendDataUntilParentBlocked() throws Http2Exception
}
/**
- * In this test, we block B which allows all bytes to be written by A. Once A is blocked, it
- * will spill over the remaining of its portion to its children.
+ * In this test, we block B which allows all bytes to be written by A. Once A is blocked, it will spill over the
+ * remaining of its portion to its children.
*
* <pre>
* 0
@@ -405,8 +411,8 @@ public void parentShouldWaterFallDataToChildren() throws Http2Exception {
* C D
* </pre>
*
- * We then re-prioritize D so that it's directly off of the connection and verify that A and D
- * split the written bytes between them.
+ * We then re-prioritize D so that it's directly off of the connection and verify that A and D split the written
+ * bytes between them.
*
* <pre>
* 0
@@ -452,8 +458,8 @@ public void reprioritizeShouldAdjustOutboundFlow() throws Http2Exception {
}
/**
- * In this test, we root all streams at the connection, and then verify that data is split
- * appropriately based on weight (all available data is the same).
+ * In this test, we root all streams at the connection, and then verify that data is split appropriately based on
+ * weight (all available data is the same).
*
* <pre>
* 0
@@ -515,8 +521,8 @@ public void writeShouldPreferHighestWeight() throws Http2Exception {
}
/**
- * In this test, we root all streams at the connection, and then verify that data is split
- * equally among the stream, since they all have the same weight.
+ * In this test, we root all streams at the connection, and then verify that data is split equally among the stream,
+ * since they all have the same weight.
*
* <pre>
* 0
@@ -567,6 +573,277 @@ public void samePriorityShouldWriteEqualData() throws Http2Exception {
assertEquals(333, dWritten);
}
+ /**
+ * In this test, we block all streams and verify the priority bytes for each sub tree at each node are correct
+ *
+ * <pre>
+ * [0]
+ * / \
+ * A B
+ * / \
+ * C D
+ * </pre>
+ */
+ @Test
+ public void subTreeBytesShouldBeCorrect() throws Http2Exception {
+ // Block the connection
+ controller.updateOutboundWindowSize(CONNECTION_STREAM_ID, -DEFAULT_WINDOW_SIZE);
+
+ Http2Stream stream0 = connection.connectionStream();
+ Http2Stream streamA = connection.stream(STREAM_A);
+ Http2Stream streamB = connection.stream(STREAM_B);
+ Http2Stream streamC = connection.stream(STREAM_C);
+ Http2Stream streamD = connection.stream(STREAM_D);
+
+ // Send a bunch of data on each stream.
+ IntObjectMap<Integer> streamSizes = new IntObjectHashMap<Integer>(4);
+ streamSizes.put(STREAM_A, 400);
+ streamSizes.put(STREAM_B, 500);
+ streamSizes.put(STREAM_C, 600);
+ streamSizes.put(STREAM_D, 700);
+ send(STREAM_A, dummyData(streamSizes.get(STREAM_A)));
+ send(STREAM_B, dummyData(streamSizes.get(STREAM_B)));
+ send(STREAM_C, dummyData(streamSizes.get(STREAM_C)));
+ send(STREAM_D, dummyData(streamSizes.get(STREAM_D)));
+ verifyNoWrite(STREAM_A);
+ verifyNoWrite(STREAM_B);
+ verifyNoWrite(STREAM_C);
+ verifyNoWrite(STREAM_D);
+
+ OutboundFlowState state = state(stream0);
+ assertEquals(calculateStreamSizeSum(streamSizes,
+ Arrays.asList(STREAM_A, STREAM_B, STREAM_C, STREAM_D)), state.priorityBytes());
+ state = state(streamA);
+ assertEquals(calculateStreamSizeSum(streamSizes,
+ Arrays.asList(STREAM_A, STREAM_C, STREAM_D)), state.priorityBytes());
+ state = state(streamB);
+ assertEquals(calculateStreamSizeSum(streamSizes,
+ Arrays.asList(STREAM_B)), state.priorityBytes());
+ state = state(streamC);
+ assertEquals(calculateStreamSizeSum(streamSizes,
+ Arrays.asList(STREAM_C)), state.priorityBytes());
+ state = state(streamD);
+ assertEquals(calculateStreamSizeSum(streamSizes,
+ Arrays.asList(STREAM_D)), state.priorityBytes());
+ }
+
+ /**
+ * In this test, we block all streams shift the priority tree and verify priority bytes for each subtree are correct
+ *
+ * <pre>
+ * [0]
+ * / \
+ * A B
+ * / \
+ * C D
+ * </pre>
+ *
+ * After the tree shift:
+ * <pre>
+ * [0]
+ * |
+ * A
+ * |
+ * B
+ * / \
+ * C D
+ * </pre>
+ */
+ @Test
+ public void subTreeBytesShouldBeCorrectWithRestructure() throws Http2Exception {
+ // Block the connection
+ controller.updateOutboundWindowSize(CONNECTION_STREAM_ID, -DEFAULT_WINDOW_SIZE);
+
+ Http2Stream stream0 = connection.connectionStream();
+ Http2Stream streamA = connection.stream(STREAM_A);
+ Http2Stream streamB = connection.stream(STREAM_B);
+ Http2Stream streamC = connection.stream(STREAM_C);
+ Http2Stream streamD = connection.stream(STREAM_D);
+
+ // Send a bunch of data on each stream.
+ IntObjectMap<Integer> streamSizes = new IntObjectHashMap<Integer>(4);
+ streamSizes.put(STREAM_A, 400);
+ streamSizes.put(STREAM_B, 500);
+ streamSizes.put(STREAM_C, 600);
+ streamSizes.put(STREAM_D, 700);
+ send(STREAM_A, dummyData(streamSizes.get(STREAM_A)));
+ send(STREAM_B, dummyData(streamSizes.get(STREAM_B)));
+ send(STREAM_C, dummyData(streamSizes.get(STREAM_C)));
+ send(STREAM_D, dummyData(streamSizes.get(STREAM_D)));
+ verifyNoWrite(STREAM_A);
+ verifyNoWrite(STREAM_B);
+ verifyNoWrite(STREAM_C);
+ verifyNoWrite(STREAM_D);
+
+ streamB.setPriority(STREAM_A, DEFAULT_PRIORITY_WEIGHT, true);
+ OutboundFlowState state = state(stream0);
+ assertEquals(calculateStreamSizeSum(streamSizes,
+ Arrays.asList(STREAM_A, STREAM_B, STREAM_C, STREAM_D)), state.priorityBytes());
+ state = state(streamA);
+ assertEquals(calculateStreamSizeSum(streamSizes,
+ Arrays.asList(STREAM_A, STREAM_B, STREAM_C, STREAM_D)), state.priorityBytes());
+ state = state(streamB);
+ assertEquals(calculateStreamSizeSum(streamSizes,
+ Arrays.asList(STREAM_B, STREAM_C, STREAM_D)), state.priorityBytes());
+ state = state(streamC);
+ assertEquals(calculateStreamSizeSum(streamSizes,
+ Arrays.asList(STREAM_C)), state.priorityBytes());
+ state = state(streamD);
+ assertEquals(calculateStreamSizeSum(streamSizes,
+ Arrays.asList(STREAM_D)), state.priorityBytes());
+ }
+
+ /**
+ * In this test, we block all streams and add a node to the priority tree and verify
+ *
+ * <pre>
+ * [0]
+ * / \
+ * A B
+ * / \
+ * C D
+ * </pre>
+ *
+ * After the tree shift:
+ * <pre>
+ * [0]
+ * / \
+ * A B
+ * |
+ * E
+ * / \
+ * C D
+ * </pre>
+ */
+ @Test
+ public void subTreeBytesShouldBeCorrectWithAddition() throws Http2Exception {
+ // Block the connection
+ controller.updateOutboundWindowSize(CONNECTION_STREAM_ID, -DEFAULT_WINDOW_SIZE);
+
+ Http2Stream stream0 = connection.connectionStream();
+ Http2Stream streamA = connection.stream(STREAM_A);
+ Http2Stream streamB = connection.stream(STREAM_B);
+ Http2Stream streamC = connection.stream(STREAM_C);
+ Http2Stream streamD = connection.stream(STREAM_D);
+
+ Http2Stream streamE = connection.local().createStream(STREAM_E, false);
+ streamE.setPriority(STREAM_A, DEFAULT_PRIORITY_WEIGHT, true);
+
+ // Send a bunch of data on each stream.
+ IntObjectMap<Integer> streamSizes = new IntObjectHashMap<Integer>(4);
+ streamSizes.put(STREAM_A, 400);
+ streamSizes.put(STREAM_B, 500);
+ streamSizes.put(STREAM_C, 600);
+ streamSizes.put(STREAM_D, 700);
+ streamSizes.put(STREAM_E, 900);
+ send(STREAM_A, dummyData(streamSizes.get(STREAM_A)));
+ send(STREAM_B, dummyData(streamSizes.get(STREAM_B)));
+ send(STREAM_C, dummyData(streamSizes.get(STREAM_C)));
+ send(STREAM_D, dummyData(streamSizes.get(STREAM_D)));
+ send(STREAM_E, dummyData(streamSizes.get(STREAM_E)));
+ verifyNoWrite(STREAM_A);
+ verifyNoWrite(STREAM_B);
+ verifyNoWrite(STREAM_C);
+ verifyNoWrite(STREAM_D);
+ verifyNoWrite(STREAM_E);
+
+ OutboundFlowState state = state(stream0);
+ assertEquals(calculateStreamSizeSum(streamSizes,
+ Arrays.asList(STREAM_A, STREAM_B, STREAM_C, STREAM_D, STREAM_E)), state.priorityBytes());
+ state = state(streamA);
+ assertEquals(calculateStreamSizeSum(streamSizes,
+ Arrays.asList(STREAM_A, STREAM_E, STREAM_C, STREAM_D)), state.priorityBytes());
+ state = state(streamB);
+ assertEquals(calculateStreamSizeSum(streamSizes,
+ Arrays.asList(STREAM_B)), state.priorityBytes());
+ state = state(streamC);
+ assertEquals(calculateStreamSizeSum(streamSizes,
+ Arrays.asList(STREAM_C)), state.priorityBytes());
+ state = state(streamD);
+ assertEquals(calculateStreamSizeSum(streamSizes,
+ Arrays.asList(STREAM_D)), state.priorityBytes());
+ state = state(streamE);
+ assertEquals(calculateStreamSizeSum(streamSizes,
+ Arrays.asList(STREAM_E, STREAM_C, STREAM_D)), state.priorityBytes());
+ }
+
+ /**
+ * In this test, we block all streams and remove a node from the priority tree and verify
+ *
+ * <pre>
+ * [0]
+ * / \
+ * A B
+ * / \
+ * C D
+ * </pre>
+ *
+ * After the tree shift:
+ * <pre>
+ * [0]
+ * / | \
+ * C D B
+ * </pre>
+ */
+ @Test
+ public void subTreeBytesShouldBeCorrectWithRemoval() throws Http2Exception {
+ // Block the connection
+ controller.updateOutboundWindowSize(CONNECTION_STREAM_ID, -DEFAULT_WINDOW_SIZE);
+
+ Http2Stream stream0 = connection.connectionStream();
+ Http2Stream streamA = connection.stream(STREAM_A);
+ Http2Stream streamB = connection.stream(STREAM_B);
+ Http2Stream streamC = connection.stream(STREAM_C);
+ Http2Stream streamD = connection.stream(STREAM_D);
+
+ // Send a bunch of data on each stream.
+ IntObjectMap<Integer> streamSizes = new IntObjectHashMap<Integer>(4);
+ streamSizes.put(STREAM_A, 400);
+ streamSizes.put(STREAM_B, 500);
+ streamSizes.put(STREAM_C, 600);
+ streamSizes.put(STREAM_D, 700);
+ send(STREAM_A, dummyData(streamSizes.get(STREAM_A)));
+ send(STREAM_B, dummyData(streamSizes.get(STREAM_B)));
+ send(STREAM_C, dummyData(streamSizes.get(STREAM_C)));
+ send(STREAM_D, dummyData(streamSizes.get(STREAM_D)));
+ verifyNoWrite(STREAM_A);
+ verifyNoWrite(STREAM_B);
+ verifyNoWrite(STREAM_C);
+ verifyNoWrite(STREAM_D);
+
+ streamA.close();
+
+ OutboundFlowState state = state(stream0);
+ assertEquals(calculateStreamSizeSum(streamSizes,
+ Arrays.asList(STREAM_B, STREAM_C, STREAM_D)), state.priorityBytes());
+ state = state(streamA);
+ assertEquals(0, state.priorityBytes());
+ state = state(streamB);
+ assertEquals(calculateStreamSizeSum(streamSizes,
+ Arrays.asList(STREAM_B)), state.priorityBytes());
+ state = state(streamC);
+ assertEquals(calculateStreamSizeSum(streamSizes,
+ Arrays.asList(STREAM_C)), state.priorityBytes());
+ state = state(streamD);
+ assertEquals(calculateStreamSizeSum(streamSizes,
+ Arrays.asList(STREAM_D)), state.priorityBytes());
+ }
+
+ private static OutboundFlowState state(Http2Stream stream) {
+ return (OutboundFlowState) stream.outboundFlow();
+ }
+
+ private static int calculateStreamSizeSum(IntObjectMap<Integer> streamSizes, List<Integer> streamIds) {
+ int sum = 0;
+ for (int i = 0; i < streamIds.size(); ++i) {
+ Integer streamSize = streamSizes.get(streamIds.get(i));
+ if (streamSize != null) {
+ sum += streamSize;
+ }
+ }
+ return sum;
+ }
+
private void send(int streamId, ByteBuf data) throws Http2Exception {
controller.writeData(ctx, streamId, data, 0, false, promise);
}
@@ -576,16 +853,15 @@ private void verifyWrite(int streamId, ByteBuf data) {
}
private void verifyNoWrite(int streamId) {
- verify(frameWriter, never()).writeData(eq(ctx), eq(streamId), any(ByteBuf.class), anyInt(),
- anyBoolean(), eq(promise));
+ verify(frameWriter, never()).writeData(eq(ctx), eq(streamId), any(ByteBuf.class), anyInt(), anyBoolean(),
+ eq(promise));
}
private void captureWrite(int streamId, ArgumentCaptor<ByteBuf> captor, boolean endStream) {
verify(frameWriter).writeData(eq(ctx), eq(streamId), captor.capture(), eq(0), eq(endStream), eq(promise));
}
- private void setPriority(int stream, int parent, int weight, boolean exclusive)
- throws Http2Exception {
+ private void setPriority(int stream, int parent, int weight, boolean exclusive) throws Http2Exception {
connection.stream(stream).setPriority(parent, (short) weight, exclusive);
}
| train | train | 2014-08-21T20:49:12 | 2014-08-17T17:30:28Z | Scottmitch | val |
netty/netty/2497_2805 | netty/netty | netty/netty/2497 | netty/netty/2805 | [
"timestamp(timedelta=25.0, similarity=0.9598137742447445)"
] | d3538dee2ecdcb7254f44e5974225ec7ce0655d3 | 91ba156f29024fc5f0b1a8dc4a2bb34a17c917e5 | [
"@nmittler is this still something we need ?\n",
"@normanmaurer yeah we're still out of spec WRT padding. It's low priority though and can wait until after we get draft 14 support.\n",
"Draft 14 is tracked in issue https://github.com/netty/netty/issues/2715. \n",
"Fixed by #2805\n"
] | [
"Isn't this still deferring all the padding until the last frame on the split?\n",
"I thought this was to address the TODO, but I see the PR description does not focus on addressing how to split padding. If this does not address the TODO then leave it in there until we cover it.\n",
"Should we open another iss... | 2014-08-21T20:49:28Z | [] | Add padding length to HTTP/2 flow control | We're currently out-of-spec with HTTP/2, which requires that any padding be included in the flow-controlled bytes.
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2InboundFlowController.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowController.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2InboundFlowController.java"
] | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2InboundFlowController.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowController.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2InboundFlowController.java"
] | [
"codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2InboundFlowControllerTest.java",
"codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowControllerTest.java"
] | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2InboundFlowController.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2InboundFlowController.java
index 83bb670b11d..dd9c0427983 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2InboundFlowController.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2InboundFlowController.java
@@ -68,7 +68,7 @@ public int initialInboundWindowSize() {
public void applyInboundFlowControl(int streamId, ByteBuf data, int padding,
boolean endOfStream, FrameWriter frameWriter)
throws Http2Exception {
- int dataLength = data.readableBytes();
+ int dataLength = data.readableBytes() + padding;
applyConnectionFlowControl(dataLength, frameWriter);
applyStreamFlowControl(streamId, dataLength, endOfStream, frameWriter);
}
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowController.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowController.java
index 815b7cafab4..63a41775948 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowController.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowController.java
@@ -28,10 +28,14 @@
import java.util.List;
import java.util.Queue;
-import static io.netty.handler.codec.http2.Http2CodecUtil.*;
-import static io.netty.handler.codec.http2.Http2Error.*;
-import static io.netty.handler.codec.http2.Http2Exception.*;
-import static java.lang.Math.*;
+import static io.netty.handler.codec.http2.Http2CodecUtil.CONNECTION_STREAM_ID;
+import static io.netty.handler.codec.http2.Http2CodecUtil.DEFAULT_WINDOW_SIZE;
+import static io.netty.handler.codec.http2.Http2Error.FLOW_CONTROL_ERROR;
+import static io.netty.handler.codec.http2.Http2Error.STREAM_CLOSED;
+import static io.netty.handler.codec.http2.Http2Exception.format;
+import static io.netty.handler.codec.http2.Http2Exception.protocolError;
+import static java.lang.Math.max;
+import static java.lang.Math.min;
/**
* Basic implementation of {@link Http2OutboundFlowController}.
@@ -178,7 +182,7 @@ public ChannelFuture writeData(ChannelHandlerContext ctx, int streamId, ByteBuf
int window = state.writableWindow();
OutboundFlowState.Frame frame = state.newFrame(ctx, promise, data, padding, endStream);
- if (window >= data.readableBytes()) {
+ if (window >= frame.size()) {
// Window size is large enough to send entire data frame
frame.write();
ctx.flush();
@@ -561,8 +565,11 @@ private final class Frame {
promiseAggregator.add(promise);
}
+ /**
+ * Gets the total size (in bytes) of this frame including the data and padding.
+ */
int size() {
- return data.readableBytes();
+ return data.readableBytes() + padding;
}
void enqueue() {
@@ -571,7 +578,7 @@ void enqueue() {
pendingWriteQueue.offer(this);
// Increment the number of pending bytes for this stream.
- incrementPendingBytes(data.readableBytes());
+ incrementPendingBytes(size());
}
}
@@ -599,13 +606,13 @@ void write() throws Http2Exception {
// Using a do/while loop because if the buffer is empty we still need to call
// the writer once to send the empty frame.
do {
- int bytesToWrite = data.readableBytes();
+ int bytesToWrite = size();
int frameBytes = Math.min(bytesToWrite, frameWriter.maxFrameSize());
if (frameBytes == bytesToWrite) {
// All the bytes fit into a single HTTP/2 frame, just send it all.
connectionState().incrementStreamWindow(-bytesToWrite);
incrementStreamWindow(-bytesToWrite);
- ByteBuf slice = data.readSlice(bytesToWrite);
+ ByteBuf slice = data.readSlice(data.readableBytes());
frameWriter.writeData(ctx, stream.id(), slice, padding, endStream, promise);
decrementPendingBytes(bytesToWrite);
return;
@@ -622,26 +629,34 @@ void write() throws Http2Exception {
* removed from this branch of the priority tree.
*/
void writeError(Http2Exception cause) {
- decrementPendingBytes(data.readableBytes());
+ decrementPendingBytes(size());
data.release();
promise.setFailure(cause);
}
/**
- * Creates a new frame that is a view of this frame's data buffer starting at the current read index with
- * the given number of bytes. The reader index on the input frame is then advanced by the number of bytes.
- * The returned frame will not have end-of-stream set and it will not be automatically placed in the pending
- * queue.
+ * Creates a new frame that is a view of this frame's data. The {@code maxBytes} are
+ * first split from the data buffer. If not all the requested bytes are available, the
+ * remaining bytes are then split from the padding (if available).
*
* @param maxBytes
* the maximum number of bytes that is allowed in the created frame.
* @return the partial frame.
*/
Frame split(int maxBytes) {
- // TODO: Should padding be included in the chunks or only the last frame?
- maxBytes = min(maxBytes, data.readableBytes());
- Frame frame = new Frame(ctx, promiseAggregator, data.readSlice(maxBytes).retain(), 0, false);
- decrementPendingBytes(maxBytes);
+ // TODO: Should padding be spread across chunks or only at the end?
+
+ // Get the portion of the data buffer to be split. Limit to the readable bytes.
+ int dataSplit = min(maxBytes, data.readableBytes());
+
+ // Split any remaining bytes from the padding.
+ int paddingSplit = min(maxBytes - dataSplit, padding);
+
+ ByteBuf splitSlice = data.readSlice(dataSplit).retain();
+ Frame frame = new Frame(ctx, promiseAggregator, splitSlice, paddingSplit, false);
+
+ int totalBytesSplit = dataSplit + paddingSplit;
+ decrementPendingBytes(totalBytesSplit);
return frame;
}
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2InboundFlowController.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2InboundFlowController.java
index 9deafb21157..7be4506cadf 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2InboundFlowController.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2InboundFlowController.java
@@ -24,6 +24,7 @@ public interface Http2InboundFlowController {
/**
* A writer of window update frames.
+ * TODO: Use Http2FrameWriter instead.
*/
interface FrameWriter {
| diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2InboundFlowControllerTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2InboundFlowControllerTest.java
index d2bffd06843..d3b5d326de9 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2InboundFlowControllerTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2InboundFlowControllerTest.java
@@ -59,13 +59,14 @@ public void setup() throws Http2Exception {
@Test
public void dataFrameShouldBeAccepted() throws Http2Exception {
- applyFlowControl(10, false);
+ applyFlowControl(10, 0, false);
verifyWindowUpdateNotSent();
}
@Test(expected = Http2Exception.class)
public void connectionFlowControlExceededShouldThrow() throws Http2Exception {
- applyFlowControl(DEFAULT_WINDOW_SIZE + 1, true);
+ // Window exceeded because of the padding.
+ applyFlowControl(DEFAULT_WINDOW_SIZE, 1, true);
}
@Test
@@ -75,7 +76,7 @@ public void halfWindowRemainingShouldUpdateConnectionWindow() throws Http2Except
int windowDelta = DEFAULT_WINDOW_SIZE - newWindow;
// Set end-of-stream on the frame, so no window update will be sent for the stream.
- applyFlowControl(dataSize, true);
+ applyFlowControl(dataSize, 0, true);
verify(frameWriter).writeFrame(eq(CONNECTION_STREAM_ID), eq(windowDelta));
}
@@ -86,7 +87,7 @@ public void halfWindowRemainingShouldUpdateAllWindows() throws Http2Exception {
int windowDelta = getWindowDelta(initialWindowSize, initialWindowSize, dataSize);
// Don't set end-of-stream so we'll get a window update for the stream as well.
- applyFlowControl(dataSize, false);
+ applyFlowControl(dataSize, 0, false);
verify(frameWriter).writeFrame(eq(CONNECTION_STREAM_ID), eq(windowDelta));
verify(frameWriter).writeFrame(eq(STREAM_ID), eq(windowDelta));
}
@@ -95,7 +96,7 @@ public void halfWindowRemainingShouldUpdateAllWindows() throws Http2Exception {
public void initialWindowUpdateShouldAllowMoreFrames() throws Http2Exception {
// Send a frame that takes up the entire window.
int initialWindowSize = DEFAULT_WINDOW_SIZE;
- applyFlowControl(initialWindowSize, false);
+ applyFlowControl(initialWindowSize, 0, false);
// Update the initial window size to allow another frame.
int newInitialWindowSize = 2 * initialWindowSize;
@@ -105,7 +106,7 @@ public void initialWindowUpdateShouldAllowMoreFrames() throws Http2Exception {
reset(frameWriter);
// Send the next frame and verify that the expected window updates were sent.
- applyFlowControl(initialWindowSize, false);
+ applyFlowControl(initialWindowSize, 0, false);
int delta = newInitialWindowSize - initialWindowSize;
verify(frameWriter).writeFrame(eq(CONNECTION_STREAM_ID), eq(delta));
verify(frameWriter).writeFrame(eq(STREAM_ID), eq(delta));
@@ -116,9 +117,9 @@ private static int getWindowDelta(int initialSize, int windowSize, int dataSize)
return initialSize - newWindowSize;
}
- private void applyFlowControl(int dataSize, boolean endOfStream) throws Http2Exception {
+ private void applyFlowControl(int dataSize, int padding, boolean endOfStream) throws Http2Exception {
ByteBuf buf = dummyData(dataSize);
- controller.applyInboundFlowControl(STREAM_ID, buf, 0, endOfStream, frameWriter);
+ controller.applyInboundFlowControl(STREAM_ID, buf, padding, endOfStream, frameWriter);
buf.release();
}
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowControllerTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowControllerTest.java
index 30930507844..cf5147b5f32 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowControllerTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowControllerTest.java
@@ -91,91 +91,119 @@ public void setup() throws Http2Exception {
@Test
public void frameShouldBeSentImmediately() throws Http2Exception {
- ByteBuf data = dummyData(10);
- send(STREAM_A, data.slice());
- verifyWrite(STREAM_A, data);
+ ByteBuf data = dummyData(5, 5);
+ send(STREAM_A, data.slice(0, 5), 5);
+ verifyWrite(STREAM_A, data.slice(0, 5), 5);
assertEquals(1, data.refCnt());
- data.release();
}
@Test
public void frameShouldSplitForMaxFrameSize() throws Http2Exception {
when(frameWriter.maxFrameSize()).thenReturn(5);
- ByteBuf data = dummyData(10);
- ByteBuf slice1 = data.slice(data.readerIndex(), 5);
+ ByteBuf data = dummyData(10, 0);
+ ByteBuf slice1 = data.slice(0, 5);
ByteBuf slice2 = data.slice(5, 5);
- send(STREAM_A, data.slice());
- verifyWrite(STREAM_A, slice1);
- verifyWrite(STREAM_A, slice2);
+ send(STREAM_A, data.slice(), 0);
+ verifyWrite(STREAM_A, slice1, 0);
+ verifyWrite(STREAM_A, slice2, 0);
assertEquals(2, data.refCnt());
- data.release(2);
}
@Test
public void stalledStreamShouldQueueFrame() throws Http2Exception {
controller.initialOutboundWindowSize(0);
- ByteBuf data = dummyData(10);
- send(STREAM_A, data);
+ ByteBuf data = dummyData(10, 5);
+ send(STREAM_A, data.slice(0, 10), 5);
verifyNoWrite(STREAM_A);
assertEquals(1, data.refCnt());
- data.release();
}
@Test
- public void nonZeroWindowShouldSendPartialFrame() throws Http2Exception {
+ public void frameShouldSplit() throws Http2Exception {
controller.initialOutboundWindowSize(5);
- ByteBuf data = dummyData(10);
- send(STREAM_A, data);
+ ByteBuf data = dummyData(5, 5);
+ send(STREAM_A, data.slice(0, 5), 5);
// Verify that a partial frame of 5 was sent.
ArgumentCaptor<ByteBuf> argument = ArgumentCaptor.forClass(ByteBuf.class);
- captureWrite(STREAM_A, argument, false);
+ // None of the padding should be sent in the frame.
+ captureWrite(STREAM_A, argument, 0, false);
ByteBuf writtenBuf = argument.getValue();
assertEquals(5, writtenBuf.readableBytes());
assertEquals(data.slice(0, 5), writtenBuf);
assertEquals(2, writtenBuf.refCnt());
assertEquals(2, data.refCnt());
- data.release(2);
+ }
+
+ @Test
+ public void frameShouldSplitPadding() throws Http2Exception {
+ controller.initialOutboundWindowSize(5);
+
+ ByteBuf data = dummyData(3, 7);
+ send(STREAM_A, data.slice(0, 3), 7);
+
+ // Verify that a partial frame of 5 was sent.
+ ArgumentCaptor<ByteBuf> argument = ArgumentCaptor.forClass(ByteBuf.class);
+ captureWrite(STREAM_A, argument, 2, false);
+ ByteBuf writtenBuf = argument.getValue();
+ assertEquals(3, writtenBuf.readableBytes());
+ assertEquals(data.slice(0, 3), writtenBuf);
+ assertEquals(2, writtenBuf.refCnt());
+ assertEquals(2, data.refCnt());
+ }
+
+ @Test
+ public void emptyFrameShouldSplitPadding() throws Http2Exception {
+ controller.initialOutboundWindowSize(5);
+
+ ByteBuf data = dummyData(0, 10);
+ send(STREAM_A, data.slice(0, 0), 10);
+
+ // Verify that a partial frame of 5 was sent.
+ ArgumentCaptor<ByteBuf> argument = ArgumentCaptor.forClass(ByteBuf.class);
+ captureWrite(STREAM_A, argument, 5, false);
+ ByteBuf writtenBuf = argument.getValue();
+ assertEquals(0, writtenBuf.readableBytes());
+ assertEquals(1, writtenBuf.refCnt());
+ assertEquals(1, data.refCnt());
}
@Test
public void initialWindowUpdateShouldSendFrame() throws Http2Exception {
controller.initialOutboundWindowSize(0);
- ByteBuf data = dummyData(10);
- send(STREAM_A, data.slice());
+ ByteBuf data = dummyData(10, 0);
+ send(STREAM_A, data.slice(), 0);
verifyNoWrite(STREAM_A);
// Verify that the entire frame was sent.
controller.initialOutboundWindowSize(10);
ArgumentCaptor<ByteBuf> argument = ArgumentCaptor.forClass(ByteBuf.class);
- captureWrite(STREAM_A, argument, false);
+ captureWrite(STREAM_A, argument, 0, false);
ByteBuf writtenBuf = argument.getValue();
assertEquals(data, writtenBuf);
assertEquals(1, data.refCnt());
- data.release();
}
@Test
public void initialWindowUpdateShouldSendPartialFrame() throws Http2Exception {
controller.initialOutboundWindowSize(0);
- ByteBuf data = dummyData(10);
- send(STREAM_A, data);
+ ByteBuf data = dummyData(10, 0);
+ send(STREAM_A, data, 0);
verifyNoWrite(STREAM_A);
// Verify that a partial frame of 5 was sent.
controller.initialOutboundWindowSize(5);
ArgumentCaptor<ByteBuf> argument = ArgumentCaptor.forClass(ByteBuf.class);
- captureWrite(STREAM_A, argument, false);
+ captureWrite(STREAM_A, argument, 0, false);
ByteBuf writtenBuf = argument.getValue();
assertEquals(5, writtenBuf.readableBytes());
assertEquals(data.slice(0, 5), writtenBuf);
assertEquals(2, writtenBuf.refCnt());
assertEquals(2, data.refCnt());
- data.release(2);
}
@Test
@@ -183,18 +211,17 @@ public void connectionWindowUpdateShouldSendFrame() throws Http2Exception {
// Set the connection window size to zero.
controller.updateOutboundWindowSize(CONNECTION_STREAM_ID, -DEFAULT_WINDOW_SIZE);
- ByteBuf data = dummyData(10);
- send(STREAM_A, data.slice());
+ ByteBuf data = dummyData(10, 0);
+ send(STREAM_A, data.slice(), 0);
verifyNoWrite(STREAM_A);
// Verify that the entire frame was sent.
controller.updateOutboundWindowSize(CONNECTION_STREAM_ID, 10);
ArgumentCaptor<ByteBuf> argument = ArgumentCaptor.forClass(ByteBuf.class);
- captureWrite(STREAM_A, argument, false);
+ captureWrite(STREAM_A, argument, 0, false);
ByteBuf writtenBuf = argument.getValue();
assertEquals(data, writtenBuf);
assertEquals(1, data.refCnt());
- data.release();
}
@Test
@@ -202,20 +229,19 @@ public void connectionWindowUpdateShouldSendPartialFrame() throws Http2Exception
// Set the connection window size to zero.
controller.updateOutboundWindowSize(CONNECTION_STREAM_ID, -DEFAULT_WINDOW_SIZE);
- ByteBuf data = dummyData(10);
- send(STREAM_A, data);
+ ByteBuf data = dummyData(10, 0);
+ send(STREAM_A, data, 0);
verifyNoWrite(STREAM_A);
// Verify that a partial frame of 5 was sent.
controller.updateOutboundWindowSize(CONNECTION_STREAM_ID, 5);
ArgumentCaptor<ByteBuf> argument = ArgumentCaptor.forClass(ByteBuf.class);
- captureWrite(STREAM_A, argument, false);
+ captureWrite(STREAM_A, argument, 0, false);
ByteBuf writtenBuf = argument.getValue();
assertEquals(5, writtenBuf.readableBytes());
assertEquals(data.slice(0, 5), writtenBuf);
assertEquals(2, writtenBuf.refCnt());
assertEquals(2, data.refCnt());
- data.release(2);
}
@Test
@@ -223,18 +249,17 @@ public void streamWindowUpdateShouldSendFrame() throws Http2Exception {
// Set the stream window size to zero.
controller.updateOutboundWindowSize(STREAM_A, -DEFAULT_WINDOW_SIZE);
- ByteBuf data = dummyData(10);
- send(STREAM_A, data.slice());
+ ByteBuf data = dummyData(10, 0);
+ send(STREAM_A, data.slice(), 0);
verifyNoWrite(STREAM_A);
// Verify that the entire frame was sent.
controller.updateOutboundWindowSize(STREAM_A, 10);
ArgumentCaptor<ByteBuf> argument = ArgumentCaptor.forClass(ByteBuf.class);
- captureWrite(STREAM_A, argument, false);
+ captureWrite(STREAM_A, argument, 0, false);
ByteBuf writtenBuf = argument.getValue();
assertEquals(data, writtenBuf);
assertEquals(1, data.refCnt());
- data.release();
}
@Test
@@ -242,19 +267,51 @@ public void streamWindowUpdateShouldSendPartialFrame() throws Http2Exception {
// Set the stream window size to zero.
controller.updateOutboundWindowSize(STREAM_A, -DEFAULT_WINDOW_SIZE);
- ByteBuf data = dummyData(10);
- send(STREAM_A, data);
+ ByteBuf data = dummyData(10, 0);
+ send(STREAM_A, data, 0);
verifyNoWrite(STREAM_A);
// Verify that a partial frame of 5 was sent.
controller.updateOutboundWindowSize(STREAM_A, 5);
ArgumentCaptor<ByteBuf> argument = ArgumentCaptor.forClass(ByteBuf.class);
- captureWrite(STREAM_A, argument, false);
+ captureWrite(STREAM_A, argument, 0, false);
ByteBuf writtenBuf = argument.getValue();
assertEquals(5, writtenBuf.readableBytes());
assertEquals(2, writtenBuf.refCnt());
assertEquals(2, data.refCnt());
- data.release(2);
+ }
+
+ /**
+ * In this test, we give stream A padding and verify that its padding is properly split.
+ *
+ * <pre>
+ * 0
+ * / \
+ * A B
+ * </pre>
+ */
+ @Test
+ public void multipleStreamsShouldSplitPadding() throws Http2Exception {
+ // Block the connection
+ controller.updateOutboundWindowSize(CONNECTION_STREAM_ID, -DEFAULT_WINDOW_SIZE);
+
+ // Try sending 10 bytes on each stream. They will be pending until we free up the
+ // connection.
+ send(STREAM_A, dummyData(3, 0), 7);
+ send(STREAM_B, dummyData(10, 0), 0);
+ verifyNoWrite(STREAM_A);
+ verifyNoWrite(STREAM_B);
+
+ // Open up the connection window.
+ controller.updateOutboundWindowSize(CONNECTION_STREAM_ID, 10);
+ ArgumentCaptor<ByteBuf> captor = ArgumentCaptor.forClass(ByteBuf.class);
+
+ // Verify that 5 bytes from A were written: 3 from data and 2 from padding.
+ captureWrite(STREAM_A, captor, 2, false);
+ assertEquals(3, captor.getValue().readableBytes());
+
+ captureWrite(STREAM_B, captor, 0, false);
+ assertEquals(5, captor.getValue().readableBytes());
}
/**
@@ -279,10 +336,10 @@ public void blockedStreamShouldSpreadDataToChildren() throws Http2Exception {
// Try sending 10 bytes on each stream. They will be pending until we free up the
// connection.
- send(STREAM_A, dummyData(10));
- send(STREAM_B, dummyData(10));
- send(STREAM_C, dummyData(10));
- send(STREAM_D, dummyData(10));
+ send(STREAM_A, dummyData(10, 0), 0);
+ send(STREAM_B, dummyData(10, 0), 0);
+ send(STREAM_C, dummyData(10, 0), 0);
+ send(STREAM_D, dummyData(10, 0), 0);
verifyNoWrite(STREAM_A);
verifyNoWrite(STREAM_B);
verifyNoWrite(STREAM_C);
@@ -295,14 +352,14 @@ public void blockedStreamShouldSpreadDataToChildren() throws Http2Exception {
// Verify that no write was done for A, since it's blocked.
verifyNoWrite(STREAM_A);
- captureWrite(STREAM_B, captor, false);
+ captureWrite(STREAM_B, captor, 0, false);
assertEquals(5, captor.getValue().readableBytes());
// Verify that C and D each shared half of A's allowance. Since A's allowance (5) cannot
// be split evenly, one will get 3 and one will get 2.
- captureWrite(STREAM_C, captor, false);
+ captureWrite(STREAM_C, captor, 0, false);
int c = captor.getValue().readableBytes();
- captureWrite(STREAM_D, captor, false);
+ captureWrite(STREAM_D, captor, 0, false);
int d = captor.getValue().readableBytes();
assertEquals(5, c + d);
assertEquals(1, Math.abs(c - d));
@@ -329,10 +386,10 @@ public void childrenShouldNotSendDataUntilParentBlocked() throws Http2Exception
controller.updateOutboundWindowSize(STREAM_B, -DEFAULT_WINDOW_SIZE);
// Send 10 bytes to each.
- send(STREAM_A, dummyData(10));
- send(STREAM_B, dummyData(10));
- send(STREAM_C, dummyData(10));
- send(STREAM_D, dummyData(10));
+ send(STREAM_A, dummyData(10, 0), 0);
+ send(STREAM_B, dummyData(10, 0), 0);
+ send(STREAM_C, dummyData(10, 0), 0);
+ send(STREAM_D, dummyData(10, 0), 0);
verifyNoWrite(STREAM_A);
verifyNoWrite(STREAM_B);
verifyNoWrite(STREAM_C);
@@ -343,7 +400,7 @@ public void childrenShouldNotSendDataUntilParentBlocked() throws Http2Exception
ArgumentCaptor<ByteBuf> captor = ArgumentCaptor.forClass(ByteBuf.class);
// Verify that A received all the bytes.
- captureWrite(STREAM_A, captor, false);
+ captureWrite(STREAM_A, captor, 0, false);
assertEquals(10, captor.getValue().readableBytes());
verifyNoWrite(STREAM_B);
verifyNoWrite(STREAM_C);
@@ -371,10 +428,10 @@ public void parentShouldWaterFallDataToChildren() throws Http2Exception {
controller.updateOutboundWindowSize(STREAM_B, -DEFAULT_WINDOW_SIZE);
// Only send 5 to A so that it will allow data from its children.
- send(STREAM_A, dummyData(5));
- send(STREAM_B, dummyData(10));
- send(STREAM_C, dummyData(10));
- send(STREAM_D, dummyData(10));
+ send(STREAM_A, dummyData(5, 0), 0);
+ send(STREAM_B, dummyData(10, 0), 0);
+ send(STREAM_C, dummyData(10, 0), 0);
+ send(STREAM_D, dummyData(10, 0), 0);
verifyNoWrite(STREAM_A);
verifyNoWrite(STREAM_B);
verifyNoWrite(STREAM_C);
@@ -387,14 +444,14 @@ public void parentShouldWaterFallDataToChildren() throws Http2Exception {
// Verify that no write was done for B, since it's blocked.
verifyNoWrite(STREAM_B);
- captureWrite(STREAM_A, captor, false);
+ captureWrite(STREAM_A, captor, 0, false);
assertEquals(5, captor.getValue().readableBytes());
// Verify that C and D each shared half of A's allowance. Since A's allowance (5) cannot
// be split evenly, one will get 3 and one will get 2.
- captureWrite(STREAM_C, captor, false);
+ captureWrite(STREAM_C, captor, 0, false);
int c = captor.getValue().readableBytes();
- captureWrite(STREAM_D, captor, false);
+ captureWrite(STREAM_D, captor, 0, false);
int d = captor.getValue().readableBytes();
assertEquals(5, c + d);
assertEquals(1, Math.abs(c - d));
@@ -432,10 +489,10 @@ public void reprioritizeShouldAdjustOutboundFlow() throws Http2Exception {
controller.updateOutboundWindowSize(STREAM_B, -DEFAULT_WINDOW_SIZE);
// Send 10 bytes to each.
- send(STREAM_A, dummyData(10));
- send(STREAM_B, dummyData(10));
- send(STREAM_C, dummyData(10));
- send(STREAM_D, dummyData(10));
+ send(STREAM_A, dummyData(10, 0), 0);
+ send(STREAM_B, dummyData(10, 0), 0);
+ send(STREAM_C, dummyData(10, 0), 0);
+ send(STREAM_D, dummyData(10, 0), 0);
verifyNoWrite(STREAM_A);
verifyNoWrite(STREAM_B);
verifyNoWrite(STREAM_C);
@@ -449,9 +506,9 @@ public void reprioritizeShouldAdjustOutboundFlow() throws Http2Exception {
ArgumentCaptor<ByteBuf> captor = ArgumentCaptor.forClass(ByteBuf.class);
// Verify that A received all the bytes.
- captureWrite(STREAM_A, captor, false);
+ captureWrite(STREAM_A, captor, 0, false);
assertEquals(5, captor.getValue().readableBytes());
- captureWrite(STREAM_D, captor, false);
+ captureWrite(STREAM_D, captor, 0, false);
assertEquals(5, captor.getValue().readableBytes());
verifyNoWrite(STREAM_B);
verifyNoWrite(STREAM_C);
@@ -479,10 +536,10 @@ public void writeShouldPreferHighestWeight() throws Http2Exception {
setPriority(STREAM_D, 0, (short) 100, false);
// Send a bunch of data on each stream.
- send(STREAM_A, dummyData(1000));
- send(STREAM_B, dummyData(1000));
- send(STREAM_C, dummyData(1000));
- send(STREAM_D, dummyData(1000));
+ send(STREAM_A, dummyData(1000, 0), 0);
+ send(STREAM_B, dummyData(1000, 0), 0);
+ send(STREAM_C, dummyData(1000, 0), 0);
+ send(STREAM_D, dummyData(1000, 0), 0);
verifyNoWrite(STREAM_A);
verifyNoWrite(STREAM_B);
verifyNoWrite(STREAM_C);
@@ -492,22 +549,22 @@ public void writeShouldPreferHighestWeight() throws Http2Exception {
controller.updateOutboundWindowSize(CONNECTION_STREAM_ID, 1000);
ArgumentCaptor<ByteBuf> captor = ArgumentCaptor.forClass(ByteBuf.class);
- captureWrite(STREAM_A, captor, false);
+ captureWrite(STREAM_A, captor, 0, false);
int aWritten = captor.getValue().readableBytes();
int min = aWritten;
int max = aWritten;
- captureWrite(STREAM_B, captor, false);
+ captureWrite(STREAM_B, captor, 0, false);
int bWritten = captor.getValue().readableBytes();
min = Math.min(min, bWritten);
max = Math.max(max, bWritten);
- captureWrite(STREAM_C, captor, false);
+ captureWrite(STREAM_C, captor, 0, false);
int cWritten = captor.getValue().readableBytes();
min = Math.min(min, cWritten);
max = Math.max(max, cWritten);
- captureWrite(STREAM_D, captor, false);
+ captureWrite(STREAM_D, captor, 0, false);
int dWritten = captor.getValue().readableBytes();
min = Math.min(min, dWritten);
max = Math.max(max, dWritten);
@@ -542,29 +599,29 @@ public void samePriorityShouldWriteEqualData() throws Http2Exception {
setPriority(STREAM_D, 0, DEFAULT_PRIORITY_WEIGHT, false);
// Send a bunch of data on each stream.
- send(STREAM_A, dummyData(400));
- send(STREAM_B, dummyData(500));
- send(STREAM_C, dummyData(0));
- send(STREAM_D, dummyData(700));
+ send(STREAM_A, dummyData(400, 0), 0);
+ send(STREAM_B, dummyData(500, 0), 0);
+ send(STREAM_C, dummyData(0, 0), 0);
+ send(STREAM_D, dummyData(700, 0), 0);
verifyNoWrite(STREAM_A);
verifyNoWrite(STREAM_B);
verifyNoWrite(STREAM_D);
// The write will occur on C, because it's an empty frame.
ArgumentCaptor<ByteBuf> captor = ArgumentCaptor.forClass(ByteBuf.class);
- captureWrite(STREAM_C, captor, false);
+ captureWrite(STREAM_C, captor, 0, false);
assertEquals(0, captor.getValue().readableBytes());
// Allow 1000 bytes to be sent.
controller.updateOutboundWindowSize(CONNECTION_STREAM_ID, 999);
- captureWrite(STREAM_A, captor, false);
+ captureWrite(STREAM_A, captor, 0, false);
int aWritten = captor.getValue().readableBytes();
- captureWrite(STREAM_B, captor, false);
+ captureWrite(STREAM_B, captor, 0, false);
int bWritten = captor.getValue().readableBytes();
- captureWrite(STREAM_D, captor, false);
+ captureWrite(STREAM_D, captor, 0, false);
int dWritten = captor.getValue().readableBytes();
assertEquals(999, aWritten + bWritten + dWritten);
@@ -601,10 +658,10 @@ public void subTreeBytesShouldBeCorrect() throws Http2Exception {
streamSizes.put(STREAM_B, 500);
streamSizes.put(STREAM_C, 600);
streamSizes.put(STREAM_D, 700);
- send(STREAM_A, dummyData(streamSizes.get(STREAM_A)));
- send(STREAM_B, dummyData(streamSizes.get(STREAM_B)));
- send(STREAM_C, dummyData(streamSizes.get(STREAM_C)));
- send(STREAM_D, dummyData(streamSizes.get(STREAM_D)));
+ send(STREAM_A, dummyData(streamSizes.get(STREAM_A), 0), 0);
+ send(STREAM_B, dummyData(streamSizes.get(STREAM_B), 0), 0);
+ send(STREAM_C, dummyData(streamSizes.get(STREAM_C), 0), 0);
+ send(STREAM_D, dummyData(streamSizes.get(STREAM_D), 0), 0);
verifyNoWrite(STREAM_A);
verifyNoWrite(STREAM_B);
verifyNoWrite(STREAM_C);
@@ -666,10 +723,10 @@ public void subTreeBytesShouldBeCorrectWithRestructure() throws Http2Exception {
streamSizes.put(STREAM_B, 500);
streamSizes.put(STREAM_C, 600);
streamSizes.put(STREAM_D, 700);
- send(STREAM_A, dummyData(streamSizes.get(STREAM_A)));
- send(STREAM_B, dummyData(streamSizes.get(STREAM_B)));
- send(STREAM_C, dummyData(streamSizes.get(STREAM_C)));
- send(STREAM_D, dummyData(streamSizes.get(STREAM_D)));
+ send(STREAM_A, dummyData(streamSizes.get(STREAM_A), 0), 0);
+ send(STREAM_B, dummyData(streamSizes.get(STREAM_B), 0), 0);
+ send(STREAM_C, dummyData(streamSizes.get(STREAM_C), 0), 0);
+ send(STREAM_D, dummyData(streamSizes.get(STREAM_D), 0), 0);
verifyNoWrite(STREAM_A);
verifyNoWrite(STREAM_B);
verifyNoWrite(STREAM_C);
@@ -736,11 +793,11 @@ public void subTreeBytesShouldBeCorrectWithAddition() throws Http2Exception {
streamSizes.put(STREAM_C, 600);
streamSizes.put(STREAM_D, 700);
streamSizes.put(STREAM_E, 900);
- send(STREAM_A, dummyData(streamSizes.get(STREAM_A)));
- send(STREAM_B, dummyData(streamSizes.get(STREAM_B)));
- send(STREAM_C, dummyData(streamSizes.get(STREAM_C)));
- send(STREAM_D, dummyData(streamSizes.get(STREAM_D)));
- send(STREAM_E, dummyData(streamSizes.get(STREAM_E)));
+ send(STREAM_A, dummyData(streamSizes.get(STREAM_A), 0), 0);
+ send(STREAM_B, dummyData(streamSizes.get(STREAM_B), 0), 0);
+ send(STREAM_C, dummyData(streamSizes.get(STREAM_C), 0), 0);
+ send(STREAM_D, dummyData(streamSizes.get(STREAM_D), 0), 0);
+ send(STREAM_E, dummyData(streamSizes.get(STREAM_E), 0), 0);
verifyNoWrite(STREAM_A);
verifyNoWrite(STREAM_B);
verifyNoWrite(STREAM_C);
@@ -802,10 +859,10 @@ public void subTreeBytesShouldBeCorrectWithRemoval() throws Http2Exception {
streamSizes.put(STREAM_B, 500);
streamSizes.put(STREAM_C, 600);
streamSizes.put(STREAM_D, 700);
- send(STREAM_A, dummyData(streamSizes.get(STREAM_A)));
- send(STREAM_B, dummyData(streamSizes.get(STREAM_B)));
- send(STREAM_C, dummyData(streamSizes.get(STREAM_C)));
- send(STREAM_D, dummyData(streamSizes.get(STREAM_D)));
+ send(STREAM_A, dummyData(streamSizes.get(STREAM_A), 0), 0);
+ send(STREAM_B, dummyData(streamSizes.get(STREAM_B), 0), 0);
+ send(STREAM_C, dummyData(streamSizes.get(STREAM_C), 0), 0);
+ send(STREAM_D, dummyData(streamSizes.get(STREAM_D), 0), 0);
verifyNoWrite(STREAM_A);
verifyNoWrite(STREAM_B);
verifyNoWrite(STREAM_C);
@@ -844,12 +901,12 @@ private static int calculateStreamSizeSum(IntObjectMap<Integer> streamSizes, Lis
return sum;
}
- private void send(int streamId, ByteBuf data) throws Http2Exception {
- controller.writeData(ctx, streamId, data, 0, false, promise);
+ private void send(int streamId, ByteBuf data, int padding) throws Http2Exception {
+ controller.writeData(ctx, streamId, data, padding, false, promise);
}
- private void verifyWrite(int streamId, ByteBuf data) {
- verify(frameWriter).writeData(eq(ctx), eq(streamId), eq(data), eq(0), eq(false), eq(promise));
+ private void verifyWrite(int streamId, ByteBuf data, int padding) {
+ verify(frameWriter).writeData(eq(ctx), eq(streamId), eq(data), eq(padding), eq(false), eq(promise));
}
private void verifyNoWrite(int streamId) {
@@ -857,20 +914,22 @@ private void verifyNoWrite(int streamId) {
eq(promise));
}
- private void captureWrite(int streamId, ArgumentCaptor<ByteBuf> captor, boolean endStream) {
- verify(frameWriter).writeData(eq(ctx), eq(streamId), captor.capture(), eq(0), eq(endStream), eq(promise));
+ private void captureWrite(int streamId, ArgumentCaptor<ByteBuf> captor, int padding,
+ boolean endStream) {
+ verify(frameWriter).writeData(eq(ctx), eq(streamId), captor.capture(), eq(padding), eq(endStream), eq(promise));
}
private void setPriority(int stream, int parent, int weight, boolean exclusive) throws Http2Exception {
connection.stream(stream).setPriority(parent, (short) weight, exclusive);
}
- private static ByteBuf dummyData(int size) {
+ private static ByteBuf dummyData(int size, int padding) {
String repeatedData = "0123456789";
- ByteBuf buffer = Unpooled.buffer(size);
+ ByteBuf buffer = Unpooled.buffer(size + padding);
for (int index = 0; index < size; ++index) {
buffer.writeByte(repeatedData.charAt(index % repeatedData.length()));
}
+ buffer.writeZero(padding);
return buffer;
}
}
| train | train | 2014-08-22T22:07:22 | 2014-05-13T22:11:17Z | nmittler | val |
netty/netty/2826_2827 | netty/netty | netty/netty/2826 | netty/netty/2827 | [
"timestamp(timedelta=50.0, similarity=0.8558067741127687)"
] | 6409a5a1d55b1922b9d40ac8ed6c05a6e2261f47 | 7838e01a96c9e126600b0788fa8790114f035370 | [
"PR https://github.com/netty/netty/pull/2827 addresses this issue\n",
"@Scottmitch cherry-picked your change. Thanks!\n"
] | [] | 2014-08-26T20:53:15Z | [
"cleanup"
] | .gitignore missing bin/ directory | The netty project's .gitignore is missing an entry for `bin/`. This directory is created by eclipse and maintained by eclipse. Even though the IDE of choice is intellij the .gitignore should have a `bin/` entry under the Eclipse section to exclude this directory.
| [
".gitignore"
] | [
".gitignore"
] | [] | diff --git a/.gitignore b/.gitignore
index 8a80d67d0d9..822e92dda03 100644
--- a/.gitignore
+++ b/.gitignore
@@ -2,6 +2,7 @@
.project
.classpath
.settings
+bin/
# IntelliJ IDEA project files and directories
*.iml
| null | train | train | 2014-08-26T21:02:05 | 2014-08-26T20:45:18Z | Scottmitch | val |
netty/netty/2848_2849 | netty/netty | netty/netty/2848 | netty/netty/2849 | [
"timestamp(timedelta=25.0, similarity=0.905243869713583)"
] | cc55a71b14fbe1342ae0109bf54143f7d775c40d | 9c7c26873eb7b327f87b344dcc6891589a636a98 | [
"Pull request #2849 has a proposed fix.\n",
"Was fixed \n"
] | [
"you should only check for `==`. `size` can never be greater than `maxCapacity`.\n",
"`testMaxCapacity` and add a link to your bug report please.\n",
"maybe wrap everything in a loop that tests this with a few random `maxCapacity` values between [1,1000).\n",
"Well, that was actually part of the issue. The or... | 2014-08-30T07:40:42Z | [
"defect"
] | Recycler maxCapacity can be ignored | If `maxCapacity` is not a power of two and greater than 256, it is completely ignored by `Recycler`. This allows caching an unbounded number of objects until memory is exhausted.
`Recycler.Stack.push()` only checks against `maxCapacity` when the `elements` array is full. However, that array is reallocated in powers of two: it starts out with 256 items and then grows to 512, 1024, etc. When the array is full, the code checks whether the current size _equals_ `maxCapacity`, rather than whether it has reached or exceeded it.
So if `maxCapacity` is 500, the array grows to 512, fills up, and then never stops growing, because `elements.length` is never equal to `maxCapacity` at the moment the array is full.
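To make the growth arithmetic above concrete, here is a hypothetical simulation of the `push()` bookkeeping just described (illustration only, not Netty code; the helper name `simulate` is invented, while the 256-item starting length and the doubling come from the report):

```python
def simulate(max_capacity, pushes, initial_length=256):
    """Mimic the described Recycler.Stack.push() growth bookkeeping."""
    length, size = initial_length, 0
    for _ in range(pushes):
        if size == length:            # the array is full
            if size == max_capacity:  # the buggy equality check
                return size           # capped: further items are dropped
            length <<= 1              # reallocate: 256 -> 512 -> 1024 -> ...
        size += 1
    return size

# 500 is never equal to any power-of-two length, so the cap is never hit:
print(simulate(500, 4096))  # 4096
# a power-of-two maxCapacity does line up with the length, so the cap works:
print(simulate(512, 4096))  # 512
```

The fix in the pull request below addresses exactly this: it tests the cap before the length test and clamps growth with `Math.min(size << 1, maxCapacity)`.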
Here is a little piece of code to show this in action. When `maxCapacity` is 300, it will cache 1000 objects. When setting `maxCapacity` to 512, it will properly cache only 512 objects.
``` java
import io.netty.util.Recycler;
import io.netty.util.Recycler.Handle;
public class TestRecycler
{
static int counter;
public static class HandledObject
{
Handle handle;
HandledObject( Handle handle )
{
this.handle = handle;
counter++;
}
@Override
protected void finalize() throws Throwable
{
counter--;
}
}
public static void main( String[] args )
{
// int maxCapacity = 512;
int maxCapacity = 300;
Recycler<HandledObject> recycler = new Recycler<HandledObject>( maxCapacity )
{
@Override
protected HandledObject newObject( Handle handle )
{
return new HandledObject( handle );
}
};
HandledObject[] objects = new HandledObject[1000];
for( int i = 0; i < objects.length; i++ ) {
objects[i] = recycler.get();
}
for( int i = 0; i < objects.length; i++ ) {
recycler.recycle( objects[i], objects[i].handle );
objects[i] = null;
}
System.gc();
System.runFinalization();
System.out.println( "maxCapacity = " + maxCapacity + " and actual live objects = " + counter );
}
}
```
| [
"common/src/main/java/io/netty/util/Recycler.java"
] | [
"common/src/main/java/io/netty/util/Recycler.java"
] | [
"common/src/test/java/io/netty/util/RecyclerTest.java"
] | diff --git a/common/src/main/java/io/netty/util/Recycler.java b/common/src/main/java/io/netty/util/Recycler.java
index a56499cd7b8..0600f66d70f 100644
--- a/common/src/main/java/io/netty/util/Recycler.java
+++ b/common/src/main/java/io/netty/util/Recycler.java
@@ -96,6 +96,10 @@ public final boolean recycle(T o, Handle<T> handle) {
return true;
}
+ final int threadLocalCapacity() {
+ return threadLocal.get().elements.length;
+ }
+
protected abstract T newObject(Handle<T> handle);
public interface Handle<T> {
@@ -339,12 +343,12 @@ void push(DefaultHandle<?> item) {
item.recycleId = item.lastRecycledId = OWN_THREAD_ID;
int size = this.size;
+ if (size == maxCapacity) {
+ // Hit the maximum capacity - drop the possibly youngest object.
+ return;
+ }
if (size == elements.length) {
- if (size == maxCapacity) {
- // Hit the maximum capacity - drop the possibly youngest object.
- return;
- }
- elements = Arrays.copyOf(elements, size << 1);
+ elements = Arrays.copyOf(elements, Math.min(size << 1, maxCapacity));
}
elements[size] = item;
| diff --git a/common/src/test/java/io/netty/util/RecyclerTest.java b/common/src/test/java/io/netty/util/RecyclerTest.java
index 5212e9b4df5..91114344b6a 100644
--- a/common/src/test/java/io/netty/util/RecyclerTest.java
+++ b/common/src/test/java/io/netty/util/RecyclerTest.java
@@ -15,6 +15,8 @@
*/
package io.netty.util;
+import java.util.Random;
+
import org.junit.Assert;
import org.junit.Test;
@@ -59,4 +61,47 @@ public void recycle() {
RECYCLER.recycle(this, handle);
}
}
+
+ /**
+ * Test to make sure bug #2848 never happens again
+ * https://github.com/netty/netty/issues/2848
+ */
+ @Test
+ public void testMaxCapacity() {
+ testMaxCapacity(300);
+ Random rand = new Random();
+ for (int i = 0; i < 50; i++) {
+ testMaxCapacity(rand.nextInt(1000) + 256); // 256 - 1256
+ }
+ }
+
+ void testMaxCapacity(int maxCapacity) {
+ Recycler<HandledObject> recycler = new Recycler<HandledObject>(maxCapacity) {
+ @Override
+ protected HandledObject newObject(
+ Recycler.Handle<HandledObject> handle) {
+ return new HandledObject(handle);
+ }
+ };
+
+ HandledObject[] objects = new HandledObject[maxCapacity * 3];
+ for (int i = 0; i < objects.length; i++) {
+ objects[i] = recycler.get();
+ }
+
+ for (int i = 0; i < objects.length; i++) {
+ recycler.recycle(objects[i], objects[i].handle);
+ objects[i] = null;
+ }
+
+ Assert.assertEquals(maxCapacity, recycler.threadLocalCapacity());
+ }
+
+ static final class HandledObject {
+ Recycler.Handle<HandledObject> handle;
+
+ HandledObject(Recycler.Handle<HandledObject> handle) {
+ this.handle = handle;
+ }
+ }
}
| train | train | 2014-08-30T20:53:07 | 2014-08-30T07:33:03Z | kichik | val |
netty/netty/2719_2860 | netty/netty | netty/netty/2719 | netty/netty/2860 | [
"timestamp(timedelta=32.0, similarity=0.876096460955986)"
] | 6d1b96fb632026042b6e7009d39a8db7486944eb | 2fff75975674f9afc1bf134d393108a0b73a9c58 | [
"Added to 4.0, 4.1 and master \n"
] | [] | 2014-09-03T12:57:26Z | [
"feature"
] | Support gathering/scattering in EpollDatagramChannel | As @pcarrier pointed out on Monday, it is possible to do gathering writes even when using datagram/UDP on Linux. We should add support for it:
See http://man7.org/linux/man-pages/man2/send.2.html
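For context, `sendmsg(2)` (covered by the man page above) takes an `iovec` array, so a single UDP datagram can be gathered from several buffers in one syscall, and `sendmmsg(2)`, where available, batches multiple such datagrams. A minimal hypothetical sketch of the single-datagram case, using Python's thin wrapper over the same syscall (illustration only, not part of the Netty patch):

```python
import socket

# A loopback receiver and a sender; the port is chosen by the kernel.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))
recv_sock.settimeout(5)

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# sendmsg() gathers all three buffers into ONE datagram -- the
# gathering-write behaviour this issue asks EpollDatagramChannel to use.
sent = send_sock.sendmsg([b"hello, ", b"gathered ", b"world"],
                         [], 0, recv_sock.getsockname())

data, _ = recv_sock.recvfrom(1024)
print(sent, data)  # 21 b'hello, gathered world'

send_sock.close()
recv_sock.close()
```

The gold patch below exposes the same idea through JNI: `sendToAddresses` writes one datagram from many buffers, and `sendmmsg` sends many datagrams in one call.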
| [
"transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c",
"transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.h",
"transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollDatagramChannel.java",
"transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollSocketChannel.jav... | [
"transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c",
"transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.h",
"transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollDatagramChannel.java",
"transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollSocketChannel.jav... | [
"testsuite/src/test/java/io/netty/testsuite/transport/socket/DatagramUnicastTest.java"
] | diff --git a/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c b/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c
index 52be419d943..5dc7eb5825d 100644
--- a/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c
+++ b/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c
@@ -34,6 +34,15 @@
// optional
extern int accept4(int sockFd, struct sockaddr *addr, socklen_t *addrlen, int flags) __attribute__((weak));
extern int epoll_create1(int flags) __attribute__((weak));
+extern int sendmmsg(int sockfd, struct mmsghdr *msgvec, unsigned int vlen, unsigned int flags) __attribute__((weak));
+
+// Just define it here and do NOT use #define _GNU_SOURCE, as we also want to be able to build on systems that do not
+// support sendmmsg yet. The problem is that with _GNU_SOURCE we would not be able to declare sendmmsg as extern.
+struct mmsghdr {
+ struct msghdr msg_hdr; /* Message header */
+ unsigned int msg_len; /* Number of bytes transmitted */
+};
+
// Those are initialized in the init(...) method and cached for performance reasons
jmethodID updatePosId = NULL;
@@ -44,7 +53,14 @@ jfieldID limitFieldId = NULL;
jfieldID fileChannelFieldId = NULL;
jfieldID transferedFieldId = NULL;
jfieldID fdFieldId = NULL;
-jfieldID fileDescriptorFieldId = NULL;;
+jfieldID fileDescriptorFieldId = NULL;
+
+jfieldID packetAddrFieldId = NULL;
+jfieldID packetScopeIdFieldId = NULL;
+jfieldID packetPortFieldId = NULL;
+jfieldID packetMemoryAddressFieldId = NULL;
+jfieldID packetCountFieldId = NULL;
+
jmethodID inetSocketAddrMethodId = NULL;
jmethodID datagramSocketAddrMethodId = NULL;
jclass runtimeExceptionClass = NULL;
@@ -53,6 +69,7 @@ jclass closedChannelExceptionClass = NULL;
jmethodID closedChannelExceptionMethodId = NULL;
jclass inetSocketAddressClass = NULL;
jclass datagramSocketAddressClass = NULL;
+jclass nativeDatagramPacketClass = NULL;
static int socketType;
static const char *ip4prefix = "::ffff:";
@@ -414,6 +431,38 @@ jint JNI_OnLoad(JavaVM* vm, void* reserved) {
throwRuntimeException(env, "Unable to obtain constructor of DatagramSocketAddress");
return JNI_ERR;
}
+ jclass nativeDatagramPacketCls = (*env)->FindClass(env, "io/netty/channel/epoll/NativeDatagramPacketArray$NativeDatagramPacket");
+ if (nativeDatagramPacketCls == NULL) {
+ // pending exception...
+ return JNI_ERR;
+ }
+
+ packetAddrFieldId = (*env)->GetFieldID(env, nativeDatagramPacketCls, "addr", "[B");
+ if (packetAddrFieldId == NULL) {
+ throwRuntimeException(env, "Unable to obtain addr field for NativeDatagramPacket");
+ return JNI_ERR;
+ }
+ packetScopeIdFieldId = (*env)->GetFieldID(env, nativeDatagramPacketCls, "scopeId", "I");
+ if (packetScopeIdFieldId == NULL) {
+ throwRuntimeException(env, "Unable to obtain scopeId field for NativeDatagramPacket");
+ return JNI_ERR;
+ }
+ packetPortFieldId = (*env)->GetFieldID(env, nativeDatagramPacketCls, "port", "I");
+ if (packetPortFieldId == NULL) {
+ throwRuntimeException(env, "Unable to obtain port field for NativeDatagramPacket");
+ return JNI_ERR;
+ }
+ packetMemoryAddressFieldId = (*env)->GetFieldID(env, nativeDatagramPacketCls, "memoryAddress", "J");
+ if (packetMemoryAddressFieldId == NULL) {
+ throwRuntimeException(env, "Unable to obtain memoryAddress field for NativeDatagramPacket");
+ return JNI_ERR;
+ }
+
+ packetCountFieldId = (*env)->GetFieldID(env, nativeDatagramPacketCls, "count", "I");
+ if (packetCountFieldId == NULL) {
+ throwRuntimeException(env, "Unable to obtain count field for NativeDatagramPacket");
+ return JNI_ERR;
+ }
return JNI_VERSION_1_6;
}
}
@@ -655,6 +704,88 @@ JNIEXPORT jint JNICALL Java_io_netty_channel_epoll_Native_sendToAddress(JNIEnv *
return sendTo0(env, fd, (void*) memoryAddress, pos, limit, address, scopeId, port);
}
+JNIEXPORT jint JNICALL Java_io_netty_channel_epoll_Native_sendToAddresses(JNIEnv * env, jclass clazz, jint fd, jlong memoryAddress, jint length, jbyteArray address, jint scopeId, jint port) {
+ struct sockaddr_storage addr;
+
+ if (init_sockaddr(env, address, scopeId, port, &addr) == -1) {
+ return -1;
+ }
+
+ struct msghdr m;
+ m.msg_name = (void*) &addr;
+ m.msg_namelen = (socklen_t) sizeof(struct sockaddr_storage);
+ m.msg_iov = (struct iovec *) memoryAddress;
+ m.msg_iovlen = length;
+
+ ssize_t res;
+ int err;
+ do {
+ res = sendmsg(fd, &m, 0);
+ // keep on writing if it was interrupted
+ } while(res == -1 && ((err = errno) == EINTR));
+
+ if (res < 0) {
+ // network stack saturated... try again later
+ if (err == EAGAIN || err == EWOULDBLOCK) {
+ return 0;
+ }
+ if (err == EBADF) {
+ throwClosedChannelException(env);
+ return -1;
+ }
+ throwIOException(env, exceptionMessage("Error while sendto(...): ", err));
+ return -1;
+ }
+ return (jint) res;
+}
+
+JNIEXPORT jint JNICALL Java_io_netty_channel_epoll_Native_sendmmsg(JNIEnv * env, jclass clazz, jint fd, jobjectArray packets, jint offset, jint len) {
+ struct mmsghdr msg[len];
+ int i;
+
+ memset(msg, 0, sizeof(msg));
+
+ for (i = 0; i < len; i++) {
+ struct sockaddr_storage addr;
+
+ jobject packet = (*env)->GetObjectArrayElement(env, packets, i + offset);
+ jbyteArray address = (jbyteArray) (*env)->GetObjectField(env, packet, packetAddrFieldId);
+ jint scopeId = (*env)->GetIntField(env, packet, packetScopeIdFieldId);
+ jint port = (*env)->GetIntField(env, packet, packetPortFieldId);
+
+ if (init_sockaddr(env, address, scopeId, port, &addr) == -1) {
+ return -1;
+ }
+
+ msg[i].msg_hdr.msg_name = &addr;
+ msg[i].msg_hdr.msg_namelen = sizeof(addr);
+
+ msg[i].msg_hdr.msg_iov = (struct iovec *) (*env)->GetLongField(env, packet, packetMemoryAddressFieldId);
+ msg[i].msg_hdr.msg_iovlen = (*env)->GetIntField(env, packet, packetCountFieldId);
+ }
+
+ ssize_t res;
+ int err;
+ do {
+ res = sendmmsg(fd, msg, len, 0);
+ // keep on writing if it was interrupted
+ } while(res == -1 && ((err = errno) == EINTR));
+
+ if (res < 0) {
+ // network stack saturated... try again later
+ if (err == EAGAIN || err == EWOULDBLOCK) {
+ return 0;
+ }
+ if (err == EBADF) {
+ throwClosedChannelException(env);
+ return -1;
+ }
+ throwIOException(env, exceptionMessage("Error while sendmmsg(...): ", err));
+ return -1;
+ }
+ return (jint) res;
+}
+
jobject recvFrom0(JNIEnv * env, jint fd, void* buffer, jint pos, jint limit) {
struct sockaddr_storage addr;
socklen_t addrlen = sizeof(addr);
@@ -1191,3 +1322,16 @@ JNIEXPORT jstring JNICALL Java_io_netty_channel_epoll_Native_kernelVersion(JNIEn
JNIEXPORT jint JNICALL Java_io_netty_channel_epoll_Native_iovMax(JNIEnv *env, jclass clazz) {
return IOV_MAX;
}
+
+JNIEXPORT jint JNICALL Java_io_netty_channel_epoll_Native_uioMaxIov(JNIEnv *env, jclass clazz) {
+ return UIO_MAXIOV;
+}
+
+
+JNIEXPORT jboolean JNICALL Java_io_netty_channel_epoll_Native_isSupportingSendmmsg(JNIEnv *env, jclass clazz) {
+ if (sendmmsg) {
+ return JNI_TRUE;
+ }
+ return JNI_FALSE;
+}
+
diff --git a/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.h b/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.h
index ac0677f97b5..a6e9fb04aae 100644
--- a/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.h
+++ b/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.h
@@ -33,6 +33,11 @@
#define IOV_MAX 1024
#endif /* IOV_MAX */
+// Define UIO_MAXIOV if not found
+#ifndef UIO_MAXIOV
+#define UIO_MAXIOV 1024
+#endif /* UIO_MAXIOV */
+
jint Java_io_netty_channel_epoll_Native_eventFd(JNIEnv * env, jclass clazz);
void Java_io_netty_channel_epoll_Native_eventFdWrite(JNIEnv * env, jclass clazz, jint fd, jlong value);
void Java_io_netty_channel_epoll_Native_eventFdRead(JNIEnv * env, jclass clazz, jint fd);
@@ -47,6 +52,8 @@ jlong Java_io_netty_channel_epoll_Native_writev(JNIEnv * env, jclass clazz, jint
jlong Java_io_netty_channel_epoll_Native_writevAddresses(JNIEnv * env, jclass clazz, jint fd, jlong memoryAddress, jint length);
jint Java_io_netty_channel_epoll_Native_sendTo(JNIEnv * env, jclass clazz, jint fd, jobject jbuffer, jint pos, jint limit, jbyteArray address, jint scopeId, jint port);
jint Java_io_netty_channel_epoll_Native_sendToAddress(JNIEnv * env, jclass clazz, jint fd, jlong memoryAddress, jint pos, jint limit, jbyteArray address, jint scopeId, jint port);
+jint Java_io_netty_channel_epoll_Native_sendToAddresses(JNIEnv * env, jclass clazz, jint fd, jlong memoryAddress, jint length, jbyteArray address, jint scopeId, jint port);
+jint Java_io_netty_channel_epoll_Native_sendmmsg(JNIEnv * env, jclass clazz, jint fd, jobjectArray packets, jint offset, jint len);
jint Java_io_netty_channel_epoll_Native_read(JNIEnv * env, jclass clazz, jint fd, jobject jbuffer, jint pos, jint limit);
jint Java_io_netty_channel_epoll_Native_readAddress(JNIEnv * env, jclass clazz, jint fd, jlong address, jint pos, jint limit);
@@ -94,3 +101,5 @@ jint Java_io_netty_channel_epoll_Native_getTcpKeepCnt(JNIEnv *env, jclass clazz,
jstring Java_io_netty_channel_epoll_Native_kernelVersion(JNIEnv *env, jclass clazz);
jint Java_io_netty_channel_epoll_Native_iovMax(JNIEnv *env, jclass clazz);
+jint Java_io_netty_channel_epoll_Native_uioMaxIov(JNIEnv *env, jclass clazz);
+jboolean Java_io_netty_channel_epoll_Native_isSupportingSendmmsg(JNIEnv *env, jclass clazz);
diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollDatagramChannel.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollDatagramChannel.java
index 26cc442a73f..75d34150965 100644
--- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollDatagramChannel.java
+++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollDatagramChannel.java
@@ -16,6 +16,7 @@
package io.netty.channel.epoll;
import io.netty.buffer.ByteBuf;
+import io.netty.buffer.CompositeByteBuf;
import io.netty.channel.AddressedEnvelope;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelMetadata;
@@ -28,6 +29,7 @@
import io.netty.channel.socket.DatagramChannel;
import io.netty.channel.socket.DatagramChannelConfig;
import io.netty.channel.socket.DatagramPacket;
+import io.netty.util.internal.PlatformDependent;
import io.netty.util.internal.StringUtil;
import java.io.IOException;
@@ -264,6 +266,32 @@ protected void doWrite(ChannelOutboundBuffer in) throws Exception {
}
try {
+ // Check if sendmmsg(...) is supported which is only the case for GLIBC 2.14+
+ if (Native.IS_SUPPORTING_SENDMMSG && in.size() > 1) {
+ NativeDatagramPacketArray array = NativeDatagramPacketArray.getInstance(in);
+ int cnt = array.count();
+
+ if (cnt >= 1) {
+ // Try to use gathering writes via sendmmsg(...) syscall.
+ int offset = 0;
+ NativeDatagramPacketArray.NativeDatagramPacket[] packets = array.packets();
+
+ while (cnt > 0) {
+ int send = Native.sendmmsg(fd, packets, offset, cnt);
+ if (send == 0) {
+ // Did not write all messages.
+ setEpollOut();
+ return;
+ }
+ for (int i = 0; i < send; i++) {
+ in.remove();
+ }
+ cnt -= send;
+ offset += send;
+ }
+ continue;
+ }
+ }
boolean done = false;
for (int i = config().getWriteSpinCount() - 1; i >= 0; i--) {
if (doWriteMessage(msg)) {
@@ -288,7 +316,7 @@ protected void doWrite(ChannelOutboundBuffer in) throws Exception {
}
}
- private boolean doWriteMessage(Object msg) throws IOException {
+ private boolean doWriteMessage(Object msg) throws Exception {
final ByteBuf data;
InetSocketAddress remoteAddress;
if (msg instanceof AddressedEnvelope) {
@@ -319,6 +347,13 @@ private boolean doWriteMessage(Object msg) throws IOException {
long memoryAddress = data.memoryAddress();
writtenBytes = Native.sendToAddress(fd, memoryAddress, data.readerIndex(), data.writerIndex(),
remoteAddress.getAddress(), remoteAddress.getPort());
+ } else if (data instanceof CompositeByteBuf) {
+ IovArray array = IovArrayThreadLocal.get((CompositeByteBuf) data);
+ int cnt = array.count();
+ assert cnt != 0;
+
+ writtenBytes = Native.sendToAddresses(fd, array.memoryAddress(0),
+ cnt, remoteAddress.getAddress(), remoteAddress.getPort());
} else {
ByteBuffer nioData = data.internalNioBuffer(data.readerIndex(), data.readableBytes());
writtenBytes = Native.sendTo(fd, nioData, nioData.position(), nioData.limit(),
@@ -344,13 +379,24 @@ protected Object filterOutboundMessage(Object msg) {
if (msg instanceof ByteBuf) {
ByteBuf buf = (ByteBuf) msg;
- if (buf.hasMemoryAddress()) {
- return msg;
+ if (!buf.hasMemoryAddress() && (PlatformDependent.hasUnsafe() || !buf.isDirect())) {
+ if (buf instanceof CompositeByteBuf) {
+ // Special handling of CompositeByteBuf to reduce memory copies if some of the Components
+ // in the CompositeByteBuf are backed by a memoryAddress.
+ CompositeByteBuf comp = (CompositeByteBuf) buf;
+ if (!comp.isDirect() || comp.nioBufferCount() > Native.IOV_MAX) {
+ // more than 1024 buffers for gathering writes, so just do a memory copy.
+ buf = newDirectBuffer(buf);
+ assert buf.hasMemoryAddress();
+ }
+ } else {
+ // We can only handle buffers with a memory address, so we need to copy if a
+ // non-direct buffer is passed to write.
+ buf = newDirectBuffer(buf);
+ assert buf.hasMemoryAddress();
+ }
}
-
- // We can only handle direct buffers so we need to copy if a non direct is
- // passed to write.
- return newDirectBuffer(buf);
+ return buf;
}
if (msg instanceof AddressedEnvelope) {
@@ -363,7 +409,14 @@ protected Object filterOutboundMessage(Object msg) {
if (content.hasMemoryAddress()) {
return e;
}
-
+ if (content instanceof CompositeByteBuf) {
+ // Special handling of CompositeByteBuf to reduce memory copies if some of the Components
+ // in the CompositeByteBuf are backed by a memoryAddress.
+ CompositeByteBuf comp = (CompositeByteBuf) content;
+ if (comp.isDirect() && comp.nioBufferCount() <= Native.IOV_MAX) {
+ return e;
+ }
+ }
// We can only handle direct buffers so we need to copy if a non direct is
// passed to write.
return new DefaultAddressedEnvelope<ByteBuf, InetSocketAddress>(
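The new `doWrite` branch drains the outbound buffer in batches: each `Native.sendmmsg(...)` call may send only a prefix of the packet array, so the loop advances `offset`, decrements `cnt`, and a 0 return means the socket is saturated and `EPOLLOUT` must be awaited. A hedged sketch of that bookkeeping, with a functional stand-in for the native call (names are hypothetical):

```java
import java.util.function.BiFunction;

// Models the offset/count loop around Native.sendmmsg(fd, packets, offset, cnt):
// the sender receives (offset, remaining) and reports how many packets it sent.
final class BatchSendLoop {
    /** Returns how many packets were sent before the sender reported "would block" (0). */
    static int drain(int total, BiFunction<Integer, Integer, Integer> sender) {
        int offset = 0;
        int remaining = total;
        while (remaining > 0) {
            int sent = sender.apply(offset, remaining);
            if (sent == 0) {
                break; // socket buffer full: register for EPOLLOUT and resume later
            }
            offset += sent;      // skip the packets already handed to the kernel
            remaining -= sent;
        }
        return offset;
    }
}
```

In the real channel code, each successfully sent packet is also removed from the `ChannelOutboundBuffer` via `in.remove()` before the loop continues.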
diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollSocketChannel.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollSocketChannel.java
index d38bef175e3..975178ac8bc 100644
--- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollSocketChannel.java
+++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollSocketChannel.java
@@ -358,7 +358,7 @@ private boolean doWriteSingle(ChannelOutboundBuffer in) throws Exception {
private boolean doWriteMultiple(ChannelOutboundBuffer in) throws Exception {
if (PlatformDependent.hasUnsafe()) {
// this means we can cast to IovArray and write the IovArray directly.
- IovArray array = IovArray.get(in);
+ IovArray array = IovArrayThreadLocal.get(in);
int cnt = array.count();
if (cnt >= 1) {
// TODO: Handle the case where cnt == 1 specially.
diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/IovArray.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/IovArray.java
index e685d9760b4..e489837f868 100644
--- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/IovArray.java
+++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/IovArray.java
@@ -17,9 +17,7 @@
import io.netty.buffer.ByteBuf;
import io.netty.buffer.CompositeByteBuf;
-import io.netty.channel.ChannelOutboundBuffer;
import io.netty.channel.ChannelOutboundBuffer.MessageProcessor;
-import io.netty.util.concurrent.FastThreadLocal;
import io.netty.util.internal.PlatformDependent;
import java.nio.ByteBuffer;
@@ -52,37 +50,30 @@ final class IovArray implements MessageProcessor {
*/
private static final int IOV_SIZE = 2 * ADDRESS_SIZE;
- /** The needed memory to hold up to {@link Native#IOV_MAX} iov entries, where {@link Native#IOV_MAX} signified
+ /**
+ * The needed memory to hold up to {@link Native#IOV_MAX} iov entries, where {@link Native#IOV_MAX} signified
* the maximum number of {@code iovec} structs that can be passed to {@code writev(...)}.
*/
private static final int CAPACITY = Native.IOV_MAX * IOV_SIZE;
- private static final FastThreadLocal<IovArray> ARRAY = new FastThreadLocal<IovArray>() {
- @Override
- protected IovArray initialValue() throws Exception {
- return new IovArray();
- }
-
- @Override
- protected void onRemoval(IovArray value) throws Exception {
- // free the direct memory now
- PlatformDependent.freeMemory(value.memoryAddress);
- }
- };
-
private final long memoryAddress;
private int count;
private long size;
- private IovArray() {
+ IovArray() {
memoryAddress = PlatformDependent.allocateMemory(CAPACITY);
}
+ void clear() {
+ count = 0;
+ size = 0;
+ }
+
/**
* Try to add the given {@link ByteBuf}. Returns {@code true} on success,
* {@code false} otherwise.
*/
- private boolean add(ByteBuf buf) {
+ boolean add(ByteBuf buf) {
if (count == Native.IOV_MAX) {
// No more room!
return false;
@@ -124,7 +115,11 @@ private void add(long addr, int offset, int len) {
size += len;
}
- private boolean add(CompositeByteBuf buf) {
+ /**
+ * Try to add the given {@link CompositeByteBuf}. Returns {@code true} on success,
+ * {@code false} otherwise.
+ */
+ boolean add(CompositeByteBuf buf) {
ByteBuffer[] buffers = buf.nioBuffers();
if (count + buffers.length >= Native.IOV_MAX) {
// No more room!
@@ -196,6 +191,13 @@ long memoryAddress(int offset) {
return memoryAddress + IOV_SIZE * offset;
}
+ /**
* Release the {@link IovArray}. Once released, further use of it may crash the JVM!
+ */
+ void release() {
+ PlatformDependent.freeMemory(memoryAddress);
+ }
+
@Override
public boolean processMessage(Object msg) throws Exception {
if (msg instanceof ByteBuf) {
@@ -207,15 +209,4 @@ public boolean processMessage(Object msg) throws Exception {
}
return false;
}
-
- /**
- * Returns a {@link IovArray} which is filled with the flushed messages of {@link ChannelOutboundBuffer}.
- */
- static IovArray get(ChannelOutboundBuffer buffer) throws Exception {
- IovArray array = ARRAY.get();
- array.size = 0;
- array.count = 0;
- buffer.forEachFlushedMessage(array);
- return array;
- }
}
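`IovArray#memoryAddress(int)` above returns `memoryAddress + IOV_SIZE * offset`, where `IOV_SIZE` covers the two pointer-sized fields of a `struct iovec` (`iov_base` and `iov_len`). A small sketch of that slot arithmetic, assuming a 64-bit JVM where `ADDRESS_SIZE` is 8 bytes:

```java
// Address arithmetic behind IovArray#memoryAddress(int): each iovec slot is
// two pointer-sized fields wide, so slot i starts IOV_SIZE * i bytes past base.
final class IovSlots {
    static final int ADDRESS_SIZE = 8;            // assumption: 64-bit platform
    static final int IOV_SIZE = 2 * ADDRESS_SIZE; // struct iovec { void* iov_base; size_t iov_len; }

    static long slotAddress(long base, int index) {
        return base + (long) IOV_SIZE * index;
    }
}
```

With `Native.IOV_MAX` slots of this size, the single `allocateMemory(CAPACITY)` call reserves the whole array up front.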
diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/IovArrayThreadLocal.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/IovArrayThreadLocal.java
new file mode 100644
index 00000000000..b7f66dcb42b
--- /dev/null
+++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/IovArrayThreadLocal.java
@@ -0,0 +1,61 @@
+/*
+ * Copyright 2014 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License,
+ * version 2.0 (the "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+package io.netty.channel.epoll;
+
+import io.netty.buffer.CompositeByteBuf;
+import io.netty.channel.ChannelOutboundBuffer;
+import io.netty.util.concurrent.FastThreadLocal;
+
+/**
+ * Allow to obtain {@link IovArray} instances.
+ */
+final class IovArrayThreadLocal {
+
+ private static final FastThreadLocal<IovArray> ARRAY = new FastThreadLocal<IovArray>() {
+ @Override
+ protected IovArray initialValue() throws Exception {
+ return new IovArray();
+ }
+
+ @Override
+ protected void onRemoval(IovArray value) throws Exception {
+ // free the direct memory now
+ value.release();
+ }
+ };
+
+ /**
+ * Returns a {@link IovArray} which is filled with the flushed messages of {@link ChannelOutboundBuffer}.
+ */
+ static IovArray get(ChannelOutboundBuffer buffer) throws Exception {
+ IovArray array = ARRAY.get();
+ array.clear();
+ buffer.forEachFlushedMessage(array);
+ return array;
+ }
+
+ /**
+ * Returns a {@link IovArray} which is filled with the {@link CompositeByteBuf}.
+ */
+ static IovArray get(CompositeByteBuf buf) throws Exception {
+ IovArray array = ARRAY.get();
+ array.clear();
+ array.add(buf);
+ return array;
+ }
+
+ private IovArrayThreadLocal() { }
+}
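The class above hands out one `IovArray` per thread and calls `clear()` before each fill, so the native memory is allocated once and reused for every flush. The same reuse pattern, sketched with a plain `ThreadLocal` and a `StringBuilder` standing in for Netty's `FastThreadLocal` and `IovArray` (illustrative only):

```java
// One mutable scratch object per thread, reset before each use so no state
// leaks between calls; mirrors IovArrayThreadLocal.get(...) + IovArray.clear().
final class ReusableScratch {
    private static final ThreadLocal<StringBuilder> SB =
            ThreadLocal.withInitial(StringBuilder::new);

    static String render(String msg) {
        StringBuilder sb = SB.get();
        sb.setLength(0);          // the moral equivalent of IovArray#clear()
        return sb.append(msg).toString();
    }
}
```

`FastThreadLocal` additionally gets an `onRemoval(...)` callback, which is where the native memory is freed via `IovArray#release()`.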
diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/Native.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/Native.java
index e06f12477b9..5bbd890692b 100644
--- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/Native.java
+++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/Native.java
@@ -52,6 +52,8 @@ final class Native {
public static final int EPOLLACCEPT = 0x04;
public static final int EPOLLRDHUP = 0x08;
public static final int IOV_MAX = iovMax();
+ public static final int UIO_MAX_IOV = uioMaxIov();
+ public static final boolean IS_SUPPORTING_SENDMMSG = isSupportingSendmmsg();
public static native int eventFd();
public static native void eventFdWrite(int fd, long value);
@@ -118,12 +120,37 @@ public static int sendToAddress(
private static native int sendToAddress(
int fd, long memoryAddress, int pos, int limit, byte[] address, int scopeId, int port) throws IOException;
+ public static int sendToAddresses(
+ int fd, long memoryAddress, int length, InetAddress addr, int port) throws IOException {
+ // just duplicate the toNativeInetAddress code here to minimize object creation as this method is expected
+ // to be called frequently
+ byte[] address;
+ int scopeId;
+ if (addr instanceof Inet6Address) {
+ address = addr.getAddress();
+ scopeId = ((Inet6Address) addr).getScopeId();
+ } else {
+ // convert to ipv4 mapped ipv6 address;
+ scopeId = 0;
+ address = ipv4MappedIpv6Address(addr.getAddress());
+ }
+ return sendToAddresses(fd, memoryAddress, length, address, scopeId, port);
+ }
+
+ private static native int sendToAddresses(
+ int fd, long memoryAddress, int length, byte[] address, int scopeId, int port) throws IOException;
+
public static native EpollDatagramChannel.DatagramSocketAddress recvFrom(
int fd, ByteBuffer buf, int pos, int limit) throws IOException;
public static native EpollDatagramChannel.DatagramSocketAddress recvFromAddress(
int fd, long memoryAddress, int pos, int limit) throws IOException;
+ public static native int sendmmsg(
+ int fd, NativeDatagramPacketArray.NativeDatagramPacket[] msgs, int offset, int len) throws IOException;
+
+ private static native boolean isSupportingSendmmsg();
+
// socket operations
public static int socketStreamFd() {
try {
@@ -148,7 +175,7 @@ public static void bind(int fd, InetAddress addr, int port) throws IOException {
bind(fd, address.address, address.scopeId, port);
}
- private static byte[] ipv4MappedIpv6Address(byte[] ipv4) {
+ static byte[] ipv4MappedIpv6Address(byte[] ipv4) {
byte[] address = new byte[16];
System.arraycopy(IPV4_MAPPED_IPV6_PREFIX, 0, address, 0, IPV4_MAPPED_IPV6_PREFIX.length);
System.arraycopy(ipv4, 0, address, 12, ipv4.length);
@@ -226,6 +253,8 @@ private static class NativeInetAddress {
private static native int iovMax();
+ private static native int uioMaxIov();
+
private Native() {
// utility
}
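`ipv4MappedIpv6Address(...)`, made package-visible above, produces the `::ffff:a.b.c.d` form from RFC 4291 section 2.5.5: 80 zero bits, then 16 one bits, then the four IPv4 bytes in positions 12-15. A self-contained sketch of that layout (the prefix constant is inlined here for illustration):

```java
// Builds an IPv4-mapped IPv6 address: bytes 10-11 are 0xff and the IPv4
// address occupies the last four bytes (rfc 4291 section 2.5.5).
final class MappedV6 {
    static byte[] ipv4MappedIpv6Address(byte[] ipv4) {
        byte[] address = new byte[16];
        address[10] = (byte) 0xff;
        address[11] = (byte) 0xff;
        System.arraycopy(ipv4, 0, address, 12, 4);
        return address;
    }
}
```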
diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/NativeDatagramPacketArray.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/NativeDatagramPacketArray.java
new file mode 100644
index 00000000000..2861e51032d
--- /dev/null
+++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/NativeDatagramPacketArray.java
@@ -0,0 +1,158 @@
+/*
+ * Copyright 2014 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License,
+ * version 2.0 (the "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+package io.netty.channel.epoll;
+
+import io.netty.buffer.ByteBuf;
+import io.netty.channel.ChannelOutboundBuffer;
+import io.netty.channel.socket.DatagramPacket;
+import io.netty.util.concurrent.FastThreadLocal;
+
+import java.net.Inet6Address;
+import java.net.InetAddress;
+import java.net.InetSocketAddress;
+
+/**
+ * Support <a href="http://linux.die.net/man/2/sendmmsg">sendmmsg(...)</a> on linux with GLIBC 2.14+
+ */
+final class NativeDatagramPacketArray implements ChannelOutboundBuffer.MessageProcessor {
+
+ private static final FastThreadLocal<NativeDatagramPacketArray> ARRAY =
+ new FastThreadLocal<NativeDatagramPacketArray>() {
+ @Override
+ protected NativeDatagramPacketArray initialValue() throws Exception {
+ return new NativeDatagramPacketArray();
+ }
+
+ @Override
+ protected void onRemoval(NativeDatagramPacketArray value) throws Exception {
+ NativeDatagramPacket[] array = value.packets;
+ // Release all packets
+ for (int i = 0; i < array.length; i++) {
+ array[i].release();
+ }
+ }
+ };
+
+ // Use UIO_MAX_IOV as this is the maximum number we can write with one sendmmsg(...) call.
+ private final NativeDatagramPacket[] packets = new NativeDatagramPacket[Native.UIO_MAX_IOV];
+ private int count;
+
+ private NativeDatagramPacketArray() {
+ for (int i = 0; i < packets.length; i++) {
+ packets[i] = new NativeDatagramPacket();
+ }
+ }
+
+ /**
+ * Try to add the given {@link DatagramPacket}. Returns {@code true} on success,
+ * {@code false} otherwise.
+ */
+ boolean add(DatagramPacket packet) {
+ if (count == packets.length) {
+ return false;
+ }
+ ByteBuf content = packet.content();
+ int len = content.readableBytes();
+ if (len == 0) {
+ return true;
+ }
+ NativeDatagramPacket p = packets[count];
+ InetSocketAddress recipient = packet.recipient();
+ if (!p.init(content, recipient)) {
+ return false;
+ }
+
+ count++;
+ return true;
+ }
+
+ @Override
+ public boolean processMessage(Object msg) throws Exception {
+ return msg instanceof DatagramPacket && add((DatagramPacket) msg);
+ }
+
+ /**
+ * Returns the number of packets added to this array.
+ */
+ int count() {
+ return count;
+ }
+
+ /**
+ * Returns an array with {@link #count()} {@link NativeDatagramPacket}s filled.
+ */
+ NativeDatagramPacket[] packets() {
+ return packets;
+ }
+
+ /**
+ * Returns a {@link NativeDatagramPacketArray} which is filled with the flushed messages of
+ * {@link ChannelOutboundBuffer}.
+ */
+ static NativeDatagramPacketArray getInstance(ChannelOutboundBuffer buffer) throws Exception {
+ NativeDatagramPacketArray array = ARRAY.get();
+ array.count = 0;
+ buffer.forEachFlushedMessage(array);
+ return array;
+ }
+
+ /**
+ * Used to pass needed data to JNI.
+ */
+ @SuppressWarnings("unused")
+ static final class NativeDatagramPacket {
+ // Each NativeDatagramPackets holds a IovArray which is used for gathering writes.
+ // This is ok as NativeDatagramPacketArray is always obtained via a FastThreadLocal and
+ // so the memory needed is quite small anyway.
+ private final IovArray array = new IovArray();
+
+ // This is the actual struct iovec*
+ private long memoryAddress;
+ private int count;
+
+ private byte[] addr;
+ private int scopeId;
+ private int port;
+
+ private void release() {
+ array.release();
+ }
+
+ /**
+ * Init this instance and return {@code true} if the init was successful.
+ */
+ private boolean init(ByteBuf buf, InetSocketAddress recipient) {
+ array.clear();
+ if (!array.add(buf)) {
+ return false;
+ }
+ // always start from offset 0
+ memoryAddress = array.memoryAddress(0);
+ count = array.count();
+
+ InetAddress address = recipient.getAddress();
+ if (address instanceof Inet6Address) {
+ addr = address.getAddress();
+ scopeId = ((Inet6Address) address).getScopeId();
+ } else {
+ addr = Native.ipv4MappedIpv6Address(address.getAddress());
+ scopeId = 0;
+ }
+ port = recipient.getPort();
+ return true;
+ }
+ }
+}
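`NativeDatagramPacketArray#add(...)` above returns `false` once `count == packets.length`, and `processMessage` propagates that, so `forEachFlushedMessage` stops filling the batch exactly at capacity. The capacity contract in isolation (a generic stand-in, not Netty API):

```java
// A fixed-capacity batch whose add(...) signals "stop iterating" by returning
// false, matching the ChannelOutboundBuffer.MessageProcessor contract.
final class BoundedBatch {
    private final Object[] slots;
    private int count;

    BoundedBatch(int capacity) { slots = new Object[capacity]; }

    boolean add(Object msg) {
        if (count == slots.length) {
            return false;     // batch full: caller flushes what it has
        }
        slots[count++] = msg;
        return true;
    }

    int count() { return count; }
}
```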
| diff --git a/testsuite/src/test/java/io/netty/testsuite/transport/socket/DatagramUnicastTest.java b/testsuite/src/test/java/io/netty/testsuite/transport/socket/DatagramUnicastTest.java
index 0ab73a96136..07dd9ded4ca 100644
--- a/testsuite/src/test/java/io/netty/testsuite/transport/socket/DatagramUnicastTest.java
+++ b/testsuite/src/test/java/io/netty/testsuite/transport/socket/DatagramUnicastTest.java
@@ -16,6 +16,8 @@
package io.netty.testsuite.transport.socket;
import io.netty.bootstrap.Bootstrap;
+import io.netty.buffer.ByteBuf;
+import io.netty.buffer.CompositeByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
@@ -32,36 +34,74 @@
public class DatagramUnicastTest extends AbstractDatagramTest {
@Test
- public void testSimpleSend() throws Throwable {
+ public void testSimpleSendDirectByteBuf() throws Throwable {
run();
}
- public void testSimpleSend(Bootstrap sb, Bootstrap cb) throws Throwable {
- final CountDownLatch latch = new CountDownLatch(1);
+ public void testSimpleSendDirectByteBuf(Bootstrap sb, Bootstrap cb) throws Throwable {
+ testSimpleSend0(sb, cb, Unpooled.directBuffer(), true, 1);
+ testSimpleSend0(sb, cb, Unpooled.directBuffer(), true, 4);
+ }
- sb.handler(new SimpleChannelInboundHandler<DatagramPacket>() {
- @Override
- public void channelRead0(ChannelHandlerContext ctx, DatagramPacket msg) throws Exception {
- assertEquals(1, msg.content().readInt());
- latch.countDown();
- }
- });
+ @Test
+ public void testSimpleSendHeapByteBuf() throws Throwable {
+ run();
+ }
- cb.handler(new SimpleChannelInboundHandler<Object>() {
- @Override
- public void channelRead0(ChannelHandlerContext ctx, Object msgs) throws Exception {
- // Nothing will be sent.
- }
- });
+ public void testSimpleSendHeapByteBuf(Bootstrap sb, Bootstrap cb) throws Throwable {
+ testSimpleSend0(sb, cb, Unpooled.buffer(), true, 1);
+ testSimpleSend0(sb, cb, Unpooled.buffer(), true, 4);
+ }
- Channel sc = sb.bind().sync().channel();
- Channel cc = cb.bind().sync().channel();
+ @Test
+ public void testSimpleSendCompositeDirectByteBuf() throws Throwable {
+ run();
+ }
- cc.writeAndFlush(new DatagramPacket(Unpooled.copyInt(1), addr)).sync();
- assertTrue(latch.await(10, TimeUnit.SECONDS));
+ public void testSimpleSendCompositeDirectByteBuf(Bootstrap sb, Bootstrap cb) throws Throwable {
+ CompositeByteBuf buf = Unpooled.compositeBuffer();
+ buf.addComponent(Unpooled.directBuffer(2, 2));
+ buf.addComponent(Unpooled.directBuffer(2, 2));
+ testSimpleSend0(sb, cb, buf, true, 1);
- sc.close().sync();
- cc.close().sync();
+ CompositeByteBuf buf2 = Unpooled.compositeBuffer();
+ buf2.addComponent(Unpooled.directBuffer(2, 2));
+ buf2.addComponent(Unpooled.directBuffer(2, 2));
+ testSimpleSend0(sb, cb, buf2, true, 4);
+ }
+
+ @Test
+ public void testSimpleSendCompositeHeapByteBuf() throws Throwable {
+ run();
+ }
+
+ public void testSimpleSendCompositeHeapByteBuf(Bootstrap sb, Bootstrap cb) throws Throwable {
+ CompositeByteBuf buf = Unpooled.compositeBuffer();
+ buf.addComponent(Unpooled.buffer(2, 2));
+ buf.addComponent(Unpooled.buffer(2, 2));
+ testSimpleSend0(sb, cb, buf, true, 1);
+
+ CompositeByteBuf buf2 = Unpooled.compositeBuffer();
+ buf2.addComponent(Unpooled.buffer(2, 2));
+ buf2.addComponent(Unpooled.buffer(2, 2));
+ testSimpleSend0(sb, cb, buf2, true, 4);
+ }
+
+ @Test
+ public void testSimpleSendCompositeMixedByteBuf() throws Throwable {
+ run();
+ }
+
+ public void testSimpleSendCompositeMixedByteBuf(Bootstrap sb, Bootstrap cb) throws Throwable {
+ CompositeByteBuf buf = Unpooled.compositeBuffer();
+ buf.addComponent(Unpooled.directBuffer(2, 2));
+ buf.addComponent(Unpooled.buffer(2, 2));
+ testSimpleSend0(sb, cb, buf, true, 1);
+
+ CompositeByteBuf buf2 = Unpooled.compositeBuffer();
+ buf2.addComponent(Unpooled.directBuffer(2, 2));
+ buf2.addComponent(Unpooled.buffer(2, 2));
+ testSimpleSend0(sb, cb, buf2, true, 4);
}
@Test
@@ -69,9 +109,16 @@ public void testSimpleSendWithoutBind() throws Throwable {
run();
}
- @SuppressWarnings("deprecation")
public void testSimpleSendWithoutBind(Bootstrap sb, Bootstrap cb) throws Throwable {
- final CountDownLatch latch = new CountDownLatch(1);
+ testSimpleSend0(sb, cb, Unpooled.directBuffer(), false, 1);
+ testSimpleSend0(sb, cb, Unpooled.directBuffer(), false, 4);
+ }
+
+ @SuppressWarnings("deprecation")
+ private void testSimpleSend0(Bootstrap sb, Bootstrap cb, ByteBuf buf, boolean bindClient, int count)
+ throws Throwable {
+ buf.writeInt(1);
+ final CountDownLatch latch = new CountDownLatch(count);
sb.handler(new SimpleChannelInboundHandler<DatagramPacket>() {
@Override
@@ -87,12 +134,22 @@ public void channelRead0(ChannelHandlerContext ctx, Object msgs) throws Exceptio
// Nothing will be sent.
}
});
- cb.option(ChannelOption.DATAGRAM_CHANNEL_ACTIVE_ON_REGISTRATION, true);
Channel sc = sb.bind().sync().channel();
- Channel cc = cb.register().sync().channel();
-
- cc.writeAndFlush(new DatagramPacket(Unpooled.copyInt(1), addr)).sync();
+ Channel cc;
+ if (bindClient) {
+ cc = cb.bind().sync().channel();
+ } else {
+ cb.option(ChannelOption.DATAGRAM_CHANNEL_ACTIVE_ON_REGISTRATION, true);
+ cc = cb.register().sync().channel();
+ }
+
+ for (int i = 0; i < count; i++) {
+ cc.write(new DatagramPacket(buf.retain().duplicate(), addr));
+ }
+ // release as we used buf.retain() before
+ buf.release();
+ cc.flush();
assertTrue(latch.await(10, TimeUnit.SECONDS));
sc.close().sync();
| train | train | 2014-09-09T06:36:40 | 2014-08-01T00:29:56Z | normanmaurer | val |
netty/netty/2862_2863 | netty/netty | netty/netty/2862 | netty/netty/2863 | [
"timestamp(timedelta=21.0, similarity=0.9999999999999999)"
] | fc45473e7975f18fc27dbbd147ab67828e72ad2d | 7b19ba508cc0f84702c0787409b7284f73290247 | [
"Addressed with aba28d4fef2e3bc7b5afe41662ad0672828d548d\n"
] | [
"replace `i * 2` with `i << 1`. Bit-shifting for the win ;)\n",
"Good call! Done.\n",
"This can be collapsed to else if (...)\n",
"Done.\n",
"Kill it or fill it ;)\n",
"I was sneaky on this one. Look at the below lines for the `<ul>...</ul>`. Is OK?\n",
"Ah sorry missed this.. Sounds good :)\n",
"Do... | 2014-09-04T10:59:21Z | [
"feature"
] | IPv6 address to string rfc5952 | The java docs/spec for [Inet6Address.getHostName](http://docs.oracle.com/javase/7/docs/api/java/net/InetAddress.html#getHostName%28%29) does not specify the RFC's recommended string representation (http://tools.ietf.org/html/rfc5952#section-4). Therefore implementations of this method may provide varying results. It would be nice to have a io.netty.util.NetUtil static method to translate Inet6Address object to string according to the RFC's recommendation.
Guava provides this functionality in [InetAddress.toAddrString](https://code.google.com/p/guava-libraries/source/browse/guava/src/com/google/common/net/InetAddresses.java?r=e17759dcf1575e634058443bce106b34e15215cd)
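The RFC's section 4 rules amount to: lowercase hex digits, no leading zeros within a field, and `::` shortening only the single longest run of zero fields (the first such run wins a tie; a lone zero field is never compressed). A hedged sketch of those rules over the eight 16-bit words of an address (not the Netty or Guava implementation):

```java
// rfc 5952 section 4 formatting over the eight 16-bit words of an IPv6
// address: "::" replaces the first longest run (length >= 2) of zero words.
final class V6Format {
    static String toAddrString(int[] words) {
        int bestStart = -1, bestLen = 1, runStart = -1;
        for (int i = 0; i < 8; i++) {
            if (words[i] == 0) {
                if (runStart < 0) runStart = i;
                if (i - runStart + 1 > bestLen) {   // strict '>' keeps the first tie
                    bestLen = i - runStart + 1;
                    bestStart = runStart;
                }
            } else {
                runStart = -1;
            }
        }
        StringBuilder sb = new StringBuilder();
        int i = 0;
        while (i < 8) {
            if (i == bestStart) {
                sb.append("::");                    // compress exactly one run
                i += bestLen;
            } else {
                if (sb.length() > 0 && sb.charAt(sb.length() - 1) != ':') {
                    sb.append(':');
                }
                sb.append(Integer.toHexString(words[i])); // lowercase, no leading zeros
                i++;
            }
        }
        return sb.toString();
    }
}
```

For example, `{0x2001, 0xdb8, 0, 0, 0, 0, 0, 1}` renders as `2001:db8::1`.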
| [
"common/src/main/java/io/netty/util/NetUtil.java"
] | [
"common/src/main/java/io/netty/util/NetUtil.java"
] | [
"common/src/test/java/io/netty/util/NetUtilTest.java",
"microbench/src/test/java/io/netty/microbenchmark/common/NetUtilBenchmark.java"
] | diff --git a/common/src/main/java/io/netty/util/NetUtil.java b/common/src/main/java/io/netty/util/NetUtil.java
index 307c5a3196b..3a1f1057e0c 100644
--- a/common/src/main/java/io/netty/util/NetUtil.java
+++ b/common/src/main/java/io/netty/util/NetUtil.java
@@ -27,6 +27,7 @@
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.net.SocketException;
+import java.net.UnknownHostException;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;
@@ -68,6 +69,51 @@ public final class NetUtil {
*/
public static final int SOMAXCONN;
+ /**
+ * This defines how many words (represented as ints) are needed to represent an IPv6 address
+ */
+ private static final int IPV6_WORD_COUNT = 8;
+
+ /**
+ * The maximum number of characters for an IPV6 string with no scope
+ */
+ private static final int IPV6_MAX_CHAR_COUNT = 39;
+
+ /**
+ * Number of bytes needed to represent an IPv6 value
+ */
+ private static final int IPV6_BYTE_COUNT = 16;
+
+ /**
+ * Maximum amount of value adding characters in between IPV6 separators
+ */
+ private static final int IPV6_MAX_CHAR_BETWEEN_SEPARATOR = 4;
+
+ /**
+ * Minimum number of separators that must be present in an IPv6 string
+ */
+ private static final int IPV6_MIN_SEPARATORS = 2;
+
+ /**
+ * Maximum number of separators that must be present in an IPv6 string
+ */
+ private static final int IPV6_MAX_SEPARATORS = 8;
+
+ /**
+ * Number of bytes needed to represent an IPv4 value
+ */
+ private static final int IPV4_BYTE_COUNT = 4;
+
+ /**
+ * Maximum amount of value adding characters in between IPV4 separators
+ */
+ private static final int IPV4_MAX_CHAR_BETWEEN_SEPARATOR = 3;
+
+ /**
+ * Number of separators that must be present in an IPv4 string
+ */
+ private static final int IPV4_SEPARATORS = 3;
+
/**
* The logger being used by this class
*/
@@ -223,8 +269,8 @@ public static byte[] createByteArrayFromIpAddressString(String ipAddressString)
StringTokenizer tokenizer = new StringTokenizer(ipAddressString, ".");
String token;
int tempInt;
- byte[] byteAddress = new byte[4];
- for (int i = 0; i < 4; i ++) {
+ byte[] byteAddress = new byte[IPV4_BYTE_COUNT];
+ for (int i = 0; i < IPV4_BYTE_COUNT; i ++) {
token = tokenizer.nextToken();
tempInt = Integer.parseInt(token);
byteAddress[i] = (byte) tempInt;
@@ -294,11 +340,11 @@ public static byte[] createByteArrayFromIpAddressString(String ipAddressString)
}
}
- byte[] ipByteArray = new byte[16];
+ byte[] ipByteArray = new byte[IPV6_BYTE_COUNT];
// Finally convert these strings to bytes...
for (int i = 0; i < hexStrings.size(); i ++) {
- convertToBytes(hexStrings.get(i), ipByteArray, i * 2);
+ convertToBytes(hexStrings.get(i), ipByteArray, i << 1);
}
// Now if there are any decimal values, we know where they go...
@@ -553,10 +599,14 @@ public static boolean isValidIp4Word(String word) {
return Integer.parseInt(word) <= 255;
}
- static boolean isValidHexChar(char c) {
+ private static boolean isValidHexChar(char c) {
return c >= '0' && c <= '9' || c >= 'A' && c <= 'F' || c >= 'a' && c <= 'f';
}
+ private static boolean isValidNumericChar(char c) {
+ return c >= '0' && c <= '9';
+ }
+
/**
* Takes a string and parses it to see if it is a valid IPV4 address.
*
@@ -605,6 +655,368 @@ public static boolean isValidIpV4Address(String value) {
return periods == 3;
}
+ /**
+ * Returns the {@link Inet6Address} representation of a {@link CharSequence} IP address.
+ * <p>
+ * This method will treat all IPv4 type addresses as "IPv4 mapped" (see {@link #getByName(CharSequence, boolean)})
+ * @param ip {@link CharSequence} IP address to be converted to a {@link Inet6Address}
+ * @return {@link Inet6Address} representation of the {@code ip} or {@code null} if not a valid IP address.
+ */
+ public static Inet6Address getByName(CharSequence ip) {
+ return getByName(ip, true);
+ }
+
+ /**
+ * Returns the {@link Inet6Address} representation of a {@link CharSequence} IP address.
+ * <p>
+ * The {@code ipv4Mapped} parameter specifies how IPv4 addresses should be treated.
+ * "IPv4 mapped" format as
+ * defined in <a href="http://tools.ietf.org/html/rfc4291#section-2.5.5">rfc 4291 section 2</a> is supported.
+ * @param ip {@link CharSequence} IP address to be converted to a {@link Inet6Address}
+ * @param ipv4Mapped
+ * <ul>
+ * <li>{@code true} To allow IPv4 mapped inputs to be translated into {@link Inet6Address}</li>
+ * <li>{@code false} Don't turn IPv4 addresses into mapped addresses</li>
+ * </ul>
+ * @return {@link Inet6Address} representation of the {@code ip} or {@code null} if not a valid IP address.
+ */
+ public static Inet6Address getByName(CharSequence ip, boolean ipv4Mapped) {
+ final byte[] bytes = new byte[IPV6_BYTE_COUNT];
+ final int ipLength = ip.length();
+ int compressBegin = 0;
+ int compressLength = 0;
+ int currentIndex = 0;
+ int value = 0;
+ int begin = -1;
+ int i = 0;
+ int ipv6Seperators = 0;
+ int ipv4Seperators = 0;
+ int tmp = 0;
+ boolean needsShift = false;
+ for (; i < ipLength; ++i) {
+ final char c = ip.charAt(i);
+ switch (c) {
+ case ':':
+ ++ipv6Seperators;
+ if (i - begin > IPV6_MAX_CHAR_BETWEEN_SEPARATOR ||
+ ipv4Seperators > 0 || ipv6Seperators > IPV6_MAX_SEPARATORS ||
+ currentIndex + 1 >= bytes.length) {
+ return null;
+ }
+ value <<= (IPV6_MAX_CHAR_BETWEEN_SEPARATOR - (i - begin)) << 2;
+
+ if (compressLength > 0) {
+ compressLength -= 2;
+ }
+
+ // The value integer holds at most 4 bytes from right (most significant) to left (least significant).
+ // The following bit shifting is used to extract and re-order the individual bytes to achieve a
+ // left (most significant) to right (least significant) ordering.
+ bytes[currentIndex++] = (byte) (((value & 0xf) << 4) | ((value >> 4) & 0xf));
+ bytes[currentIndex++] = (byte) ((((value >> 8) & 0xf) << 4) | ((value >> 12) & 0xf));
+ tmp = i + 1;
+ if (tmp < ipLength && ip.charAt(tmp) == ':') {
+ ++tmp;
+ if (compressBegin != 0 || (tmp < ipLength && ip.charAt(tmp) == ':')) {
+ return null;
+ }
+ ++ipv6Seperators;
+ needsShift = ipv6Seperators == 2 && value == 0;
+ compressBegin = currentIndex;
+ compressLength = bytes.length - compressBegin - 2;
+ ++i;
+ }
+ value = 0;
+ begin = -1;
+ break;
+ case '.':
+ ++ipv4Seperators;
+ if (i - begin > IPV4_MAX_CHAR_BETWEEN_SEPARATOR
+ || ipv4Seperators > IPV4_SEPARATORS
+ || (ipv6Seperators > 0 && (currentIndex + compressLength < 12))
+ || i + 1 >= ipLength
+ || currentIndex >= bytes.length
+ || begin < 0
+ || (begin == 0 && (i == 3 && (!isValidNumericChar(ip.charAt(2)) ||
+ !isValidNumericChar(ip.charAt(1)) ||
+ !isValidNumericChar(ip.charAt(0))) ||
+ i == 2 && (!isValidNumericChar(ip.charAt(1)) ||
+ !isValidNumericChar(ip.charAt(0))) ||
+ i == 1 && !isValidNumericChar(ip.charAt(0))))) {
+ return null;
+ }
+ value <<= (IPV4_MAX_CHAR_BETWEEN_SEPARATOR - (i - begin)) << 2;
+
+ // The value integer holds at most 3 bytes from right (most significant) to left (least significant).
+ // The following bit shifting is to restructure the bytes to be left (most significant) to
+ // right (least significant) while also accounting for each IPv4 digit is base 10.
+ begin = (value & 0xf) * 100 + ((value >> 4) & 0xf) * 10 + ((value >> 8) & 0xf);
+ if (begin < 0 || begin > 255) {
+ return null;
+ }
+ bytes[currentIndex++] = (byte) begin;
+ value = 0;
+ begin = -1;
+ break;
+ default:
+ if (!isValidHexChar(c) || (ipv4Seperators > 0 && !isValidNumericChar(c))) {
+ return null;
+ }
+ if (begin < 0) {
+ begin = i;
+ } else if (i - begin > IPV6_MAX_CHAR_BETWEEN_SEPARATOR) {
+ return null;
+ }
+ // The value is treated as a sort of array of numbers because we are dealing with
+ // at most 4 consecutive bytes we can use bit shifting to accomplish this.
+ // The most significant byte will be encountered first, and reside in the right most
+ // position of the following integer
+ value += getIntValue(c) << ((i - begin) << 2);
+ break;
+ }
+ }
+
+ final boolean isCompressed = compressBegin > 0;
+ // Finish up last set of data that was accumulated in the loop (or before the loop)
+ if (ipv4Seperators > 0) {
+ if (begin > 0 && i - begin > IPV4_MAX_CHAR_BETWEEN_SEPARATOR ||
+ ipv4Seperators != IPV4_SEPARATORS ||
+ currentIndex >= bytes.length) {
+ return null;
+ }
+ if (ipv6Seperators == 0) {
+ compressLength = 12;
+ } else if (ipv6Seperators >= IPV6_MIN_SEPARATORS &&
+ ip.charAt(ipLength - 1) != ':' &&
+ (!isCompressed && (ipv6Seperators == 6 && ip.charAt(0) != ':') ||
+ isCompressed && (ipv6Seperators + 1 < IPV6_MAX_SEPARATORS &&
+ (ip.charAt(0) != ':' || compressBegin <= 2)))) {
+ compressLength -= 2;
+ } else {
+ return null;
+ }
+ value <<= (IPV4_MAX_CHAR_BETWEEN_SEPARATOR - (i - begin)) << 2;
+
+ // The value integer holds at most 3 bytes from right (most significant) to left (least significant).
+ // The following bit shifting is to restructure the bytes to be left (most significant) to
+ // right (least significant) while also accounting for each IPv4 digit is base 10.
+ begin = (value & 0xf) * 100 + ((value >> 4) & 0xf) * 10 + ((value >> 8) & 0xf);
+ if (begin < 0 || begin > 255) {
+ return null;
+ }
+ bytes[currentIndex++] = (byte) begin;
+ } else {
+ tmp = ipLength - 1;
+ if (begin > 0 && i - begin > IPV6_MAX_CHAR_BETWEEN_SEPARATOR ||
+ ipv6Seperators < IPV6_MIN_SEPARATORS ||
+ !isCompressed && (ipv6Seperators + 1 != IPV6_MAX_SEPARATORS ||
+ ip.charAt(0) == ':' || ip.charAt(tmp) == ':') ||
+ isCompressed && (ipv6Seperators > IPV6_MAX_SEPARATORS ||
+ (ipv6Seperators == IPV6_MAX_SEPARATORS &&
+ (compressBegin <= 2 && ip.charAt(0) != ':' ||
+ compressBegin >= 14 && ip.charAt(tmp) != ':'))) ||
+ currentIndex + 1 >= bytes.length) {
+ return null;
+ }
+ if (begin >= 0 && i - begin <= IPV6_MAX_CHAR_BETWEEN_SEPARATOR) {
+ value <<= (IPV6_MAX_CHAR_BETWEEN_SEPARATOR - (i - begin)) << 2;
+ }
+ // The value integer holds at most 4 bytes from right (most significant) to left (least significant).
+ // The following bit shifting is used to extract and re-order the individual bytes to achieve a
+ // left (most significant) to right (least significant) ordering.
+ bytes[currentIndex++] = (byte) (((value & 0xf) << 4) | ((value >> 4) & 0xf));
+ bytes[currentIndex++] = (byte) ((((value >> 8) & 0xf) << 4) | ((value >> 12) & 0xf));
+ }
+
+ i = currentIndex + compressLength;
+ if (needsShift || i >= bytes.length) {
+ // Right shift array
+ if (i >= bytes.length) {
+ ++compressBegin;
+ }
+ for (i = currentIndex; i < bytes.length; ++i) {
+ for (begin = bytes.length - 1; begin >= compressBegin; --begin) {
+ bytes[begin] = bytes[begin - 1];
+ }
+ bytes[begin] = 0;
+ ++compressBegin;
+ }
+ } else {
+ // Selectively move elements
+ for (i = 0; i < compressLength; ++i) {
+ begin = i + compressBegin;
+ currentIndex = begin + compressLength;
+ if (currentIndex < bytes.length) {
+ bytes[currentIndex] = bytes[begin];
+ bytes[begin] = 0;
+ } else {
+ break;
+ }
+ }
+ }
+
+ if (ipv4Mapped && ipv4Seperators > 0 &&
+ bytes[0] == 0 && bytes[1] == 0 && bytes[2] == 0 && bytes[3] == 0 && bytes[4] == 0 &&
+ bytes[5] == 0 && bytes[6] == 0 && bytes[7] == 0 && bytes[8] == 0 && bytes[9] == 0) {
+ bytes[10] = bytes[11] = (byte) 0xff;
+ }
+
+ try {
+ return Inet6Address.getByAddress(null, bytes, -1);
+ } catch (UnknownHostException e) {
+ throw new RuntimeException(e); // Should never happen
+ }
+ }
+
+ /**
+ * Returns the {@link String} representation of an {@link InetAddress}.
+ * <ul>
+ * <li>Inet4Address results are identical to {@link InetAddress#getHostAddress()}</li>
+ * <li>Inet6Address results adhere to
+ * <a href="http://tools.ietf.org/html/rfc5952#section-4">rfc 5952 section 4</a></li>
+ * </ul>
+ * <p>
+ * The output does not include Scope ID.
+ * @param ip {@link InetAddress} to be converted to an address string
+ * @return {@code String} containing the text-formatted IP address
+ */
+ public static String toAddressString(InetAddress ip) {
+ return toAddressString(ip, false);
+ }
+
+ /**
+ * Returns the {@link String} representation of an {@link InetAddress}.
+ * <ul>
+ * <li>Inet4Address results are identical to {@link InetAddress#getHostAddress()}</li>
+ * <li>Inet6Address results adhere to
+ * <a href="http://tools.ietf.org/html/rfc5952#section-4">rfc 5952 section 4</a> if
+ * {@code ipv4Mapped} is false. If {@code ipv4Mapped} is true then "IPv4 mapped" format
+ * from <a href="http://tools.ietf.org/html/rfc4291#section-2.5.5">rfc 4291 section 2</a> will be supported.
+ * The compressed result will always obey the compression rules defined in
+ * <a href="http://tools.ietf.org/html/rfc5952#section-4">rfc 5952 section 4</a></li>
+ * </ul>
+ * <p>
+ * The output does not include Scope ID.
+ * @param ip {@link InetAddress} to be converted to an address string
+ * @param ipv4Mapped
+ * <ul>
+ * <li>{@code true} to stray from strict rfc 5952 and support the "IPv4 mapped" format
+ * defined in <a href="http://tools.ietf.org/html/rfc4291#section-2.5.5">rfc 4291 section 2</a> while still
+ * following the updated guidelines in
+ * <a href="http://tools.ietf.org/html/rfc5952#section-4">rfc 5952 section 4</a></li>
+ * <li>{@code false} to strictly follow rfc 5952</li>
+ * </ul>
+ * @return {@code String} containing the text-formatted IP address
+ */
+ public static String toAddressString(InetAddress ip, boolean ipv4Mapped) {
+ if (ip instanceof Inet4Address) {
+ return ip.getHostAddress();
+ }
+ if (!(ip instanceof Inet6Address)) {
+ throw new IllegalArgumentException("Unhandled type: " + ip.getClass());
+ }
+
+ final byte[] bytes = ip.getAddress();
+ final int[] words = new int[IPV6_WORD_COUNT];
+ int i;
+ for (i = 0; i < words.length; ++i) {
+ words[i] = ((bytes[i << 1] & 0xff) << 8) | (bytes[(i << 1) + 1] & 0xff);
+ }
+
+ // Find longest run of 0s, tie goes to first found instance
+ int currentStart = -1;
+ int currentLength = 0;
+ int shortestStart = -1;
+ int shortestLength = 0;
+ for (i = 0; i < words.length; ++i) {
+ if (words[i] == 0) {
+ if (currentStart < 0) {
+ currentStart = i;
+ }
+ } else if (currentStart >= 0) {
+ currentLength = i - currentStart;
+ if (currentLength > shortestLength) {
+ shortestStart = currentStart;
+ shortestLength = currentLength;
+ }
+ currentStart = -1;
+ }
+ }
+ // If the array ends on a streak of zeros, make sure we account for it
+ if (currentStart >= 0) {
+ currentLength = i - currentStart;
+ if (currentLength > shortestLength) {
+ shortestStart = currentStart;
+ shortestLength = currentLength;
+ }
+ }
+ // Ignore the longest streak if it is only 1 long
+ if (shortestLength == 1) {
+ shortestLength = 0;
+ shortestStart = -1;
+ }
+
+ // Translate to string taking into account longest consecutive 0s
+ final int shortestEnd = shortestStart + shortestLength;
+ final StringBuilder b = new StringBuilder(IPV6_MAX_CHAR_COUNT);
+ if (shortestEnd < 0) { // Optimization when there is no compressing needed
+ b.append(Integer.toHexString(words[0]));
+ for (i = 1; i < words.length; ++i) {
+ b.append(':');
+ b.append(Integer.toHexString(words[i]));
+ }
+ } else { // General case that can handle compressing (and not compressing)
+ // Loop unroll the first index (so we don't constantly check i==0 cases in loop)
+ final boolean isIpv4Mapped;
+ if (inRangeEndExclusive(0, shortestStart, shortestEnd)) {
+ b.append("::");
+ isIpv4Mapped = ipv4Mapped && (shortestEnd == 5 && words[5] == 0xffff);
+ } else {
+ b.append(Integer.toHexString(words[0]));
+ isIpv4Mapped = false;
+ }
+ for (i = 1; i < words.length; ++i) {
+ if (!inRangeEndExclusive(i, shortestStart, shortestEnd)) {
+ if (!inRangeEndExclusive(i - 1, shortestStart, shortestEnd)) {
+ // If the last index was not part of the shortened sequence
+ if (!isIpv4Mapped || i == 6) {
+ b.append(':');
+ } else {
+ b.append('.');
+ }
+ }
+ if (isIpv4Mapped && i > 5) {
+ b.append(words[i] >> 8);
+ b.append('.');
+ b.append(words[i] & 0xff);
+ } else {
+ b.append(Integer.toHexString(words[i]));
+ }
+ } else if (!inRangeEndExclusive(i - 1, shortestStart, shortestEnd)) {
+ // If we are in the shortened sequence and the last index was not
+ b.append("::");
+ }
+ }
+ }
+
+ return b.toString();
+ }
+
+ /**
+ * Does a range check on {@code value} if is within {@code start} (inclusive) and {@code end} (exclusive).
+ * @param value The value to checked if is within {@code start} (inclusive) and {@code end} (exclusive)
+ * @param start The start of the range (inclusive)
+ * @param end The end of the range (exclusive)
+ * @return
+ * <ul>
+ * <li>{@code true} if {@code value} if is within {@code start} (inclusive) and {@code end} (exclusive)</li>
+ * <li>{@code false} otherwise</li>
+ * </ul>
+ */
+ private static boolean inRangeEndExclusive(int value, int start, int end) {
+ return value >= start && value < end;
+ }
+
/**
* A constructor to stop this class being constructed.
*/
| diff --git a/common/src/test/java/io/netty/util/NetUtilTest.java b/common/src/test/java/io/netty/util/NetUtilTest.java
index fa36514a8cc..1d9867bf0df 100644
--- a/common/src/test/java/io/netty/util/NetUtilTest.java
+++ b/common/src/test/java/io/netty/util/NetUtilTest.java
@@ -15,13 +15,20 @@
*/
package io.netty.util;
-import org.junit.Test;
+import static org.junit.Assert.assertArrayEquals;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.assertNull;
+import java.net.InetAddress;
+import java.net.UnknownHostException;
import java.util.HashMap;
import java.util.Map;
import java.util.Map.Entry;
-import static org.junit.Assert.*;
+import org.junit.Test;
public class NetUtilTest {
private static final Map<String, byte[]> validIpV4Hosts = new HashMap<String, byte[]>() {
@@ -50,6 +57,18 @@ public class NetUtilTest {
put("1.256.3.4", null);
put("256.0.0.1", null);
put("1.1.1.1.1", null);
+ put("x.255.255.255", null);
+ put("0.1:0.0", null);
+ put("0.1.0.0:", null);
+ put("127.0.0.", null);
+ put("1.2..4", null);
+ put("192.0.1", null);
+ put("192.0.1.1.1", null);
+ put("192.0.1.a", null);
+ put("19a.0.1.1", null);
+ put("a.0.1.1", null);
+ put(".0.1.1", null);
+ put("...", null);
}
};
private static final Map<String, byte[]> validIpV6Hosts = new HashMap<String, byte[]>() {
@@ -177,6 +196,16 @@ public class NetUtilTest {
put("0:1:2:3:4:5:6:x", null);
// Test method with preferred style, adjacent :
put("0:1:2:3:4:5:6::7", null);
+ // Too many : separators trailing
+ put("0:1:2:3:4:5:6:7::", null);
+ // Too many : separators leading
+ put("::0:1:2:3:4:5:6:7", null);
+ // Too many : separators trailing
+ put("1:2:3:4:5:6:7:", null);
+ // Too many : separators leading
+ put(":1:2:3:4:5:6:7", null);
+ // Too many : separators leading 0
+ put("0::1:2:3:4:5:6:7", null);
// Test method with preferred style, too many digits.
put("0:1:2:3:4:5:6:789abcdef", null);
// Test method with compressed style, bad digits.
@@ -207,6 +236,14 @@ public class NetUtilTest {
put("0:0:0:0:0:0:10.0.1", null);
// Test method with ipv4 style, adjacent .
put("0:0:0:0:0:0:10..0.0.1", null);
+ // Test method with ipv4 style, leading .
+ put("0:0:0:0:0:0:.0.0.1", null);
+ // Test method with ipv4 style, leading .
+ put("0:0:0:0:0:0:.10.0.0.1", null);
+ // Test method with ipv4 style, trailing .
+ put("0:0:0:0:0:0:10.0.0.", null);
+ // Test method with ipv4 style, trailing .
+ put("0:0:0:0:0:0:10.0.0.1.", null);
// Test method with compressed ipv4 style, bad ipv6 digits.
put("::fffx:192.168.0.1", null);
// Test method with compressed ipv4 style, bad ipv4 digits.
@@ -239,12 +276,218 @@ public class NetUtilTest {
put("0:0:0:0:0:0:0:10.0.0.1", null);
// Test method, not enough :
put("0:0:0:0:0:10.0.0.1", null);
+ // Test method, out of order trailing :
+ put("0:0:0:0:0:10.0.0.1:", null);
+ // Test method, out of order leading :
+ put(":0:0:0:0:0:10.0.0.1", null);
+ // Test method, out of order leading :
+ put("0:0:0:0::10.0.0.1:", null);
+ // Test method, out of order trailing :
+ put(":0:0:0:0::10.0.0.1", null);
// Test method, too many .
put("0:0:0:0:0:0:10.0.0.0.1", null);
// Test method, not enough .
put("0:0:0:0:0:0:10.0.1", null);
// Test method, adjacent .
put("0:0:0:0:0:0:10.0.0..1", null);
+ // Double compression symbol
+ put("::0::", null);
+ // Empty contents
+ put("", null);
+ // Trailing : (max number of : = 8)
+ put("2001:0:4136:e378:8000:63bf:3fff:fdd2:", null);
+ // Leading : (max number of : = 8)
+ put(":aaaa:bbbb:cccc:dddd:eeee:ffff:1111:2222", null);
+ // Invalid character
+ put("1234:2345:3456:4567:5678:6789::X890", null);
+ // Trailing . in IPv4
+ put("::ffff:255.255.255.255.", null);
+            // Too many characters in IPv4
+ put("::ffff:0.0.1111.0", null);
+ // Test method, adjacent .
+ put("::ffff:0.0..0", null);
+ // Not enough IPv4 entries trailing .
+ put("::ffff:127.0.0.", null);
+ // Not enough IPv4 entries no trailing .
+ put("::ffff:1.2.4", null);
+ // Extra IPv4 entry
+ put("::ffff:192.168.0.1.255", null);
+ // Not enough IPv6 content
+ put(":ffff:192.168.0.1.255", null);
+ // Intermixed IPv4 and IPv6 symbols
+ put("::ffff:255.255:255.255.", null);
+ }
+ };
+ private static final Map<byte[], String> ipv6ToAddressStrings = new HashMap<byte[], String>() {
+ private static final long serialVersionUID = 2999763170377573184L;
+ {
+ // From the RFC 5952 http://tools.ietf.org/html/rfc5952#section-4
+ put(new byte[]{
+ 32, 1, 13, -72,
+ 0, 0, 0, 0,
+ 0, 0, 0, 0,
+ 0, 0, 0, 1},
+ "2001:db8::1");
+ put(new byte[]{
+ 32, 1, 13, -72,
+ 0, 0, 0, 0,
+ 0, 0, 0, 0,
+ 0, 2, 0, 1},
+ "2001:db8::2:1");
+ put(new byte[]{
+ 32, 1, 13, -72,
+ 0, 0, 0, 1,
+ 0, 1, 0, 1,
+ 0, 1, 0, 1},
+ "2001:db8:0:1:1:1:1:1");
+
+ // Other examples
+ put(new byte[]{
+ 32, 1, 13, -72,
+ 0, 0, 0, 0,
+ 0, 0, 0, 0,
+ 0, 2, 0, 1},
+ "2001:db8::2:1");
+ put(new byte[]{
+ 32, 1, 0, 0,
+ 0, 0, 0, 1,
+ 0, 0, 0, 0,
+ 0, 0, 0, 1},
+ "2001:0:0:1::1");
+ put(new byte[]{
+ 32, 1, 13, -72,
+ 0, 0, 0, 0,
+ 0, 1, 0, 0,
+ 0, 0, 0, 1},
+ "2001:db8::1:0:0:1");
+ put(new byte[]{
+ 32, 1, 13, -72,
+ 0, 0, 0, 0,
+ 0, 1, 0, 0,
+ 0, 0, 0, 0},
+ "2001:db8:0:0:1::");
+ put(new byte[]{
+ 32, 1, 13, -72,
+ 0, 0, 0, 0,
+ 0, 0, 0, 0,
+ 0, 2, 0, 0},
+ "2001:db8::2:0");
+ put(new byte[]{
+ 0, 0, 0, 0,
+ 0, 0, 0, 0,
+ 0, 0, 0, 0,
+ 0, 0, 0, 1},
+ "::1");
+ put(new byte[]{
+ 0, 0, 0, 0,
+ 0, 0, 0, 1,
+ 0, 0, 0, 0,
+ 0, 0, 0, 1},
+ "::1:0:0:0:1");
+ put(new byte[]{
+ 0, 0, 0, 0,
+ 1, 0, 0, 1,
+ 0, 0, 0, 0,
+ 1, 0, 0, 0},
+ "::100:1:0:0:100:0");
+ put(new byte[]{
+ 32, 1, 0, 0,
+ 65, 54, -29, 120,
+ -128, 0, 99, -65,
+ 63, -1, -3, -46},
+ "2001:0:4136:e378:8000:63bf:3fff:fdd2");
+ put(new byte[]{
+ -86, -86, -69, -69,
+ -52, -52, -35, -35,
+ -18, -18, -1, -1,
+ 17, 17, 34, 34},
+ "aaaa:bbbb:cccc:dddd:eeee:ffff:1111:2222");
+ put(new byte[]{
+ 0, 0, 0, 0,
+ 0, 0, 0, 0,
+ 0, 0, 0, 0,
+ 0, 0, 0, 0},
+ "::");
+ }
+ };
+
+ private static final Map<String, String> ipv4MappedToIPv6AddressStrings = new HashMap<String, String>() {
+ private static final long serialVersionUID = 1999763170377573184L;
+ {
+ // IPv4 addresses
+ put("255.255.255.255", "::ffff:255.255.255.255");
+ put("0.0.0.0", "::ffff:0.0.0.0");
+ put("127.0.0.1", "::ffff:127.0.0.1");
+ put("1.2.3.4", "::ffff:1.2.3.4");
+ put("192.168.0.1", "::ffff:192.168.0.1");
+
+ // IPv6 addresses
+ // Fully specified
+ put("2001:0:4136:e378:8000:63bf:3fff:fdd2", "2001:0:4136:e378:8000:63bf:3fff:fdd2");
+ put("aaaa:bbbb:cccc:dddd:eeee:ffff:1111:2222", "aaaa:bbbb:cccc:dddd:eeee:ffff:1111:2222");
+ put("0:0:0:0:0:0:0:0", "::");
+ put("0:0:0:0:0:0:0:1", "::1");
+
+ // Compressing at the beginning
+ put("::1:0:0:0:1", "::1:0:0:0:1");
+ put("::1:ffff:ffff", "::1:ffff:ffff");
+ put("::", "::");
+ put("::1", "::1");
+ put("::ffff", "::ffff");
+ put("::ffff:0", "::ffff:0");
+ put("::ffff:ffff", "::ffff:ffff");
+ put("::0987:9876:8765", "::987:9876:8765");
+ put("::0987:9876:8765:7654", "::987:9876:8765:7654");
+ put("::0987:9876:8765:7654:6543", "::987:9876:8765:7654:6543");
+ put("::0987:9876:8765:7654:6543:5432", "::987:9876:8765:7654:6543:5432");
+ // Note the compression is removed (rfc 5952 section 4.2.2)
+ put("::0987:9876:8765:7654:6543:5432:3210", "0:987:9876:8765:7654:6543:5432:3210");
+
+ // Compressing at the end
+ // Note the compression is removed (rfc 5952 section 4.2.2)
+ put("2001:db8:abcd:bcde:cdef:def1:ef12::", "2001:db8:abcd:bcde:cdef:def1:ef12:0");
+ put("2001:db8:abcd:bcde:cdef:def1::", "2001:db8:abcd:bcde:cdef:def1::");
+ put("2001:db8:abcd:bcde:cdef::", "2001:db8:abcd:bcde:cdef::");
+ put("2001:db8:abcd:bcde::", "2001:db8:abcd:bcde::");
+ put("2001:db8:abcd::", "2001:db8:abcd::");
+ put("2001:1234::", "2001:1234::");
+ put("2001::", "2001::");
+ put("0::", "::");
+
+ // Compressing in the middle
+ put("1234:2345::7890", "1234:2345::7890");
+ put("1234::2345:7890", "1234::2345:7890");
+ put("1234:2345:3456::7890", "1234:2345:3456::7890");
+ put("1234:2345::3456:7890", "1234:2345::3456:7890");
+ put("1234::2345:3456:7890", "1234::2345:3456:7890");
+ put("1234:2345:3456:4567::7890", "1234:2345:3456:4567::7890");
+ put("1234:2345:3456::4567:7890", "1234:2345:3456::4567:7890");
+ put("1234:2345::3456:4567:7890", "1234:2345::3456:4567:7890");
+ put("1234::2345:3456:4567:7890", "1234::2345:3456:4567:7890");
+ put("1234:2345:3456:4567:5678::7890", "1234:2345:3456:4567:5678::7890");
+ put("1234:2345:3456:4567::5678:7890", "1234:2345:3456:4567::5678:7890");
+ put("1234:2345:3456::4567:5678:7890", "1234:2345:3456::4567:5678:7890");
+ put("1234:2345::3456:4567:5678:7890", "1234:2345::3456:4567:5678:7890");
+ put("1234::2345:3456:4567:5678:7890", "1234::2345:3456:4567:5678:7890");
+ // Note the compression is removed (rfc 5952 section 4.2.2)
+ put("1234:2345:3456:4567:5678:6789::7890", "1234:2345:3456:4567:5678:6789:0:7890");
+ // Note the compression is removed (rfc 5952 section 4.2.2)
+ put("1234:2345:3456:4567:5678::6789:7890", "1234:2345:3456:4567:5678:0:6789:7890");
+ // Note the compression is removed (rfc 5952 section 4.2.2)
+ put("1234:2345:3456:4567::5678:6789:7890", "1234:2345:3456:4567:0:5678:6789:7890");
+ // Note the compression is removed (rfc 5952 section 4.2.2)
+ put("1234:2345:3456::4567:5678:6789:7890", "1234:2345:3456:0:4567:5678:6789:7890");
+ // Note the compression is removed (rfc 5952 section 4.2.2)
+ put("1234:2345::3456:4567:5678:6789:7890", "1234:2345:0:3456:4567:5678:6789:7890");
+ // Note the compression is removed (rfc 5952 section 4.2.2)
+ put("1234::2345:3456:4567:5678:6789:7890", "1234:0:2345:3456:4567:5678:6789:7890");
+
+ // IPv4 mapped addresses
+ put("::ffff:255.255.255.255", "::ffff:255.255.255.255");
+ put("::ffff:0.0.0.0", "::ffff:0.0.0.0");
+ put("::ffff:127.0.0.1", "::ffff:127.0.0.1");
+ put("::ffff:1.2.3.4", "::ffff:1.2.3.4");
+ put("::ffff:192.168.0.1", "::ffff:192.168.0.1");
}
};
@@ -293,4 +536,39 @@ public void testCreateByteArrayFromIpAddressString() {
assertArrayEquals(stringEntry.getValue(), NetUtil.createByteArrayFromIpAddressString(stringEntry.getKey()));
}
}
+
+ @Test
+ public void testIp6AddressToString() throws UnknownHostException {
+ for (Entry<byte[], String> testEntry : ipv6ToAddressStrings.entrySet()) {
+ assertEquals(testEntry.getValue(),
+ NetUtil.toAddressString(InetAddress.getByAddress(testEntry.getKey())));
+ }
+ }
+
+ @Test
+ public void testIp4AddressToString() throws UnknownHostException {
+ for (Entry<String, byte[]> stringEntry : validIpV4Hosts.entrySet()) {
+ assertEquals(stringEntry.getKey(),
+ NetUtil.toAddressString(InetAddress.getByAddress(stringEntry.getValue())));
+ }
+ }
+
+ @Test
+ public void testIpv4MappedIp6GetByName() {
+ for (Entry<String, String> testEntry : ipv4MappedToIPv6AddressStrings.entrySet()) {
+ assertEquals(testEntry.getValue(),
+ NetUtil.toAddressString(NetUtil.getByName(testEntry.getKey(), true), true));
+ }
+ }
+
+ @Test
+ public void testinvalidIpv4MappedIp6GetByName() {
+ for (String testEntry : invalidIpV4Hosts.keySet()) {
+ assertNull(NetUtil.getByName(testEntry, true));
+ }
+
+ for (String testEntry : invalidIpV6Hosts.keySet()) {
+ assertNull(NetUtil.getByName(testEntry, true));
+ }
+ }
}
diff --git a/microbench/src/test/java/io/netty/microbenchmark/common/NetUtilBenchmark.java b/microbench/src/test/java/io/netty/microbenchmark/common/NetUtilBenchmark.java
new file mode 100644
index 00000000000..0512fe33d88
--- /dev/null
+++ b/microbench/src/test/java/io/netty/microbenchmark/common/NetUtilBenchmark.java
@@ -0,0 +1,226 @@
+/*
+ * Copyright 2014 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License,
+ * version 2.0 (the "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+package io.netty.microbenchmark.common;
+
+import io.netty.microbench.util.AbstractMicrobenchmark;
+import io.netty.util.NetUtil;
+
+import java.util.HashMap;
+import java.util.Map;
+
+import org.openjdk.jmh.annotations.Benchmark;
+import org.openjdk.jmh.annotations.Measurement;
+import org.openjdk.jmh.annotations.Threads;
+import org.openjdk.jmh.annotations.Warmup;
+
+@Threads(4)
+@Warmup(iterations = 10)
+@Measurement(iterations = 10)
+public class NetUtilBenchmark extends AbstractMicrobenchmark {
+
+ @Benchmark
+ public void useGetByNameIpv4() {
+ for (String testEntry : invalidIpV4Hosts.keySet()) {
+ if (NetUtil.getByName(testEntry, true) != null) {
+ throw new RuntimeException("error");
+ }
+ }
+ }
+
+ @Benchmark
+ public void useGetByNameIpv6() {
+ for (String testEntry : invalidIpV6Hosts.keySet()) {
+ if (NetUtil.getByName(testEntry, true) != null) {
+ throw new RuntimeException("error");
+ }
+ }
+ }
+
+ @Benchmark
+ public void useIsValidIpv6() {
+ for (String host : invalidIpV6Hosts.keySet()) {
+ if (NetUtil.isValidIpV6Address(host)) {
+ throw new RuntimeException("error");
+ }
+ }
+ }
+
+ @Benchmark
+ public void useIsValidIpv4() {
+ for (String host : invalidIpV4Hosts.keySet()) {
+ if (NetUtil.isValidIpV4Address(host)) {
+ throw new RuntimeException("error");
+ }
+ }
+ }
+
+ private static final Map<String, byte[]> invalidIpV4Hosts = new HashMap<String, byte[]>() {
+ private static final long serialVersionUID = 1299215199895717282L;
+ {
+ put("1.256.3.4", null);
+ put("256.0.0.1", null);
+ put("1.1.1.1.1", null);
+ put("x.255.255.255", null);
+ put("0.1:0.0", null);
+ put("0.1.0.0:", null);
+ put("127.0.0.", null);
+ put("1.2..4", null);
+ put("192.0.1", null);
+ put("192.0.1.1.1", null);
+ put("192.0.1.a", null);
+ put("19a.0.1.1", null);
+ put("a.0.1.1", null);
+ put(".0.1.1", null);
+ put("...", null);
+ }
+ };
+
+ private static final Map<String, byte[]> invalidIpV6Hosts = new HashMap<String, byte[]>() {
+ private static final long serialVersionUID = -5870810805409009696L;
+ {
+ // Test method with garbage.
+ put("Obvious Garbage", null);
+ // Test method with preferred style, too many :
+ put("0:1:2:3:4:5:6:7:8", null);
+ // Test method with preferred style, not enough :
+ put("0:1:2:3:4:5:6", null);
+ // Test method with preferred style, bad digits.
+ put("0:1:2:3:4:5:6:x", null);
+ // Test method with preferred style, adjacent :
+ put("0:1:2:3:4:5:6::7", null);
+ // Too many : separators trailing
+ put("0:1:2:3:4:5:6:7::", null);
+ // Too many : separators leading
+ put("::0:1:2:3:4:5:6:7", null);
+ // Too many : separators trailing
+ put("1:2:3:4:5:6:7:", null);
+ // Too many : separators leading
+ put(":1:2:3:4:5:6:7", null);
+ // Too many : separators leading 0
+ put("0::1:2:3:4:5:6:7", null);
+ // Test method with preferred style, too many digits.
+ put("0:1:2:3:4:5:6:789abcdef", null);
+ // Test method with compressed style, bad digits.
+ put("0:1:2:3::x", null);
+ // Test method with compressed style, too many adjacent :
+ put("0:1:2:::3", null);
+ // Test method with compressed style, too many digits.
+ put("0:1:2:3::abcde", null);
+ // Test method with preferred style, too many :
+ put("0:1:2:3:4:5:6:7:8", null);
+ // Test method with compressed style, not enough :
+ put("0:1", null);
+ // Test method with ipv4 style, bad ipv6 digits.
+ put("0:0:0:0:0:x:10.0.0.1", null);
+ // Test method with ipv4 style, bad ipv4 digits.
+ put("0:0:0:0:0:0:10.0.0.x", null);
+ // Test method with ipv4 style, adjacent :
+ put("0:0:0:0:0::0:10.0.0.1", null);
+ // Test method with ipv4 style, too many ipv6 digits.
+ put("0:0:0:0:0:00000:10.0.0.1", null);
+ // Test method with ipv4 style, too many :
+ put("0:0:0:0:0:0:0:10.0.0.1", null);
+ // Test method with ipv4 style, not enough :
+ put("0:0:0:0:0:10.0.0.1", null);
+ // Test method with ipv4 style, too many .
+ put("0:0:0:0:0:0:10.0.0.0.1", null);
+ // Test method with ipv4 style, not enough .
+ put("0:0:0:0:0:0:10.0.1", null);
+ // Test method with ipv4 style, adjacent .
+ put("0:0:0:0:0:0:10..0.0.1", null);
+ // Test method with ipv4 style, leading .
+ put("0:0:0:0:0:0:.0.0.1", null);
+ // Test method with ipv4 style, leading .
+ put("0:0:0:0:0:0:.10.0.0.1", null);
+ // Test method with ipv4 style, trailing .
+ put("0:0:0:0:0:0:10.0.0.", null);
+ // Test method with ipv4 style, trailing .
+ put("0:0:0:0:0:0:10.0.0.1.", null);
+ // Test method with compressed ipv4 style, bad ipv6 digits.
+ put("::fffx:192.168.0.1", null);
+ // Test method with compressed ipv4 style, bad ipv4 digits.
+ put("::ffff:192.168.0.x", null);
+ // Test method with compressed ipv4 style, too many adjacent :
+ put(":::ffff:192.168.0.1", null);
+ // Test method with compressed ipv4 style, too many ipv6 digits.
+ put("::fffff:192.168.0.1", null);
+ // Test method with compressed ipv4 style, too many ipv4 digits.
+ put("::ffff:1923.168.0.1", null);
+ // Test method with compressed ipv4 style, not enough :
+ put(":ffff:192.168.0.1", null);
+ // Test method with compressed ipv4 style, too many .
+ put("::ffff:192.168.0.1.2", null);
+ // Test method with compressed ipv4 style, not enough .
+ put("::ffff:192.168.0", null);
+ // Test method with compressed ipv4 style, adjacent .
+ put("::ffff:192.168..0.1", null);
+ // Test method, garbage.
+ put("absolute, and utter garbage", null);
+ // Test method, bad ipv6 digits.
+ put("x:0:0:0:0:0:10.0.0.1", null);
+ // Test method, bad ipv4 digits.
+ put("0:0:0:0:0:0:x.0.0.1", null);
+ // Test method, too many ipv6 digits.
+ put("00000:0:0:0:0:0:10.0.0.1", null);
+ // Test method, too many ipv4 digits.
+ put("0:0:0:0:0:0:10.0.0.1000", null);
+ // Test method, too many :
+ put("0:0:0:0:0:0:0:10.0.0.1", null);
+ // Test method, not enough :
+ put("0:0:0:0:0:10.0.0.1", null);
+ // Test method, out of order trailing :
+ put("0:0:0:0:0:10.0.0.1:", null);
+ // Test method, out of order leading :
+ put(":0:0:0:0:0:10.0.0.1", null);
+ // Test method, out of order leading :
+ put("0:0:0:0::10.0.0.1:", null);
+ // Test method, out of order trailing :
+ put(":0:0:0:0::10.0.0.1", null);
+ // Test method, too many .
+ put("0:0:0:0:0:0:10.0.0.0.1", null);
+ // Test method, not enough .
+ put("0:0:0:0:0:0:10.0.1", null);
+ // Test method, adjacent .
+ put("0:0:0:0:0:0:10.0.0..1", null);
+ // Double compression symbol
+ put("::0::", null);
+ // Empty contents
+ put("", null);
+ // Trailing : (max number of : = 8)
+ put("2001:0:4136:e378:8000:63bf:3fff:fdd2:", null);
+ // Leading : (max number of : = 8)
+ put(":aaaa:bbbb:cccc:dddd:eeee:ffff:1111:2222", null);
+ // Invalid character
+ put("1234:2345:3456:4567:5678:6789::X890", null);
+ // Trailing . in IPv4
+ put("::ffff:255.255.255.255.", null);
+            // Too many characters in IPv4
+ put("::ffff:0.0.1111.0", null);
+ // Test method, adjacent .
+ put("::ffff:0.0..0", null);
+ // Not enough IPv4 entries trailing .
+ put("::ffff:127.0.0.", null);
+ // Not enough IPv4 entries no trailing .
+ put("::ffff:1.2.4", null);
+ // Extra IPv4 entry
+ put("::ffff:192.168.0.1.255", null);
+ // Not enough IPv6 content
+ put(":ffff:192.168.0.1.255", null);
+ // Intermixed IPv4 and IPv6 symbols
+ put("::ffff:255.255:255.255.", null);
+ }
+ };
+}
| val | train | 2014-09-22T15:12:20 | 2014-09-04T08:00:13Z | Scottmitch | val |
netty/netty/2889_2890 | netty/netty | netty/netty/2889 | netty/netty/2890 | [
"timestamp(timedelta=249.0, similarity=0.902667567681132)"
] | c6b2c5a3201cfd2dbd9e4244a3577950b4e9e69c | 00c0938f16921e92e5aaf175b2f38227aa9f49ee | [
"Here is my sendmmsg from bits/socket.h\n\n> /\\* Send a VLEN messages as described by VMESSAGES to socket FD.\n> Return the number of datagrams successfully written or -1 for errors.\n> This function is a cancellation point and therefore not marked with\n> __THROW. */\n> extern int sendmmsg (int __fd, struc... | [] | 2014-09-12T19:31:17Z | [
"defect"
] | transport/native/epoll build failure | Are there version requirements listed somewhere for netty's epoll? I'm getting a build failure in some of the native code involving type redefinitions.
**OS info**
Ubuntu 12.04
> $uname -a
> Linux <name> 3.2.0-67-generic #101-Ubuntu SMP Tue Jul 15 17:46:11 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
>
> $ mvn --version
> Apache Maven 3.1.1 (0728685237757ffbf44136acec0402957f723d9a; 2013-09-17 11:22:22-0400)
> Maven home: /home/smitchel/maven/apache-maven-3.1.1
> Java version: 1.7.0_67, vendor: Oracle Corporation
> Java home: /usr/lib/jvm/java-7-oracle/jre
> Default locale: en_US, platform encoding: ISO-8859-1
> OS name: "linux", version: "3.2.0-67-generic", arch: "amd64", family: "unix"
**Build failure**
> [INFO] executing: /bin/sh -c make install
> [INFO] /bin/sh ./libtool --tag=CC --mode=compile gcc -DHAVE_CONFIG_H -I. -I./src -g -O2 -I/usr/lib/jvm/java-7-oracle/include -I/usr/lib/jvm/java-7-oracle/include/linux -c -o src/io_netty_channel_epoll_Native.lo src/io_netty_channel_epoll_Native.c
> [INFO] libtool: compile: gcc -DHAVE_CONFIG_H -I. -I./src -g -O2 -I/usr/lib/jvm/java-7-oracle/include -I/usr/lib/jvm/java-7-oracle/include/linux -c src/io_netty_channel_epoll_Native.c -fPIC -DPIC -o src/.libs/io_netty_channel_epoll_Native.o
> [INFO] src/io_netty_channel_epoll_Native.c:37:40: warning: 'struct mmsghdr' declared inside parameter list [enabled by default]
> [INFO] src/io_netty_channel_epoll_Native.c:37:12: error: conflicting types for 'sendmmsg'
> [INFO] /usr/include/x86_64-linux-gnu/bits/socket.h:439:12: note: previous declaration of 'sendmmsg' was here
> [INFO] src/io_netty_channel_epoll_Native.c: In function 'getOption':
> [INFO] src/io_netty_channel_epoll_Native.c:128:5: warning: passing argument 4 of 'getsockopt' discards 'const' qualifier from pointer target type [enabled by default]
> [INFO] /usr/include/x86_64-linux-gnu/sys/socket.h:190:12: note: expected 'void * __restrict__' but argument is of type 'const void *'
> [INFO] src/io_netty_channel_epoll_Native.c: In function 'Java_io_netty_channel_epoll_Native_sendmmsg':
> [INFO] src/io_netty_channel_epoll_Native.c:770:8: warning: passing argument 2 of 'sendmmsg' from incompatible pointer type [enabled by default]
> [INFO] src/io_netty_channel_epoll_Native.c:37:12: note: expected 'struct mmsghdr *' but argument is of type 'struct mmsghdr *'
> [INFO] src/io_netty_channel_epoll_Native.c: In function 'Java_io_netty_channel_epoll_Native_bind':
> [INFO] src/io_netty_channel_epoll_Native.c:990:9: warning: 'return' with a value, in function returning void [enabled by default]
> [INFO] make: *** [src/io_netty_channel_epoll_Native.lo] Error 1
| [
"transport-native-epoll/pom.xml",
"transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c"
] | [
"transport-native-epoll/pom.xml",
"transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c"
] | [] | diff --git a/transport-native-epoll/pom.xml b/transport-native-epoll/pom.xml
index de1d4727578..02902caf6c4 100644
--- a/transport-native-epoll/pom.xml
+++ b/transport-native-epoll/pom.xml
@@ -78,6 +78,9 @@
<platform>.</platform>
<forceConfigure>true</forceConfigure>
<forceAutogen>true</forceAutogen>
+ <configureArgs>
+ <arg>${jni.compiler.args}</arg>
+ </configureArgs>
</configuration>
<goals>
<goal>generate</goal>
@@ -112,6 +115,104 @@
</execution>
</executions>
</plugin>
+
+ <plugin>
+ <artifactId>maven-antrun-plugin</artifactId>
+ <version>1.7</version>
+ <executions>
+ <execution>
+ <!-- Phase must be before regex-glibc-sendmmsg and regex-linux-sendmmsg -->
+ <phase>validate</phase>
+ <goals>
+ <goal>run</goal>
+ </goals>
+ <id>ant-get-systeminfo</id>
+ <configuration>
+ <exportAntProperties>true</exportAntProperties>
+ <tasks>
+ <exec executable="sh" outputproperty="ldd_version">
+ <arg value="-c"/>
+ <arg value="ldd --version | head -1"/>
+ </exec>
+ <exec executable="uname" outputproperty="uname_os_version">
+ <arg value="-r"/>
+ </exec>
+ </tasks>
+ </configuration>
+ </execution>
+ </executions>
+ </plugin>
+ <plugin>
+ <groupId>org.codehaus.mojo</groupId>
+ <artifactId>build-helper-maven-plugin</artifactId>
+ <version>1.7</version>
+ <executions>
+ <execution>
+ <!-- Phase must be before regex-combined-sendmmsg -->
+ <phase>initialize</phase>
+ <id>regex-glibc-sendmmsg</id>
+ <goals>
+ <goal>regex-property</goal>
+ </goals>
+ <configuration>
+ <name>glibc.sendmmsg.support</name>
+ <value>${ldd_version}</value>
+ <!-- Version must be >= 2.14 - set to IO_NETTY_SENDMSSG_NOT_FOUND if this version is not satisfied -->
+ <regex>^((?!^[^)]+\)\s+(0*2\.1[4-9]|0*2\.[2-9][0-9]+|0*[3-9][0-9]*|0*[1-9]+[0-9]+).*).)*$</regex>
+ <replacement>IO_NETTY_SENDMSSG_NOT_FOUND</replacement>
+ <failIfNoMatch>false</failIfNoMatch>
+ </configuration>
+ </execution>
+ <execution>
+ <!-- Phase must be before regex-combined-sendmmsg -->
+ <phase>initialize</phase>
+ <id>regex-linux-sendmmsg</id>
+ <goals>
+ <goal>regex-property</goal>
+ </goals>
+ <configuration>
+ <name>linux.sendmmsg.support</name>
+ <value>${uname_os_version}</value>
+ <!-- Version must be >= 3 - set to IO_NETTY_SENDMSSG_NOT_FOUND if this version is not satisfied -->
+ <regex>^((?!^[0-9]*[3-9]\.?.*).)*$</regex>
+ <replacement>IO_NETTY_SENDMSSG_NOT_FOUND</replacement>
+ <failIfNoMatch>false</failIfNoMatch>
+ </configuration>
+ </execution>
+ <execution>
+ <!-- Phase must be before regex-unset-if-needed-sendmmsg -->
+ <phase>generate-sources</phase>
+ <id>regex-combined-sendmmsg</id>
+ <goals>
+ <goal>regex-property</goal>
+ </goals>
+ <configuration>
+ <name>jni.compiler.args</name>
+ <value>${linux.sendmmsg.support}${glibc.sendmmsg.support}</value>
+ <!-- If glibc and linux kernel are both not sufficient...then define the CFLAGS -->
+ <regex>.*IO_NETTY_SENDMSSG_NOT_FOUND.*</regex>
+ <replacement>CFLAGS="-DIO_NETTY_SENDMMSG_NOT_FOUND"</replacement>
+ <failIfNoMatch>false</failIfNoMatch>
+ </configuration>
+ </execution>
+ <execution>
+ <!-- Phase must be before build-native-lib -->
+ <phase>generate-sources</phase>
+ <id>regex-unset-if-needed-sendmmsg</id>
+ <goals>
+ <goal>regex-property</goal>
+ </goals>
+ <configuration>
+ <name>jni.compiler.args</name>
+ <value>${jni.compiler.args}</value>
+ <!-- If glibc and linux kernel are both not sufficient...then define the CFLAGS -->
+ <regex>^((?!CFLAGS=).)*$</regex>
+ <replacement>CFLAGS=""</replacement>
+ <failIfNoMatch>false</failIfNoMatch>
+ </configuration>
+ </execution>
+ </executions>
+ </plugin>
</plugins>
</build>
</project>
diff --git a/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c b/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c
index 5dc7eb5825d..03501c5a93f 100644
--- a/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c
+++ b/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c
@@ -13,6 +13,7 @@
* License for the specific language governing permissions and limitations
* under the License.
*/
+#define _GNU_SOURCE
#include <jni.h>
#include <stdlib.h>
#include <string.h>
@@ -30,19 +31,20 @@
#include <sys/utsname.h>
#include "io_netty_channel_epoll_Native.h"
-
// optional
extern int accept4(int sockFd, struct sockaddr *addr, socklen_t *addrlen, int flags) __attribute__((weak));
extern int epoll_create1(int flags) __attribute__((weak));
+
+#ifdef IO_NETTY_SENDMMSG_NOT_FOUND
extern int sendmmsg(int sockfd, struct mmsghdr *msgvec, unsigned int vlen, unsigned int flags) __attribute__((weak));
-// Just define it here and NOT use #define _GNU_SOURCE as we also want to be able to build on systems that not support
-// sendmmsg yet. The problem is if we use _GNU_SOURCE we will not be able to declare sendmmsg as extern
+#ifndef __USE_GNU
struct mmsghdr {
struct msghdr msg_hdr; /* Message header */
unsigned int msg_len; /* Number of bytes transmitted */
};
-
+#endif
+#endif
// Those are initialized in the init(...) method and cached for performance reasons
jmethodID updatePosId = NULL;
@@ -123,7 +125,7 @@ jint epollCtl(JNIEnv * env, jint efd, int op, jint fd, jint flags, jint id) {
return epoll_ctl(efd, op, fd, &ev);
}
-jint getOption(JNIEnv *env, jint fd, int level, int optname, const void *optval, socklen_t optlen) {
+jint getOption(JNIEnv *env, jint fd, int level, int optname, void *optval, socklen_t optlen) {
int code;
code = getsockopt(fd, level, optname, optval, &optlen);
if (code == 0) {
@@ -987,7 +989,7 @@ JNIEXPORT jint JNICALL Java_io_netty_channel_epoll_Native_socketStream(JNIEnv *
JNIEXPORT void JNICALL Java_io_netty_channel_epoll_Native_bind(JNIEnv * env, jclass clazz, jint fd, jbyteArray address, jint scopeId, jint port) {
struct sockaddr_storage addr;
if (init_sockaddr(env, address, scopeId, port, &addr) == -1) {
- return -1;
+ return;
}
if(bind(fd, (struct sockaddr *) &addr, sizeof(addr)) == -1){
| null | train | train | 2014-09-15T15:14:15 | 2014-09-12T17:40:00Z | Scottmitch | val |
netty/netty/2906_2907 | netty/netty | netty/netty/2906 | netty/netty/2907 | [
"timestamp(timedelta=15.0, similarity=0.9111227419072077)"
] | 764e6c3bb719b3542c3ae20db61cccec8122e0cb | 66c3073f21b60dd728f59cecc668ce74d012f155 | [
"@nmittler Do you see any issues with moving the `Http2InboundFlowController` from `AbstractHttp2ConnectionHandler` to `DefaultHttp2FrameReader`?\n\nIf this is OK then for consistency I may also doing the same thing for the `Http2OutboundFlowController` (moving it to `DefaultHttp2FrameWriter`)\n",
"@nmittler - On... | [
"Maybe we should have a base class called Http2FrameListenerDecorator? WDYT?\n",
"maybe just return here ... so no need for else?\n",
"Should we do this in a finally block?\n",
"May be just my personal preference, so feel free to ignore .... maybe consider if (stream == null ) return ... that way the rest of ... | 2014-09-17T22:44:31Z | [
"defect"
] | HTTP/2 Decompressor flow control interaction error | The `DecompressorHttp2FrameReader` is intercepting and decompressing `ByteBuf` objects before the `onDataRead` method is called. However the uncompressed size of the content is being counted against the flow control window instead of the compressed size (which was actually sent over the wire). This will result in `PROTOCOL_ERROR` or `FLOW_CONTROL_ERROR` messages being incorrectly sent.
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DecompressorHttp2FrameReader.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameReader.java",
"example/src/main/java/io/netty/example/http2/client/Http2ClientInitializer.java"
] | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameReader.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DelegatingDecompressorFrameListener.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2FrameListenerDecorator.java",
"example/src/main/java/io/netty/e... | [
"codec-http2/src/test/java/io/netty/handler/codec/http2/DataCompressionHttp2Test.java"
] | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DecompressorHttp2FrameReader.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DecompressorHttp2FrameReader.java
deleted file mode 100644
index 5de289320f0..00000000000
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DecompressorHttp2FrameReader.java
+++ /dev/null
@@ -1,251 +0,0 @@
-/*
- * Copyright 2014 The Netty Project
- *
- * The Netty Project licenses this file to you under the Apache License, version 2.0 (the
- * "License"); you may not use this file except in compliance with the License. You may obtain a
- * copy of the License at:
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software distributed under the License
- * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
- * or implied. See the License for the specific language governing permissions and limitations under
- * the License.
- */
-package io.netty.handler.codec.http2;
-
-import static io.netty.handler.codec.http.HttpHeaders.Names.CONTENT_ENCODING;
-import static io.netty.handler.codec.http.HttpHeaders.Names.CONTENT_LENGTH;
-import static io.netty.handler.codec.http.HttpHeaders.Values.DEFLATE;
-import static io.netty.handler.codec.http.HttpHeaders.Values.GZIP;
-import static io.netty.handler.codec.http.HttpHeaders.Values.IDENTITY;
-import static io.netty.handler.codec.http.HttpHeaders.Values.XDEFLATE;
-import static io.netty.handler.codec.http.HttpHeaders.Values.XGZIP;
-import io.netty.buffer.ByteBuf;
-import io.netty.buffer.Unpooled;
-import io.netty.channel.ChannelHandlerContext;
-import io.netty.channel.embedded.EmbeddedChannel;
-import io.netty.handler.codec.AsciiString;
-import io.netty.handler.codec.ByteToMessageDecoder;
-import io.netty.handler.codec.compression.ZlibCodecFactory;
-import io.netty.handler.codec.compression.ZlibWrapper;
-import io.netty.handler.codec.http.HttpHeaders;
-
-/**
- * A HTTP2 frame reader that will decompress data frames according
- * to the {@code content-encoding} header for each stream.
- */
-public class DecompressorHttp2FrameReader extends DefaultHttp2FrameReader {
- private static final AsciiString CONTENT_ENCODING_LOWER_CASE = CONTENT_ENCODING.toLowerCase();
- private static final AsciiString CONTENT_LENGTH_LOWER_CASE = CONTENT_LENGTH.toLowerCase();
- private static final Http2ConnectionAdapter CLEAN_UP_LISTENER = new Http2ConnectionAdapter() {
- @Override
- public void streamRemoved(Http2Stream stream) {
- final EmbeddedChannel decoder = stream.decompressor();
- if (decoder != null) {
- cleanup(stream, decoder);
- }
- }
- };
-
- private final Http2Connection connection;
- private final boolean strict;
-
- /**
- * Create a new instance with non-strict deflate decoding.
- * {@link #DecompressorHttp2FrameReader(Http2Connection, boolean)}
- */
- public DecompressorHttp2FrameReader(Http2Connection connection) {
- this(connection, false);
- }
-
- /**
- * Create a new instance.
- * @param strict
- * <ul>
- * <li>{@code true} to use use strict handling of deflate if used</li>
- * <li>{@code false} be more lenient with decompression</li>
- * </ul>
- */
- public DecompressorHttp2FrameReader(Http2Connection connection, boolean strict) {
- this.connection = connection;
- this.strict = strict;
-
- connection.addListener(CLEAN_UP_LISTENER);
- }
-
- /**
- * Returns a new {@link EmbeddedChannel} that decodes the HTTP2 message
- * content encoded in the specified {@code contentEncoding}.
- *
- * @param contentEncoding the value of the {@code content-encoding} header
- * @return a new {@link ByteToMessageDecoder} if the specified encoding is supported.
- * {@code null} otherwise (alternatively, you can throw a {@link Http2Exception}
- * to block unknown encoding).
- * @throws Http2Exception If the specified encoding is not not supported and warrants an exception
- */
- protected EmbeddedChannel newContentDecoder(CharSequence contentEncoding) throws Http2Exception {
- if (GZIP.equalsIgnoreCase(contentEncoding) ||
- XGZIP.equalsIgnoreCase(contentEncoding)) {
- return new EmbeddedChannel(ZlibCodecFactory.newZlibDecoder(ZlibWrapper.GZIP));
- }
- if (DEFLATE.equalsIgnoreCase(contentEncoding) ||
- XDEFLATE.equalsIgnoreCase(contentEncoding)) {
- final ZlibWrapper wrapper = strict ? ZlibWrapper.ZLIB : ZlibWrapper.ZLIB_OR_NONE;
- // To be strict, 'deflate' means ZLIB, but some servers were not implemented correctly.
- return new EmbeddedChannel(ZlibCodecFactory.newZlibDecoder(wrapper));
- }
- // 'identity' or unsupported
- return null;
- }
-
- /**
- * Returns the expected content encoding of the decoded content.
- * This getMethod returns {@code "identity"} by default, which is the case for
- * most decoders.
- *
- * @param contentEncoding the value of the {@code content-encoding} header
- * @return the expected content encoding of the new content.
- * @throws Http2Exception if the {@code contentEncoding} is not supported and warrants an exception
- */
- protected AsciiString getTargetContentEncoding(
- @SuppressWarnings("UnusedParameters") CharSequence contentEncoding) throws Http2Exception {
- return HttpHeaders.Values.IDENTITY;
- }
-
- /**
- * Checks if a new decoder object is needed for the stream identified by {@code streamId}.
- * This method will modify the {@code content-encoding} header contained in {@code builder}.
- * @param streamId The identifier for the headers inside {@code builder}
- * @param builder Object representing headers which have been read
- * @param endOfStream Indicates if the stream has ended
- * @throws Http2Exception If the {@code content-encoding} is not supported
- */
- private void initDecoder(int streamId, Http2Headers headers, boolean endOfStream)
- throws Http2Exception {
- // Convert the names into a case-insensitive map.
- final Http2Stream stream = connection.stream(streamId);
- if (stream != null) {
- EmbeddedChannel decoder = stream.decompressor();
- if (decoder == null) {
- if (!endOfStream) {
- // Determine the content encoding.
- AsciiString contentEncoding = headers.get(CONTENT_ENCODING_LOWER_CASE);
- if (contentEncoding == null) {
- contentEncoding = IDENTITY;
- }
- decoder = newContentDecoder(contentEncoding);
- if (decoder != null) {
- stream.decompressor(decoder);
- // Decode the content and remove or replace the existing headers
- // so that the message looks like a decoded message.
- AsciiString targetContentEncoding = getTargetContentEncoding(contentEncoding);
- if (IDENTITY.equalsIgnoreCase(targetContentEncoding)) {
- headers.remove(CONTENT_ENCODING_LOWER_CASE);
- } else {
- headers.set(CONTENT_ENCODING_LOWER_CASE, targetContentEncoding);
- }
- }
- }
- } else if (endOfStream) {
- cleanup(stream, decoder);
- }
- if (decoder != null) {
- // The content length will be for the compressed data. Since we will decompress the data
- // this content-length will not be correct. Instead of queuing messages or delaying sending
- // header frames...just remove the content-length header
- headers.remove(CONTENT_LENGTH_LOWER_CASE);
- }
- }
- }
-
- /**
- * Release remaining content from the {@link EmbeddedChannel} and remove the decoder from the {@link Http2Stream}.
- * @param stream The stream for which {@code decoder} is the decompressor for
- * @param decoder The decompressor for {@code stream}
- */
- private static void cleanup(Http2Stream stream, EmbeddedChannel decoder) {
- if (decoder.finish()) {
- for (;;) {
- final ByteBuf buf = decoder.readInbound();
- if (buf == null) {
- break;
- }
- buf.release();
- }
- }
- stream.decompressor(null);
- }
-
- /**
- * Read the next decoded {@link ByteBuf} from the {@link EmbeddedChannel} or {@code null} if one does not exist.
- * @param decoder The channel to read from
- * @return The next decoded {@link ByteBuf} from the {@link EmbeddedChannel} or {@code null} if one does not exist
- */
- private static ByteBuf nextReadableBuf(EmbeddedChannel decoder) {
- for (;;) {
- final ByteBuf buf = decoder.readInbound();
- if (buf == null) {
- return null;
- }
- if (!buf.isReadable()) {
- buf.release();
- continue;
- }
- return buf;
- }
- }
-
- @Override
- protected void notifyListenerOnDataRead(ChannelHandlerContext ctx, int streamId, ByteBuf data, int padding,
- boolean endOfStream, Http2FrameListener listener) throws Http2Exception {
- final Http2Stream stream = connection.stream(streamId);
- final EmbeddedChannel decoder = stream == null ? null : stream.decompressor();
- if (decoder == null) {
- super.notifyListenerOnDataRead(ctx, streamId, data, padding, endOfStream, listener);
- } else {
- // call retain here as it will call release after its written to the channel
- decoder.writeInbound(data.retain());
- ByteBuf buf = nextReadableBuf(decoder);
- if (buf == null) {
- if (endOfStream) {
- super.notifyListenerOnDataRead(ctx, streamId, Unpooled.EMPTY_BUFFER, padding, true, listener);
- }
- // END_STREAM is not set and the data could not be decoded yet.
- // The assumption has to be there will be more data frames to complete the decode.
- // We don't have enough information here to know if this is an error.
- } else {
- for (;;) {
- final ByteBuf nextBuf = nextReadableBuf(decoder);
- if (nextBuf == null) {
- super.notifyListenerOnDataRead(ctx, streamId, buf, padding, endOfStream, listener);
- break;
- } else {
- super.notifyListenerOnDataRead(ctx, streamId, buf, padding, false, listener);
- }
- buf = nextBuf;
- }
- }
-
- if (endOfStream) {
- cleanup(stream, decoder);
- }
- }
- }
-
- @Override
- protected void notifyListenerOnHeadersRead(ChannelHandlerContext ctx, int streamId, Http2Headers headers,
- int streamDependency, short weight, boolean exclusive, int padding, boolean endOfStream,
- Http2FrameListener listener) throws Http2Exception {
- initDecoder(streamId, headers, endOfStream);
- super.notifyListenerOnHeadersRead(ctx, streamId, headers, streamDependency, weight,
- exclusive, padding, endOfStream, listener);
- }
-
- @Override
- protected void notifyListenerOnHeadersRead(ChannelHandlerContext ctx, int streamId, Http2Headers headers,
- int padding, boolean endOfStream, Http2FrameListener listener) throws Http2Exception {
- initDecoder(streamId, headers, endOfStream);
- super.notifyListenerOnHeadersRead(ctx, streamId, headers, padding, endOfStream, listener);
- }
-}
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameReader.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameReader.java
index 259cfa5fdcc..18f05c4604d 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameReader.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2FrameReader.java
@@ -367,23 +367,6 @@ private void verifyContinuationFrame() throws Http2Exception {
}
}
- protected void notifyListenerOnDataRead(ChannelHandlerContext ctx, int streamId, ByteBuf data,
- int padding, boolean endOfStream, Http2FrameListener listener) throws Http2Exception {
- listener.onDataRead(ctx, streamId, data, padding, endOfStream);
- }
-
- protected void notifyListenerOnHeadersRead(ChannelHandlerContext ctx, int streamId, Http2Headers headers,
- int streamDependency, short weight, boolean exclusive, int padding,
- boolean endOfStream, Http2FrameListener listener) throws Http2Exception {
- listener.onHeadersRead(ctx, streamId, headers, streamDependency,
- weight, exclusive, padding, endOfStream);
- }
-
- protected void notifyListenerOnHeadersRead(ChannelHandlerContext ctx, int streamId, Http2Headers headers,
- int padding, boolean endOfStream, Http2FrameListener listener) throws Http2Exception {
- listener.onHeadersRead(ctx, streamId, headers, padding, endOfStream);
- }
-
private void readDataFrame(ChannelHandlerContext ctx, ByteBuf payload,
Http2FrameListener listener) throws Http2Exception {
short padding = readPadding(payload);
@@ -396,7 +379,7 @@ private void readDataFrame(ChannelHandlerContext ctx, ByteBuf payload,
}
ByteBuf data = payload.readSlice(dataLength);
- notifyListenerOnDataRead(ctx, streamId, data, padding, flags.endOfStream(), listener);
+ listener.onDataRead(ctx, streamId, data, padding, flags.endOfStream());
payload.skipBytes(payload.readableBytes());
}
@@ -428,8 +411,8 @@ public void processFragment(boolean endOfHeaders, ByteBuf fragment,
final HeadersBlockBuilder hdrBlockBuilder = headersBlockBuilder();
hdrBlockBuilder.addFragment(fragment, ctx.alloc(), endOfHeaders);
if (endOfHeaders) {
- notifyListenerOnHeadersRead(ctx, headersStreamId, hdrBlockBuilder.headers(),
- streamDependency, weight, exclusive, padding, headersFlags.endOfStream(), listener);
+ listener.onHeadersRead(ctx, headersStreamId, hdrBlockBuilder.headers(),
+ streamDependency, weight, exclusive, padding, headersFlags.endOfStream());
close();
}
}
@@ -454,8 +437,8 @@ public void processFragment(boolean endOfHeaders, ByteBuf fragment,
final HeadersBlockBuilder hdrBlockBuilder = headersBlockBuilder();
hdrBlockBuilder.addFragment(fragment, ctx.alloc(), endOfHeaders);
if (endOfHeaders) {
- notifyListenerOnHeadersRead(ctx, headersStreamId, hdrBlockBuilder.headers(), padding,
- headersFlags.endOfStream(), listener);
+ listener.onHeadersRead(ctx, headersStreamId, hdrBlockBuilder.headers(), padding,
+ headersFlags.endOfStream());
close();
}
}
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DelegatingDecompressorFrameListener.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DelegatingDecompressorFrameListener.java
new file mode 100644
index 00000000000..4c9df927cdc
--- /dev/null
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DelegatingDecompressorFrameListener.java
@@ -0,0 +1,242 @@
+/*
+ * Copyright 2014 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License, version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package io.netty.handler.codec.http2;
+
+import static io.netty.handler.codec.http.HttpHeaders.Names.CONTENT_ENCODING;
+import static io.netty.handler.codec.http.HttpHeaders.Names.CONTENT_LENGTH;
+import static io.netty.handler.codec.http.HttpHeaders.Values.DEFLATE;
+import static io.netty.handler.codec.http.HttpHeaders.Values.GZIP;
+import static io.netty.handler.codec.http.HttpHeaders.Values.IDENTITY;
+import static io.netty.handler.codec.http.HttpHeaders.Values.XDEFLATE;
+import static io.netty.handler.codec.http.HttpHeaders.Values.XGZIP;
+import io.netty.buffer.ByteBuf;
+import io.netty.buffer.Unpooled;
+import io.netty.channel.ChannelHandlerContext;
+import io.netty.channel.embedded.EmbeddedChannel;
+import io.netty.handler.codec.AsciiString;
+import io.netty.handler.codec.ByteToMessageDecoder;
+import io.netty.handler.codec.compression.ZlibCodecFactory;
+import io.netty.handler.codec.compression.ZlibWrapper;
+
+/**
+ * A HTTP2 frame listener that will decompress data frames according to the {@code content-encoding} header for each
+ * stream.
+ */
+public class DelegatingDecompressorFrameListener extends Http2FrameListenerDecorator {
+ private static final AsciiString CONTENT_ENCODING_LOWER_CASE = CONTENT_ENCODING.toLowerCase();
+ private static final AsciiString CONTENT_LENGTH_LOWER_CASE = CONTENT_LENGTH.toLowerCase();
+ private static final Http2ConnectionAdapter CLEAN_UP_LISTENER = new Http2ConnectionAdapter() {
+ @Override
+ public void streamRemoved(Http2Stream stream) {
+ final EmbeddedChannel decompressor = stream.decompressor();
+ if (decompressor != null) {
+ cleanup(stream, decompressor);
+ }
+ }
+ };
+
+ private final Http2Connection connection;
+ private final boolean strict;
+
+ public DelegatingDecompressorFrameListener(Http2Connection connection, Http2FrameListener listener) {
+ this(connection, listener, true);
+ }
+
+ public DelegatingDecompressorFrameListener(Http2Connection connection, Http2FrameListener listener,
+ boolean strict) {
+ super(listener);
+ this.connection = connection;
+ this.strict = strict;
+
+ connection.addListener(CLEAN_UP_LISTENER);
+ }
+
+ @Override
+ public void onDataRead(ChannelHandlerContext ctx, int streamId, ByteBuf data, int padding, boolean endOfStream)
+ throws Http2Exception {
+ final Http2Stream stream = connection.stream(streamId);
+ final EmbeddedChannel decompressor = stream == null ? null : stream.decompressor();
+ if (decompressor == null) {
+ listener.onDataRead(ctx, streamId, data, padding, endOfStream);
+ return;
+ }
+
+ try {
+ // call retain here as it will call release after its written to the channel
+ decompressor.writeInbound(data.retain());
+ ByteBuf buf = nextReadableBuf(decompressor);
+ if (buf == null) {
+ if (endOfStream) {
+ listener.onDataRead(ctx, streamId, Unpooled.EMPTY_BUFFER, padding, true);
+ }
+ // END_STREAM is not set and the data could not be decoded yet.
+ // The assumption has to be there will be more data frames to complete the decode.
+ // We don't have enough information here to know if this is an error.
+ } else {
+ for (;;) {
+ final ByteBuf nextBuf = nextReadableBuf(decompressor);
+ if (nextBuf == null) {
+ listener.onDataRead(ctx, streamId, buf, padding, endOfStream);
+ break;
+ } else {
+ listener.onDataRead(ctx, streamId, buf, padding, false);
+ }
+ buf = nextBuf;
+ }
+ }
+ } finally {
+ if (endOfStream) {
+ cleanup(stream, decompressor);
+ }
+ }
+ }
+
+ @Override
+ public void onHeadersRead(ChannelHandlerContext ctx, int streamId, Http2Headers headers, int padding,
+ boolean endStream) throws Http2Exception {
+ initDecompressor(streamId, headers, endStream);
+ listener.onHeadersRead(ctx, streamId, headers, padding, endStream);
+ }
+
+ @Override
+ public void onHeadersRead(ChannelHandlerContext ctx, int streamId, Http2Headers headers, int streamDependency,
+ short weight, boolean exclusive, int padding, boolean endStream) throws Http2Exception {
+ initDecompressor(streamId, headers, endStream);
+ listener.onHeadersRead(ctx, streamId, headers, streamDependency, weight, exclusive, padding, endStream);
+ }
+
+ /**
+ * Returns a new {@link EmbeddedChannel} that decodes the HTTP2 message content encoded in the specified
+ * {@code contentEncoding}.
+ *
+ * @param contentEncoding the value of the {@code content-encoding} header
+ * @return a new {@link ByteToMessageDecoder} if the specified encoding is supported. {@code null} otherwise
+ * (alternatively, you can throw a {@link Http2Exception} to block unknown encoding).
+ * @throws Http2Exception If the specified encoding is not not supported and warrants an exception
+ */
+ protected EmbeddedChannel newContentDecompressor(AsciiString contentEncoding) throws Http2Exception {
+ if (GZIP.equalsIgnoreCase(contentEncoding) || XGZIP.equalsIgnoreCase(contentEncoding)) {
+ return new EmbeddedChannel(ZlibCodecFactory.newZlibDecoder(ZlibWrapper.GZIP));
+ }
+ if (DEFLATE.equalsIgnoreCase(contentEncoding) || XDEFLATE.equalsIgnoreCase(contentEncoding)) {
+ final ZlibWrapper wrapper = strict ? ZlibWrapper.ZLIB : ZlibWrapper.ZLIB_OR_NONE;
+ // To be strict, 'deflate' means ZLIB, but some servers were not implemented correctly.
+ return new EmbeddedChannel(ZlibCodecFactory.newZlibDecoder(wrapper));
+ }
+ // 'identity' or unsupported
+ return null;
+ }
+
+ /**
+ * Returns the expected content encoding of the decoded content. This getMethod returns {@code "identity"} by
+ * default, which is the case for most decompressors.
+ *
+ * @param contentEncoding the value of the {@code content-encoding} header
+ * @return the expected content encoding of the new content.
+ * @throws Http2Exception if the {@code contentEncoding} is not supported and warrants an exception
+ */
+ protected AsciiString getTargetContentEncoding(@SuppressWarnings("UnusedParameters") AsciiString contentEncoding)
+ throws Http2Exception {
+ return IDENTITY;
+ }
+
+ /**
+ * Checks if a new decompressor object is needed for the stream identified by {@code streamId}.
+ * This method will modify the {@code content-encoding} header contained in {@code headers}.
+ *
+ * @param streamId The identifier for the headers inside {@code headers}
+ * @param headers Object representing headers which have been read
+ * @param endOfStream Indicates if the stream has ended
+ * @throws Http2Exception If the {@code content-encoding} is not supported
+ */
+ private void initDecompressor(int streamId, Http2Headers headers, boolean endOfStream) throws Http2Exception {
+ final Http2Stream stream = connection.stream(streamId);
+ if (stream == null) {
+ return;
+ }
+
+ EmbeddedChannel decompressor = stream.decompressor();
+ if (decompressor == null) {
+ if (!endOfStream) {
+ // Determine the content encoding.
+ AsciiString contentEncoding = headers.get(CONTENT_ENCODING_LOWER_CASE);
+ if (contentEncoding == null) {
+ contentEncoding = IDENTITY;
+ }
+ decompressor = newContentDecompressor(contentEncoding);
+ if (decompressor != null) {
+ stream.decompressor(decompressor);
+ // Decode the content and remove or replace the existing headers
+ // so that the message looks like a decoded message.
+ AsciiString targetContentEncoding = getTargetContentEncoding(contentEncoding);
+ if (IDENTITY.equalsIgnoreCase(targetContentEncoding)) {
+ headers.remove(CONTENT_ENCODING_LOWER_CASE);
+ } else {
+ headers.set(CONTENT_ENCODING_LOWER_CASE, targetContentEncoding);
+ }
+ }
+ }
+ } else if (endOfStream) {
+ cleanup(stream, decompressor);
+ }
+ if (decompressor != null) {
+ // The content length will be for the compressed data. Since we will decompress the data
+ // this content-length will not be correct. Instead of queuing messages or delaying sending
+ // header frames...just remove the content-length header
+ headers.remove(CONTENT_LENGTH_LOWER_CASE);
+ }
+ }
+
+ /**
+ * Release remaining content from the {@link EmbeddedChannel} and remove the decompressor
+ * from the {@link Http2Stream}.
+ *
+ * @param stream The stream for which {@code decompressor} is the decompressor for
+ * @param decompressor The decompressor for {@code stream}
+ */
+ private static void cleanup(Http2Stream stream, EmbeddedChannel decompressor) {
+ if (decompressor.finish()) {
+ for (;;) {
+ final ByteBuf buf = decompressor.readInbound();
+ if (buf == null) {
+ break;
+ }
+ buf.release();
+ }
+ }
+ stream.decompressor(null);
+ }
+
+ /**
+ * Read the next decompressed {@link ByteBuf} from the {@link EmbeddedChannel}
+ * or {@code null} if one does not exist.
+ *
+ * @param decompressor The channel to read from
+ * @return The next decoded {@link ByteBuf} from the {@link EmbeddedChannel} or {@code null} if one does not exist
+ */
+ private static ByteBuf nextReadableBuf(EmbeddedChannel decompressor) {
+ for (;;) {
+ final ByteBuf buf = decompressor.readInbound();
+ if (buf == null) {
+ return null;
+ }
+ if (!buf.isReadable()) {
+ buf.release();
+ continue;
+ }
+ return buf;
+ }
+ }
+}
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2FrameListenerDecorator.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2FrameListenerDecorator.java
new file mode 100644
index 00000000000..453fc35a5e3
--- /dev/null
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2FrameListenerDecorator.java
@@ -0,0 +1,105 @@
+/*
+ * Copyright 2014 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License, version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package io.netty.handler.codec.http2;
+
+import io.netty.buffer.ByteBuf;
+import io.netty.channel.ChannelHandlerContext;
+
+/**
+ * Provides a decorator around a {@link Http2FrameListener} and delegates all method calls
+ */
+public class Http2FrameListenerDecorator implements Http2FrameListener {
+ protected final Http2FrameListener listener;
+
+ public Http2FrameListenerDecorator(Http2FrameListener listener) {
+ if (listener == null) {
+ throw new NullPointerException("listener");
+ }
+ this.listener = listener;
+ }
+
+ @Override
+ public void onDataRead(ChannelHandlerContext ctx, int streamId, ByteBuf data, int padding, boolean endOfStream)
+ throws Http2Exception {
+ listener.onDataRead(ctx, streamId, data, padding, endOfStream);
+ }
+
+ @Override
+ public void onHeadersRead(ChannelHandlerContext ctx, int streamId, Http2Headers headers, int padding,
+ boolean endStream) throws Http2Exception {
+ listener.onHeadersRead(ctx, streamId, headers, padding, endStream);
+ }
+
+ @Override
+ public void onHeadersRead(ChannelHandlerContext ctx, int streamId, Http2Headers headers, int streamDependency,
+ short weight, boolean exclusive, int padding, boolean endStream) throws Http2Exception {
+ listener.onHeadersRead(ctx, streamId, headers, streamDependency, weight, exclusive, padding, endStream);
+ }
+
+ @Override
+ public void onPriorityRead(ChannelHandlerContext ctx, int streamId, int streamDependency, short weight,
+ boolean exclusive) throws Http2Exception {
+ listener.onPriorityRead(ctx, streamId, streamDependency, weight, exclusive);
+ }
+
+ @Override
+ public void onRstStreamRead(ChannelHandlerContext ctx, int streamId, long errorCode) throws Http2Exception {
+ listener.onRstStreamRead(ctx, streamId, errorCode);
+ }
+
+ @Override
+ public void onSettingsAckRead(ChannelHandlerContext ctx) throws Http2Exception {
+ listener.onSettingsAckRead(ctx);
+ }
+
+ @Override
+ public void onSettingsRead(ChannelHandlerContext ctx, Http2Settings settings) throws Http2Exception {
+ listener.onSettingsRead(ctx, settings);
+ }
+
+ @Override
+ public void onPingRead(ChannelHandlerContext ctx, ByteBuf data) throws Http2Exception {
+ listener.onPingRead(ctx, data);
+ }
+
+ @Override
+ public void onPingAckRead(ChannelHandlerContext ctx, ByteBuf data) throws Http2Exception {
+ listener.onPingAckRead(ctx, data);
+ }
+
+ @Override
+ public void onPushPromiseRead(ChannelHandlerContext ctx, int streamId, int promisedStreamId, Http2Headers headers,
+ int padding) throws Http2Exception {
+ listener.onPushPromiseRead(ctx, streamId, promisedStreamId, headers, padding);
+ }
+
+ @Override
+ public void onGoAwayRead(ChannelHandlerContext ctx, int lastStreamId, long errorCode, ByteBuf debugData)
+ throws Http2Exception {
+ listener.onGoAwayRead(ctx, lastStreamId, errorCode, debugData);
+ }
+
+ @Override
+ public void onWindowUpdateRead(ChannelHandlerContext ctx, int streamId, int windowSizeIncrement)
+ throws Http2Exception {
+ listener.onWindowUpdateRead(ctx, streamId, windowSizeIncrement);
+ }
+
+ @Override
+ public void onUnknownFrame(ChannelHandlerContext ctx, byte frameType, int streamId, Http2Flags flags,
+ ByteBuf payload) {
+ listener.onUnknownFrame(ctx, frameType, streamId, flags, payload);
+ }
+}
diff --git a/example/src/main/java/io/netty/example/http2/client/Http2ClientInitializer.java b/example/src/main/java/io/netty/example/http2/client/Http2ClientInitializer.java
index 71e2efa33ec..77041db03e6 100644
--- a/example/src/main/java/io/netty/example/http2/client/Http2ClientInitializer.java
+++ b/example/src/main/java/io/netty/example/http2/client/Http2ClientInitializer.java
@@ -25,11 +25,12 @@
import io.netty.handler.codec.http.HttpClientUpgradeHandler;
import io.netty.handler.codec.http.HttpMethod;
import io.netty.handler.codec.http.HttpVersion;
-import io.netty.handler.codec.http2.DecompressorHttp2FrameReader;
import io.netty.handler.codec.http2.DefaultHttp2Connection;
+import io.netty.handler.codec.http2.DefaultHttp2FrameReader;
import io.netty.handler.codec.http2.DefaultHttp2FrameWriter;
import io.netty.handler.codec.http2.DefaultHttp2InboundFlowController;
import io.netty.handler.codec.http2.DefaultHttp2OutboundFlowController;
+import io.netty.handler.codec.http2.DelegatingDecompressorFrameListener;
import io.netty.handler.codec.http2.DelegatingHttp2ConnectionHandler;
import io.netty.handler.codec.http2.DelegatingHttp2HttpConnectionHandler;
import io.netty.handler.codec.http2.Http2ClientUpgradeCodec;
@@ -66,10 +67,11 @@ public void initChannel(SocketChannel ch) throws Exception {
final Http2Connection connection = new DefaultHttp2Connection(false);
final Http2FrameWriter frameWriter = frameWriter();
connectionHandler = new DelegatingHttp2HttpConnectionHandler(connection,
- frameReader(connection), frameWriter,
+ frameReader(), frameWriter,
new DefaultHttp2InboundFlowController(connection, frameWriter),
new DefaultHttp2OutboundFlowController(connection, frameWriter),
- InboundHttp2ToHttpAdapter.newInstance(connection, maxContentLength));
+ new DelegatingDecompressorFrameListener(connection,
+ InboundHttp2ToHttpAdapter.newInstance(connection, maxContentLength)));
responseHandler = new HttpResponseHandler();
settingsHandler = new Http2SettingsHandler(ch.newPromise());
if (sslCtx != null) {
@@ -146,8 +148,8 @@ public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exc
}
}
- private static Http2FrameReader frameReader(Http2Connection connection) {
- return new Http2InboundFrameLogger(new DecompressorHttp2FrameReader(connection), logger);
+ private static Http2FrameReader frameReader() {
+ return new Http2InboundFrameLogger(new DefaultHttp2FrameReader(), logger);
}
private static Http2FrameWriter frameWriter() {
| diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DataCompressionHttp2Test.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DataCompressionHttp2Test.java
index e07baabee0d..53357f9b37e 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DataCompressionHttp2Test.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DataCompressionHttp2Test.java
@@ -80,6 +80,7 @@ public class DataCompressionHttp2Test {
private ServerBootstrap sb;
private Bootstrap cb;
private Channel serverChannel;
+ private Channel serverConnectedChannel;
private Channel clientChannel;
private CountDownLatch serverLatch;
private CountDownLatch clientLatch;
@@ -105,10 +106,12 @@ public void setup() throws InterruptedException {
@Override
protected void initChannel(Channel ch) throws Exception {
ChannelPipeline p = ch.pipeline();
- serverAdapter = new Http2TestUtil.FrameAdapter(serverConnection, new DecompressorHttp2FrameReader(
- serverConnection), serverListener, serverLatch, false);
+ serverAdapter = new Http2TestUtil.FrameAdapter(serverConnection,
+ new DelegatingDecompressorFrameListener(serverConnection, serverListener),
+ serverLatch, false);
p.addLast("reader", serverAdapter);
p.addLast(Http2CodecUtil.ignoreSettingsHandler());
+ serverConnectedChannel = ch;
}
});
@@ -284,9 +287,10 @@ public void run() {
}
@Test
- public void deflateEncodingSingleLargeMessage() throws Exception {
- serverLatch(new CountDownLatch(2));
- final ByteBuf data = Unpooled.buffer(1 << 16);
+ public void deflateEncodingSingleLargeMessageReducedWindow() throws Exception {
+ serverLatch(new CountDownLatch(3));
+ final int BUFFER_SIZE = 1 << 16;
+ final ByteBuf data = Unpooled.buffer(BUFFER_SIZE);
final EmbeddedChannel encoder = new EmbeddedChannel(ZlibCodecFactory.newZlibEncoder(ZlibWrapper.ZLIB));
try {
for (int i = 0; i < data.capacity(); ++i) {
@@ -296,12 +300,25 @@ public void deflateEncodingSingleLargeMessage() throws Exception {
final Http2Headers headers =
new DefaultHttp2Headers().method(POST).path(PATH)
.set(CONTENT_ENCODING.toLowerCase(), DEFLATE);
+ final Http2Settings settings = new Http2Settings();
+ // Assume the compression operation will reduce the size by at least 10 bytes
+ settings.initialWindowSize(BUFFER_SIZE - 10);
+ runInChannel(serverConnectedChannel, new Http2Runnable() {
+ @Override
+ public void run() {
+ frameWriter.writeSettings(ctxServer(), settings, newPromiseServer());
+ ctxServer().flush();
+ }
+ });
+ awaitClient();
+
// Required because the decompressor intercepts the onXXXRead events before
// our {@link Http2TestUtil$FrameAdapter} does.
Http2TestUtil.FrameAdapter.getOrCreateStream(serverConnection, 3, false);
runInChannel(clientChannel, new Http2Runnable() {
@Override
public void run() {
+ frameWriter.writeSettings(ctxClient(), settings, newPromiseClient());
frameWriter.writeHeaders(ctxClient(), 3, headers, 0, false, newPromiseClient());
frameWriter.writeData(ctxClient(), 3, encodedData, 0, true, newPromiseClient());
ctxClient().flush();
@@ -364,6 +381,10 @@ private void clientLatch(CountDownLatch latch) {
}
}
+ private void awaitClient() throws Exception {
+ clientLatch.await(5, SECONDS);
+ }
+
private void awaitServer() throws Exception {
serverLatch.await(5, SECONDS);
}
@@ -375,4 +396,12 @@ private ChannelHandlerContext ctxClient() {
private ChannelPromise newPromiseClient() {
return ctxClient().newPromise();
}
+
+ private ChannelHandlerContext ctxServer() {
+ return serverConnectedChannel.pipeline().firstContext();
+ }
+
+ private ChannelPromise newPromiseServer() {
+ return ctxServer().newPromise();
+ }
}
| test | train | 2014-09-21T16:02:40 | 2014-09-17T18:46:02Z | Scottmitch | val |
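The gold patch in the record above selects a decompressor purely from the `content-encoding` header value (`newContentDecompressor`: gzip/x-gzip get a GZIP decoder, deflate/x-deflate a ZLIB decoder, anything else — including `identity` — gets none). Outside of Netty, the same decision logic can be sketched with only `java.util.zip`; this is an illustrative stand-in, not the patch itself, and the class/method names below are invented:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;
import java.util.zip.InflaterInputStream;

public class ContentDecoding {

    // Same branch structure as the patch's newContentDecompressor():
    // gzip/x-gzip -> GZIP decoder, deflate/x-deflate -> ZLIB decoder,
    // 'identity' or anything unsupported -> null (no decompressor).
    static InputStream newDecompressor(String contentEncoding, byte[] body) {
        try {
            ByteArrayInputStream in = new ByteArrayInputStream(body);
            if ("gzip".equalsIgnoreCase(contentEncoding) || "x-gzip".equalsIgnoreCase(contentEncoding)) {
                return new GZIPInputStream(in);
            }
            if ("deflate".equalsIgnoreCase(contentEncoding) || "x-deflate".equalsIgnoreCase(contentEncoding)) {
                // Strict ZLIB wrapper. The real patch can also tolerate raw
                // deflate (ZLIB_OR_NONE) for non-strict peers.
                return new InflaterInputStream(in);
            }
            return null;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    static byte[] gzip(byte[] data) {
        try {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            GZIPOutputStream gz = new GZIPOutputStream(out);
            gz.write(data);
            gz.close();
            return out.toByteArray();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    static byte[] readAll(InputStream in) {
        try {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            for (int n; (n = in.read(buf)) != -1;) {
                out.write(buf, 0, n);
            }
            return out.toByteArray();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        byte[] payload = "hello http2".getBytes(java.nio.charset.StandardCharsets.US_ASCII);
        byte[] decoded = readAll(newDecompressor("gzip", gzip(payload)));
        System.out.println(java.util.Arrays.equals(decoded, payload));
        System.out.println(newDecompressor("identity", payload) == null);
    }
}
```

Note how `identity` yields `null` rather than an error — just as the patch leaves the stream untouched and only removes `content-length` when a decompressor actually exists.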
netty/netty/2971_2972 | netty/netty | netty/netty/2971 | netty/netty/2972 | [
"timestamp(timedelta=6595.0, similarity=0.8875179716498717)"
] | 276b826b59bfa4536f6ca336bbb70e94b2b559aa | 4990817430e31bba920e1d7db7a5f2b962b46cad | [
"Fix is provided in #2972 \n"
] | [
"Nit: Consider saving a reference to the `(FullHttpResponse) msg` here to avoid casting multiple times.\n",
"done\n"
] | 2014-10-06T22:11:38Z | [
"defect"
] | WebSocketClientProtocolHandshakeHandler leaks | The new unit test in #2970 did not find the issue it intended to check, but it still detected that there is a leak somewhere. It seems to be in the WebSocketClientProtocolHandshakeHandler, which never releases the received HTTP handshake response. The log showing the leak is attached below:
```
05:39:11.993 [main] ERROR io.netty.util.ResourceLeakDetector - LEAK: ByteBuf.release() was not called before it's garbage-collected.
Recent access records: 1
#1:
Hint: 'io.netty.handler.codec.http.websocketx.WebSocketClientProtocolHandshakeHandler' will handle the message from this point.
io.netty.buffer.CompositeByteBuf.touch(CompositeByteBuf.java:1580)
io.netty.buffer.CompositeByteBuf.touch(CompositeByteBuf.java:40)
io.netty.buffer.DefaultByteBufHolder.touch(DefaultByteBufHolder.java:79)
io.netty.handler.codec.http.HttpObjectAggregator$AggregatedFullHttpMessage.touch(HttpObjectAggregator.java:243)
io.netty.handler.codec.http.HttpObjectAggregator$AggregatedFullHttpResponse.touch(HttpObjectAggregator.java:444)
io.netty.handler.codec.http.HttpObjectAggregator$AggregatedFullHttpResponse.touch(HttpObjectAggregator.java:363)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:389)
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
io.netty.channel.embedded.EmbeddedEventLoop.invokeChannelRead(EmbeddedEventLoop.java:157)
io.netty.channel.PausableChannelEventExecutor.invokeChannelRead(PausableChannelEventExecutor.java:86)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:390)
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:162)
io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
io.netty.channel.embedded.EmbeddedEventLoop.invokeChannelRead(EmbeddedEventLoop.java:157)
io.netty.channel.PausableChannelEventExecutor.invokeChannelRead(PausableChannelEventExecutor.java:86)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:390)
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:896)
io.netty.channel.embedded.EmbeddedChannel.writeInbound(EmbeddedChannel.java:176)
io.netty.handler.codec.http.websocketx.WebSocketHandshakeHandOverTest.transferAllWithMerge(WebSocketHandshakeHandOverTest.java:115)
io.netty.handler.codec.http.websocketx.WebSocketHandshakeHandOverTest.testHandover(WebSocketHandshakeHandOverTest.java:86)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:606)
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
org.junit.runners.ParentRunner.run(ParentRunner.java:309)
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
Created at:
io.netty.buffer.CompositeByteBuf.<init>(CompositeByteBuf.java:59)
io.netty.buffer.Unpooled.compositeBuffer(Unpooled.java:355)
io.netty.handler.codec.MessageAggregator.decode(MessageAggregator.java:241)
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
io.netty.channel.embedded.EmbeddedEventLoop.invokeChannelRead(EmbeddedEventLoop.java:157)
io.netty.channel.PausableChannelEventExecutor.invokeChannelRead(PausableChannelEventExecutor.java:86)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:390)
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:162)
io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
io.netty.channel.embedded.EmbeddedEventLoop.invokeChannelRead(EmbeddedEventLoop.java:157)
io.netty.channel.PausableChannelEventExecutor.invokeChannelRead(PausableChannelEventExecutor.java:86)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:390)
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:896)
io.netty.channel.embedded.EmbeddedChannel.writeInbound(EmbeddedChannel.java:176)
io.netty.handler.codec.http.websocketx.WebSocketHandshakeHandOverTest.transferAllWithMerge(WebSocketHandshakeHandOverTest.java:115)
io.netty.handler.codec.http.websocketx.WebSocketHandshakeHandOverTest.testHandover(WebSocketHandshakeHandOverTest.java:86)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:606)
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
org.junit.runners.ParentRunner.run(ParentRunner.java:309)
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
```
| [
"codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientProtocolHandshakeHandler.java"
] | [
"codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientProtocolHandshakeHandler.java"
] | [] | diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientProtocolHandshakeHandler.java b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientProtocolHandshakeHandler.java
index f9b8b40c314..bc832510819 100644
--- a/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientProtocolHandshakeHandler.java
+++ b/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocketClientProtocolHandshakeHandler.java
@@ -51,13 +51,18 @@ public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception
return;
}
- if (!handshaker.isHandshakeComplete()) {
- handshaker.finishHandshake(ctx.channel(), (FullHttpResponse) msg);
- ctx.fireUserEventTriggered(
- WebSocketClientProtocolHandler.ClientHandshakeStateEvent.HANDSHAKE_COMPLETE);
- ctx.pipeline().remove(this);
- return;
+ FullHttpResponse response = (FullHttpResponse) msg;
+ try {
+ if (!handshaker.isHandshakeComplete()) {
+ handshaker.finishHandshake(ctx.channel(), response);
+ ctx.fireUserEventTriggered(
+ WebSocketClientProtocolHandler.ClientHandshakeStateEvent.HANDSHAKE_COMPLETE);
+ ctx.pipeline().remove(this);
+ return;
+ }
+ throw new IllegalStateException("WebSocketClientHandshaker should have been non finished yet");
+ } finally {
+ response.release();
}
- throw new IllegalStateException("WebSocketClientHandshaker should have been non finished yet");
}
}
| null | val | train | 2014-10-05T04:12:49 | 2014-10-06T22:05:00Z | Matthias247 | val |
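The gold patch in the record above is the classic try/finally release pattern: whatever happens while finishing the handshake — success, or the `IllegalStateException` thrown when the handshaker is already finished — the reference-counted `FullHttpResponse` is released exactly once. A dependency-free sketch of that pattern (the `RefCounted` class below is a hypothetical stand-in, not Netty's `ReferenceCounted`):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ReleaseInFinally {

    /** Minimal stand-in for a reference-counted message such as FullHttpResponse. */
    static final class RefCounted {
        final AtomicInteger refCnt = new AtomicInteger(1);
        boolean release() {
            return refCnt.decrementAndGet() == 0;
        }
    }

    /** Pattern from the patch: do the work, but guarantee release() in finally. */
    static boolean handle(RefCounted response, boolean handshakeAlreadyComplete) {
        try {
            if (!handshakeAlreadyComplete) {
                // ... finishHandshake(...), fire HANDSHAKE_COMPLETE, remove handler ...
                return true;
            }
            throw new IllegalStateException("handshaker already finished");
        } finally {
            response.release(); // runs on both the normal and the exceptional path
        }
    }

    public static void main(String[] args) {
        RefCounted ok = new RefCounted();
        handle(ok, false);
        System.out.println(ok.refCnt.get()); // released on success

        RefCounted bad = new RefCounted();
        try {
            handle(bad, true);
        } catch (IllegalStateException expected) {
            // swallow: we only care that release still happened
        }
        System.out.println(bad.refCnt.get());
    }
}
```

Without the `finally`, the exceptional branch would skip the release — which is exactly the leak the `ResourceLeakDetector` trace in the problem statement reports (here it leaked even on the normal branch, since the original code never released at all).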
netty/netty/2930_2979 | netty/netty | netty/netty/2930 | netty/netty/2979 | [
"timestamp(timedelta=24.0, similarity=0.8579470291274585)"
] | 276b826b59bfa4536f6ca336bbb70e94b2b559aa | 1cb403f5d63abc43437bc827f16c2a28fbb0b901 | [
"Problems associated with this issue (if it is possible we are missing exceptions):\n1. Not setting promise object failure clauses (and thus not completing them)\n2. Leaking buffers (i.e. https://github.com/netty/netty/blob/master/codec-http2/src/main/java/io/netty/handler/codec/http2/AbstractHttp2ConnectionHandler... | [
"@normanmaurer - Can you weigh in on this. `isOpen()` seems to be lower in the state hierarchy...should we be using this method?\n",
"remote -> local\n",
"@nmittler - The `onException` method is calling `toHttp2Exception` which seems to do some duplicate work as the `getEmbeddedHttp2Exception` method called h... | 2014-10-08T21:18:43Z | [
"defect"
] | HTTP/2 AbstractHttp2ConnectionHandler exception catching | @nmittler - Is it possible that other exception types are thrown by methods that `AbstractHttp2ConnectionHandler` calls, which we are not catching? Should we be catching a more general exception type than `Http2Exception`? For example, maybe just `Exception` or `Throwable`? What if an unchecked exception is thrown somewhere?
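The patch for this record replaces the `toHttp2Exception(future.cause())` conversion with a general `lifecycleManager.onException(ctx, cause)` path, so any `Throwable` reaches the handler. The lookup it keeps, `getEmbeddedHttp2Exception`, simply walks the causality chain for the first `Http2Exception`. A generic, stdlib-only sketch of that walk (class and method names below are illustrative, not Netty's):

```java
public class EmbeddedCause {

    /**
     * Walks the causality chain looking for the first cause of the given type --
     * the same iteration Http2CodecUtil.getEmbeddedHttp2Exception performs
     * for Http2Exception. Returns null if no such cause exists.
     */
    static <T extends Throwable> T firstCauseOf(Throwable t, Class<T> type) {
        while (t != null) {
            if (type.isInstance(t)) {
                return type.cast(t);
            }
            t = t.getCause();
        }
        return null;
    }

    public static void main(String[] args) {
        Throwable chain = new RuntimeException("outer",
                new java.io.IOException("middle", new IllegalStateException("inner")));
        System.out.println(firstCauseOf(chain, IllegalStateException.class).getMessage());
        System.out.println(firstCauseOf(chain, NumberFormatException.class));
    }
}
```

If the walk finds nothing, the caller decides how to wrap or report the raw `Throwable` — which is the judgment call this issue is about.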
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java",
"codec-http2/src/main/java/io/netty/handler/co... | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java",
"codec-http2/src/main/java/io/netty/handler/co... | [
"codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoderTest.java",
"codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoderTest.java",
"codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionHandlerTest.java",
"codec-http2/src/test/java... | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java
index 3dea034e1be..310e10a8b48 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java
@@ -46,29 +46,78 @@ public class DefaultHttp2ConnectionDecoder implements Http2ConnectionDecoder {
private final Http2FrameListener listener;
private boolean prefaceReceived;
- public DefaultHttp2ConnectionDecoder(Http2Connection connection, Http2FrameReader frameReader,
- Http2InboundFlowController inboundFlow, Http2ConnectionEncoder encoder,
- Http2LifecycleManager lifecycleManager, Http2FrameListener listener) {
- this.connection = checkNotNull(connection, "connection");
- this.frameReader = checkNotNull(frameReader, "frameReader");
- this.lifecycleManager = checkNotNull(lifecycleManager, "lifecycleManager");
- this.encoder = checkNotNull(encoder, "encoder");
- this.inboundFlow = checkNotNull(inboundFlow, "inboundFlow");
- this.listener = checkNotNull(listener, "listener");
+ /**
+ * Builder for instances of {@link DefaultHttp2ConnectionDecoder}.
+ */
+ public static class Builder implements Http2ConnectionDecoder.Builder {
+ private Http2Connection connection;
+ private Http2LifecycleManager lifecycleManager;
+ private Http2ConnectionEncoder encoder;
+ private Http2FrameReader frameReader;
+ private Http2InboundFlowController inboundFlow;
+ private Http2FrameListener listener;
+
+ @Override
+ public Builder connection(Http2Connection connection) {
+ this.connection = connection;
+ return this;
+ }
+
+ @Override
+ public Builder lifecycleManager(Http2LifecycleManager lifecycleManager) {
+ this.lifecycleManager = lifecycleManager;
+ return this;
+ }
+
+ @Override
+ public Builder inboundFlow(Http2InboundFlowController inboundFlow) {
+ this.inboundFlow = inboundFlow;
+ return this;
+ }
+
+ @Override
+ public Builder frameReader(Http2FrameReader frameReader) {
+ this.frameReader = frameReader;
+ return this;
+ }
+
+ @Override
+ public Builder listener(Http2FrameListener listener) {
+ this.listener = listener;
+ return this;
+ }
+
+ @Override
+ public Builder encoder(Http2ConnectionEncoder encoder) {
+ this.encoder = encoder;
+ return this;
+ }
+
+ @Override
+ public Http2ConnectionDecoder build() {
+ return new DefaultHttp2ConnectionDecoder(this);
+ }
}
- public Http2Connection connection() {
- return connection;
+ public static Builder newBuilder() {
+ return new Builder();
}
- public Http2FrameListener listener() {
- return listener;
+ protected DefaultHttp2ConnectionDecoder(Builder builder) {
+ this.connection = checkNotNull(builder.connection, "connection");
+ this.frameReader = checkNotNull(builder.frameReader, "frameReader");
+ this.lifecycleManager = checkNotNull(builder.lifecycleManager, "lifecycleManager");
+ this.encoder = checkNotNull(builder.encoder, "encoder");
+ this.inboundFlow = checkNotNull(builder.inboundFlow, "inboundFlow");
+ this.listener = checkNotNull(builder.listener, "listener");
}
- public Http2LifecycleManager lifecycleManager() {
- return lifecycleManager;
+ @Override
+ public Http2Connection connection() {
+ return connection;
}
+ @Override
public boolean prefaceReceived() {
return prefaceReceived;
}
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java
index 9b0cad278c5..6815ebe36a1 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java
@@ -15,7 +15,6 @@
package io.netty.handler.codec.http2;
import static io.netty.handler.codec.http2.Http2CodecUtil.DEFAULT_PRIORITY_WEIGHT;
-import static io.netty.handler.codec.http2.Http2CodecUtil.toHttp2Exception;
import static io.netty.handler.codec.http2.Http2Error.PROTOCOL_ERROR;
import static io.netty.handler.codec.http2.Http2Exception.protocolError;
import static io.netty.handler.codec.http2.Http2Stream.State.HALF_CLOSED_REMOTE;
@@ -42,12 +41,68 @@ public class DefaultHttp2ConnectionEncoder implements Http2ConnectionEncoder {
// This initial capacity is plenty for SETTINGS traffic.
private final ArrayDeque<Http2Settings> outstandingLocalSettingsQueue = new ArrayDeque<Http2Settings>(4);
- public DefaultHttp2ConnectionEncoder(Http2Connection connection, Http2FrameWriter frameWriter,
- Http2OutboundFlowController outboundFlow, Http2LifecycleManager lifecycleManager) {
- this.frameWriter = checkNotNull(frameWriter, "frameWriter");
- this.connection = checkNotNull(connection, "connection");
- this.outboundFlow = checkNotNull(outboundFlow, "outboundFlow");
- this.lifecycleManager = checkNotNull(lifecycleManager, "lifecycleManager");
+ /**
+ * Builder for new instances of {@link DefaultHttp2ConnectionEncoder}.
+ */
+ public static class Builder implements Http2ConnectionEncoder.Builder {
+ protected Http2FrameWriter frameWriter;
+ protected Http2Connection connection;
+ protected Http2OutboundFlowController outboundFlow;
+ protected Http2LifecycleManager lifecycleManager;
+
+ @Override
+ public Builder connection(
+ Http2Connection connection) {
+ this.connection = connection;
+ return this;
+ }
+
+ @Override
+ public Builder lifecycleManager(
+ Http2LifecycleManager lifecycleManager) {
+ this.lifecycleManager = lifecycleManager;
+ return this;
+ }
+
+ @Override
+ public Builder frameWriter(
+ Http2FrameWriter frameWriter) {
+ this.frameWriter = frameWriter;
+ return this;
+ }
+
+ @Override
+ public Builder outboundFlow(
+ Http2OutboundFlowController outboundFlow) {
+ this.outboundFlow = outboundFlow;
+ return this;
+ }
+
+ @Override
+ public Http2ConnectionEncoder build() {
+ return new DefaultHttp2ConnectionEncoder(this);
+ }
+ }
+
+ public static Builder newBuilder() {
+ return new Builder();
+ }
+
+ protected DefaultHttp2ConnectionEncoder(Builder builder) {
+ this.frameWriter = checkNotNull(builder.frameWriter, "frameWriter");
+ this.connection = checkNotNull(builder.connection, "connection");
+ this.outboundFlow = checkNotNull(builder.outboundFlow, "outboundFlow");
+ this.lifecycleManager = checkNotNull(builder.lifecycleManager, "lifecycleManager");
+ }
+
+ @Override
+ public Http2FrameWriter frameWriter() {
+ return frameWriter;
+ }
+
+ @Override
+ public Http2Connection connection() {
+ return connection;
}
@Override
@@ -109,7 +164,7 @@ public ChannelFuture writeData(final ChannelHandlerContext ctx, final int stream
public void operationComplete(ChannelFuture future) throws Exception {
if (!future.isSuccess()) {
// The write failed, handle the error.
- lifecycleManager.onHttp2Exception(ctx, toHttp2Exception(future.cause()));
+ lifecycleManager.onException(ctx, future.cause());
} else if (endStream) {
// Close the local side of the stream if this is the last frame
Http2Stream stream = connection.stream(streamId);
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java
index a22e8dc9df8..2b78e2e9583 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java
@@ -15,7 +15,6 @@
package io.netty.handler.codec.http2;
-import static io.netty.handler.codec.http2.Http2Error.INTERNAL_ERROR;
import static io.netty.util.CharsetUtil.UTF_8;
import static io.netty.util.internal.ObjectUtil.checkNotNull;
import io.netty.buffer.ByteBuf;
@@ -128,19 +127,6 @@ public static ChannelHandler ignoreSettingsHandler() {
return ignoreSettingsHandler;
}
- /**
- * Converts the given cause to a {@link Http2Exception} if it isn't already.
- */
- public static Http2Exception toHttp2Exception(Throwable cause) {
- // Look for an embedded Http2Exception.
- Http2Exception httpException = getEmbeddedHttp2Exception(cause);
- if (httpException != null) {
- return httpException;
- }
-
- return new Http2Exception(INTERNAL_ERROR, cause.getMessage(), cause);
- }
-
/**
      * Iteratively looks through the causality chain for the given exception and returns the first
* {@link Http2Exception} or {@code null} if none.
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionDecoder.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionDecoder.java
index 899a5f6371d..a28da4fcdb4 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionDecoder.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionDecoder.java
@@ -25,6 +25,52 @@
*/
public interface Http2ConnectionDecoder extends Closeable {
+ /**
+ * Builder for new instances of {@link Http2ConnectionDecoder}.
+ */
+ public interface Builder {
+
+ /**
+ * Sets the {@link Http2Connection} to be used when building the decoder.
+ */
+ Builder connection(Http2Connection connection);
+
+ /**
+     * Sets the {@link Http2LifecycleManager} to be used when building the decoder.
+ */
+ Builder lifecycleManager(Http2LifecycleManager lifecycleManager);
+
+ /**
+ * Sets the {@link Http2InboundFlowController} to be used when building the decoder.
+ */
+ Builder inboundFlow(Http2InboundFlowController inboundFlow);
+
+ /**
+ * Sets the {@link Http2FrameReader} to be used when building the decoder.
+ */
+ Builder frameReader(Http2FrameReader frameReader);
+
+ /**
+ * Sets the {@link Http2FrameListener} to be used when building the decoder.
+ */
+ Builder listener(Http2FrameListener listener);
+
+ /**
+ * Sets the {@link Http2ConnectionEncoder} used when building the decoder.
+ */
+ Builder encoder(Http2ConnectionEncoder encoder);
+
+ /**
+ * Creates a new decoder instance.
+ */
+ Http2ConnectionDecoder build();
+ }
+
+ /**
+ * Provides direct access to the underlying connection.
+ */
+ Http2Connection connection();
+
/**
* Called by the {@link Http2ConnectionHandler} to decode the next frame from the input buffer.
*/
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionEncoder.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionEncoder.java
index e6a42045c8a..85b48fc1070 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionEncoder.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionEncoder.java
@@ -14,11 +14,53 @@
*/
package io.netty.handler.codec.http2;
+
/**
  * Handler for outbound traffic on behalf of {@link Http2ConnectionHandler}.
*/
public interface Http2ConnectionEncoder extends Http2FrameWriter, Http2OutboundFlowController {
+ /**
+ * Builder for new instances of {@link Http2ConnectionEncoder}.
+ */
+ public interface Builder {
+
+ /**
+ * Sets the {@link Http2Connection} to be used when building the encoder.
+ */
+ Builder connection(Http2Connection connection);
+
+ /**
+     * Sets the {@link Http2LifecycleManager} to be used when building the encoder.
+ */
+ Builder lifecycleManager(Http2LifecycleManager lifecycleManager);
+
+ /**
+ * Sets the {@link Http2FrameWriter} to be used when building the encoder.
+ */
+ Builder frameWriter(Http2FrameWriter frameWriter);
+
+ /**
+ * Sets the {@link Http2OutboundFlowController} to be used when building the encoder.
+ */
+ Builder outboundFlow(Http2OutboundFlowController outboundFlow);
+
+ /**
+ * Creates a new encoder instance.
+ */
+ Http2ConnectionEncoder build();
+ }
+
+ /**
+ * Provides direct access to the underlying connection.
+ */
+ Http2Connection connection();
+
+ /**
+ * Provides direct access to the underlying frame writer object.
+ */
+ Http2FrameWriter frameWriter();
+
/**
* Gets the local settings on the top of the queue that has been sent but not ACKed. This may
* return {@code null}.
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java
index 6f939eadac1..ce22ce2ffb2 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java
@@ -17,6 +17,8 @@
import static io.netty.handler.codec.http2.Http2CodecUtil.HTTP_UPGRADE_STREAM_ID;
import static io.netty.handler.codec.http2.Http2CodecUtil.connectionPrefaceBuf;
import static io.netty.handler.codec.http2.Http2CodecUtil.getEmbeddedHttp2Exception;
+import static io.netty.handler.codec.http2.Http2Error.INTERNAL_ERROR;
+import static io.netty.handler.codec.http2.Http2Error.NO_ERROR;
import static io.netty.handler.codec.http2.Http2Exception.protocolError;
import static io.netty.util.internal.ObjectUtil.checkNotNull;
import io.netty.buffer.ByteBuf;
@@ -37,13 +39,12 @@
* <p>
* This interface enforces inbound flow control functionality through {@link Http2InboundFlowController}
*/
-public class Http2ConnectionHandler extends ByteToMessageDecoder {
- private final Http2LifecycleManager lifecycleManager;
+public class Http2ConnectionHandler extends ByteToMessageDecoder implements Http2LifecycleManager {
private final Http2ConnectionDecoder decoder;
private final Http2ConnectionEncoder encoder;
- private final Http2Connection connection;
private ByteBuf clientPrefaceString;
private boolean prefaceSent;
+ private ChannelFutureListener closeListener;
public Http2ConnectionHandler(boolean server, Http2FrameListener listener) {
this(new DefaultHttp2Connection(server), listener);
@@ -63,36 +64,47 @@ public Http2ConnectionHandler(Http2Connection connection, Http2FrameReader frame
public Http2ConnectionHandler(Http2Connection connection, Http2FrameReader frameReader,
Http2FrameWriter frameWriter, Http2InboundFlowController inboundFlow,
Http2OutboundFlowController outboundFlow, Http2FrameListener listener) {
- checkNotNull(frameWriter, "frameWriter");
- checkNotNull(inboundFlow, "inboundFlow");
- checkNotNull(outboundFlow, "outboundFlow");
- checkNotNull(listener, "listener");
- this.connection = checkNotNull(connection, "connection");
- this.lifecycleManager = new Http2LifecycleManager(connection, frameWriter);
this.encoder =
- new DefaultHttp2ConnectionEncoder(connection, frameWriter, outboundFlow,
- lifecycleManager);
+ DefaultHttp2ConnectionEncoder.newBuilder().connection(connection)
+ .frameWriter(frameWriter).outboundFlow(outboundFlow).lifecycleManager(this)
+ .build();
this.decoder =
- new DefaultHttp2ConnectionDecoder(connection, frameReader, inboundFlow, encoder,
- lifecycleManager, listener);
+ DefaultHttp2ConnectionDecoder.newBuilder().connection(connection)
+ .frameReader(frameReader).inboundFlow(inboundFlow).encoder(encoder)
+ .listener(listener).lifecycleManager(this).build();
clientPrefaceString = clientPrefaceString(connection);
}
- public Http2ConnectionHandler(Http2Connection connection, Http2ConnectionDecoder decoder,
- Http2ConnectionEncoder encoder, Http2LifecycleManager lifecycleManager) {
- this.connection = checkNotNull(connection, "connection");
- this.lifecycleManager = checkNotNull(lifecycleManager, "lifecycleManager");
- this.encoder = checkNotNull(encoder, "encoder");
- this.decoder = checkNotNull(decoder, "decoder");
- clientPrefaceString = clientPrefaceString(connection);
- }
+ /**
+     * Constructor for pre-configured encoder and decoder builders. Sets {@code this} as the
+     * {@link Http2LifecycleManager} on both builders and builds them.
+ */
+ public Http2ConnectionHandler(Http2ConnectionDecoder.Builder decoderBuilder,
+ Http2ConnectionEncoder.Builder encoderBuilder) {
+ checkNotNull(decoderBuilder, "decoderBuilder");
+ checkNotNull(encoderBuilder, "encoderBuilder");
+
+        // Build the encoder.
+        encoderBuilder.lifecycleManager(this);
+        encoder = checkNotNull(encoderBuilder.build(), "encoder");
+
+        // Build the decoder.
+        decoderBuilder.lifecycleManager(this);
+        decoderBuilder.encoder(encoder);
+        decoder = checkNotNull(decoderBuilder.build(), "decoder");
+
+ // Verify that the encoder and decoder use the same connection.
+ checkNotNull(encoder.connection(), "encoder.connection");
+ checkNotNull(decoder.connection(), "decoder.connection");
+ if (encoder.connection() != decoder.connection()) {
+ throw new IllegalArgumentException("Encoder and Decoder do not share the same connection object");
+ }
- public Http2Connection connection() {
- return connection;
+ clientPrefaceString = clientPrefaceString(encoder.connection());
}
- public Http2LifecycleManager lifecycleManager() {
- return lifecycleManager;
+ public Http2Connection connection() {
+ return encoder.connection();
}
public Http2ConnectionDecoder decoder() {
@@ -108,7 +120,7 @@ public Http2ConnectionEncoder encoder() {
* Reserves local stream 1 for the HTTP/2 response.
*/
public void onHttpClientUpgrade() throws Http2Exception {
- if (connection.isServer()) {
+ if (connection().isServer()) {
throw protocolError("Client-side HTTP upgrade requested for a server");
}
if (prefaceSent || decoder.prefaceReceived()) {
@@ -116,7 +128,7 @@ public void onHttpClientUpgrade() throws Http2Exception {
}
// Create a local stream used for the HTTP cleartext upgrade.
- connection.createLocalStream(HTTP_UPGRADE_STREAM_ID, true);
+ connection().createLocalStream(HTTP_UPGRADE_STREAM_ID, true);
}
/**
@@ -124,7 +136,7 @@ public void onHttpClientUpgrade() throws Http2Exception {
* @param settings the settings for the remote endpoint.
*/
public void onHttpServerUpgrade(Http2Settings settings) throws Http2Exception {
- if (!connection.isServer()) {
+ if (!connection().isServer()) {
throw protocolError("Server-side HTTP upgrade requested for a client");
}
if (prefaceSent || decoder.prefaceReceived()) {
@@ -135,7 +147,7 @@ public void onHttpServerUpgrade(Http2Settings settings) throws Http2Exception {
encoder.remoteSettings(settings);
// Create a stream in the half-closed state.
- connection.createRemoteStream(HTTP_UPGRADE_STREAM_ID, true);
+ connection().createRemoteStream(HTTP_UPGRADE_STREAM_ID, true);
}
@Override
@@ -160,15 +172,29 @@ protected void handlerRemoved0(ChannelHandlerContext ctx) throws Exception {
@Override
public void close(ChannelHandlerContext ctx, ChannelPromise promise) throws Exception {
- lifecycleManager.close(ctx, promise);
+ // Avoid NotYetConnectedException
+ if (!ctx.channel().isActive()) {
+ ctx.close(promise);
+ return;
+ }
+
+ ChannelFuture future = writeGoAway(ctx, null);
+
+ // If there are no active streams, close immediately after the send is complete.
+ // Otherwise wait until all streams are inactive.
+ if (connection().numActiveStreams() == 0) {
+ future.addListener(new ClosingChannelFutureListener(ctx, promise));
+ } else {
+ closeListener = new ClosingChannelFutureListener(ctx, promise);
+ }
}
@Override
public void channelInactive(ChannelHandlerContext ctx) throws Exception {
ChannelFuture future = ctx.newSucceededFuture();
- final Collection<Http2Stream> streams = connection.activeStreams();
+ final Collection<Http2Stream> streams = connection().activeStreams();
for (Http2Stream s : streams.toArray(new Http2Stream[streams.size()])) {
- lifecycleManager.closeStream(s, future);
+ closeStream(s, future);
}
super.channelInactive(ctx);
}
@@ -178,14 +204,165 @@ public void channelInactive(ChannelHandlerContext ctx) throws Exception {
*/
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
- Http2Exception ex = getEmbeddedHttp2Exception(cause);
- if (ex != null) {
- lifecycleManager.onHttp2Exception(ctx, ex);
+ if (getEmbeddedHttp2Exception(cause) != null) {
+ // Some exception in the causality chain is an Http2Exception - handle it.
+ onException(ctx, cause);
} else {
super.exceptionCaught(ctx, cause);
}
}
+ /**
+ * Closes the local side of the given stream. If this causes the stream to be closed, adds a
+ * hook to close the channel after the given future completes.
+ *
+ * @param stream the stream to be half closed.
+ * @param future If closing, the future after which to close the channel.
+ */
+ public void closeLocalSide(Http2Stream stream, ChannelFuture future) {
+ switch (stream.state()) {
+ case HALF_CLOSED_LOCAL:
+ case OPEN:
+ stream.closeLocalSide();
+ break;
+ default:
+ closeStream(stream, future);
+ break;
+ }
+ }
+
+ /**
+ * Closes the remote side of the given stream. If this causes the stream to be closed, adds a
+ * hook to close the channel after the given future completes.
+ *
+ * @param stream the stream to be half closed.
+ * @param future If closing, the future after which to close the channel.
+ */
+ public void closeRemoteSide(Http2Stream stream, ChannelFuture future) {
+ switch (stream.state()) {
+ case HALF_CLOSED_REMOTE:
+ case OPEN:
+ stream.closeRemoteSide();
+ break;
+ default:
+ closeStream(stream, future);
+ break;
+ }
+ }
+
+ /**
+ * Closes the given stream and adds a hook to close the channel after the given future
+ * completes.
+ *
+ * @param stream the stream to be closed.
+ * @param future the future after which to close the channel.
+ */
+ @Override
+ public void closeStream(Http2Stream stream, ChannelFuture future) {
+ stream.close();
+
+ // If this connection is closing and there are no longer any
+ // active streams, close after the current operation completes.
+ if (closeListener != null && connection().numActiveStreams() == 0) {
+ future.addListener(closeListener);
+ }
+ }
+
+ /**
+ * Central handler for all exceptions caught during HTTP/2 processing.
+ */
+ @Override
+ public void onException(ChannelHandlerContext ctx, Throwable cause) {
+ Http2Exception embedded = getEmbeddedHttp2Exception(cause);
+ if (embedded instanceof Http2StreamException) {
+ onStreamError(ctx, cause, (Http2StreamException) embedded);
+ } else {
+ onConnectionError(ctx, cause, embedded);
+ }
+ }
+
+ /**
+ * Handler for a connection error. Sends a GO_AWAY frame to the remote endpoint. Once all
+ * streams are closed, the connection is shut down.
+ *
+ * @param ctx the channel context
+ * @param cause the exception that was caught
+ * @param http2Ex the {@link Http2Exception} that is embedded in the causality chain. This may
+ * be {@code null} if it's an unknown exception.
+ */
+ protected void onConnectionError(ChannelHandlerContext ctx, Throwable cause, Http2Exception http2Ex) {
+ if (http2Ex == null) {
+ http2Ex = new Http2Exception(INTERNAL_ERROR, cause.getMessage(), cause);
+ }
+ writeGoAway(ctx, http2Ex).addListener(new ClosingChannelFutureListener(ctx, ctx.newPromise()));
+ }
+
+ /**
+ * Handler for a stream error. Sends a {@code RST_STREAM} frame to the remote endpoint and closes the
+ * stream.
+ *
+ * @param ctx the channel context
+ * @param cause the exception that was caught
+ * @param http2Ex the {@link Http2StreamException} that is embedded in the causality chain.
+ */
+ protected void onStreamError(ChannelHandlerContext ctx, Throwable cause, Http2StreamException http2Ex) {
+ writeRstStream(ctx, http2Ex.streamId(), http2Ex.error().code(), ctx.newPromise());
+ }
+
+ protected Http2FrameWriter frameWriter() {
+ return encoder().frameWriter();
+ }
+
+ /**
+ * Writes a {@code RST_STREAM} frame to the remote endpoint and updates the connection state appropriately.
+ */
+ public ChannelFuture writeRstStream(ChannelHandlerContext ctx, int streamId, long errorCode,
+ ChannelPromise promise) {
+ Http2Stream stream = connection().stream(streamId);
+ ChannelFuture future = frameWriter().writeRstStream(ctx, streamId, errorCode, promise);
+ ctx.flush();
+
+ if (stream != null) {
+ stream.terminateSent();
+ closeStream(stream, promise);
+ }
+
+ return future;
+ }
+
+ /**
+ * Sends a {@code GO_AWAY} frame to the remote endpoint and updates the connection state appropriately.
+ */
+ public ChannelFuture writeGoAway(ChannelHandlerContext ctx, int lastStreamId, long errorCode, ByteBuf debugData,
+ ChannelPromise promise) {
+ if (connection().isGoAway()) {
+ debugData.release();
+ return ctx.newSucceededFuture();
+ }
+
+ ChannelFuture future = frameWriter().writeGoAway(ctx, lastStreamId, errorCode, debugData, promise);
+ ctx.flush();
+
+ connection().remote().goAwayReceived(lastStreamId);
+ return future;
+ }
+
+ /**
+ * Sends a {@code GO_AWAY} frame appropriate for the given exception.
+ */
+ private ChannelFuture writeGoAway(ChannelHandlerContext ctx, Http2Exception cause) {
+ if (connection().isGoAway()) {
+ return ctx.newSucceededFuture();
+ }
+
+        // The connection isn't already going away, send the GO_AWAY frame now to start
+ // the process.
+ int errorCode = cause != null ? cause.error().code() : NO_ERROR.code();
+ ByteBuf debugData = Http2CodecUtil.toByteBuf(ctx, cause);
+ int lastKnownStream = connection().remote().lastStreamCreated();
+ return writeGoAway(ctx, lastKnownStream, errorCode, debugData, ctx.newPromise());
+ }
+
@Override
protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) throws Exception {
try {
@@ -197,10 +374,8 @@ protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) t
}
decoder.decodeFrame(ctx, in, out);
- } catch (Http2Exception e) {
- lifecycleManager.onHttp2Exception(ctx, e);
} catch (Throwable e) {
- lifecycleManager.onHttp2Exception(ctx, new Http2Exception(Http2Error.INTERNAL_ERROR, e.getMessage(), e));
+ onException(ctx, e);
}
}
@@ -214,7 +389,7 @@ private void sendPreface(final ChannelHandlerContext ctx) {
prefaceSent = true;
- if (!connection.isServer()) {
+ if (!connection().isServer()) {
// Clients must send the preface string as the first bytes on the connection.
ctx.write(connectionPrefaceBuf()).addListener(ChannelFutureListener.CLOSE_ON_FAILURE);
}
@@ -276,4 +451,22 @@ private boolean readClientPrefaceString(ChannelHandlerContext ctx, ByteBuf in) t
private static ByteBuf clientPrefaceString(Http2Connection connection) {
return connection.isServer() ? connectionPrefaceBuf() : null;
}
+
+ /**
+ * Closes the channel when the future completes.
+ */
+ private static final class ClosingChannelFutureListener implements ChannelFutureListener {
+ private final ChannelHandlerContext ctx;
+ private final ChannelPromise promise;
+
+ ClosingChannelFutureListener(ChannelHandlerContext ctx, ChannelPromise promise) {
+ this.ctx = ctx;
+ this.promise = promise;
+ }
+
+ @Override
+ public void operationComplete(ChannelFuture sentGoAwayFuture) throws Exception {
+ ctx.close(promise);
+ }
+ }
}
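
The new constructors above wire the handler into the encoder and decoder through fluent builders that are validated in `build()`. As a minimal, self-contained sketch of that shape (the `Connection`, `FrameWriter`, `Encoder`, and `Builder` names here are stand-ins for illustration, not the real Netty classes):

```java
// Sketch of the fluent-builder pattern used by DefaultHttp2ConnectionEncoder:
// each setter returns the builder, and build() validates required fields.
public class EncoderBuilderSketch {
    static class Connection {}
    static class FrameWriter {}

    static final class Encoder {
        final Connection connection;
        final FrameWriter frameWriter;

        Encoder(Builder b) {
            // Mirrors the checkNotNull calls in the real constructor.
            if (b.connection == null || b.frameWriter == null) {
                throw new NullPointerException("builder not fully configured");
            }
            connection = b.connection;
            frameWriter = b.frameWriter;
        }

        Connection connection() { return connection; }
    }

    static final class Builder {
        Connection connection;
        FrameWriter frameWriter;

        Builder connection(Connection c) { this.connection = c; return this; }
        Builder frameWriter(FrameWriter w) { this.frameWriter = w; return this; }
        Encoder build() { return new Encoder(this); }
    }

    public static void main(String[] args) {
        Connection conn = new Connection();
        Encoder encoder = new Builder().connection(conn)
                .frameWriter(new FrameWriter()).build();
        // The handler can later verify both sides share one connection object.
        System.out.println(encoder.connection() == conn);
    }
}
```

Because `build()` validates its inputs, all setters must be called before building — which is why the constructor ordering in `Http2ConnectionHandler` matters.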
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2LifecycleManager.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2LifecycleManager.java
index d0eeef66344..ff4cd59aa88 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2LifecycleManager.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2LifecycleManager.java
@@ -14,11 +14,8 @@
*/
package io.netty.handler.codec.http2;
-import static io.netty.handler.codec.http2.Http2Error.NO_ERROR;
-import static io.netty.util.internal.ObjectUtil.checkNotNull;
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelFuture;
-import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelPromise;
@@ -26,37 +23,7 @@
* Manager for the life cycle of the HTTP/2 connection. Handles graceful shutdown of the channel,
* closing only after all of the streams have closed.
*/
-public class Http2LifecycleManager {
-
- private final Http2Connection connection;
- private final Http2FrameWriter frameWriter;
- private ChannelFutureListener closeListener;
-
- public Http2LifecycleManager(Http2Connection connection, Http2FrameWriter frameWriter) {
- this.connection = checkNotNull(connection, "connection");
- this.frameWriter = checkNotNull(frameWriter, "frameWriter");
- }
-
- /**
- * Handles the close processing on behalf of the {@link DelegatingHttp2ConnectionHandler}.
- */
- public void close(ChannelHandlerContext ctx, ChannelPromise promise) throws Exception {
- // Avoid NotYetConnectedException
- if (!ctx.channel().isActive()) {
- ctx.close(promise);
- return;
- }
-
- ChannelFuture future = writeGoAway(ctx, null);
-
- // If there are no active streams, close immediately after the send is complete.
- // Otherwise wait until all streams are inactive.
- if (connection.numActiveStreams() == 0) {
- future.addListener(new ClosingChannelFutureListener(ctx, promise));
- } else {
- closeListener = new ClosingChannelFutureListener(ctx, promise);
- }
- }
+public interface Http2LifecycleManager {
/**
      * Closes the local side of the given stream. If this causes the stream to be closed, adds a
@@ -65,17 +32,7 @@ public void close(ChannelHandlerContext ctx, ChannelPromise promise) throws Exce
* @param stream the stream to be half closed.
* @param future If closing, the future after which to close the channel.
*/
- public void closeLocalSide(Http2Stream stream, ChannelFuture future) {
- switch (stream.state()) {
- case HALF_CLOSED_LOCAL:
- case OPEN:
- stream.closeLocalSide();
- break;
- default:
- closeStream(stream, future);
- break;
- }
- }
+ void closeLocalSide(Http2Stream stream, ChannelFuture future);
/**
* Closes the remote side of the given stream. If this causes the stream to be closed, adds a
@@ -84,17 +41,7 @@ public void closeLocalSide(Http2Stream stream, ChannelFuture future) {
* @param stream the stream to be half closed.
* @param future If closing, the future after which to close the channel.
*/
- public void closeRemoteSide(Http2Stream stream, ChannelFuture future) {
- switch (stream.state()) {
- case HALF_CLOSED_REMOTE:
- case OPEN:
- stream.closeRemoteSide();
- break;
- default:
- closeStream(stream, future);
- break;
- }
- }
+ void closeRemoteSide(Http2Stream stream, ChannelFuture future);
/**
* Closes the given stream and adds a hook to close the channel after the given future
@@ -103,109 +50,24 @@ public void closeRemoteSide(Http2Stream stream, ChannelFuture future) {
* @param stream the stream to be closed.
* @param future the future after which to close the channel.
*/
- public void closeStream(Http2Stream stream, ChannelFuture future) {
- stream.close();
-
- // If this connection is closing and there are no longer any
- // active streams, close after the current operation completes.
- if (closeListener != null && connection.numActiveStreams() == 0) {
- future.addListener(closeListener);
- }
- }
-
- /**
- * Processes the given exception. Depending on the type of exception, delegates to either
- * {@link #onConnectionError(ChannelHandlerContext, Http2Exception)} or
- * {@link #onStreamError(ChannelHandlerContext, Http2StreamException)}.
- */
- public void onHttp2Exception(ChannelHandlerContext ctx, Http2Exception e) {
- if (e instanceof Http2StreamException) {
- onStreamError(ctx, (Http2StreamException) e);
- } else {
- onConnectionError(ctx, e);
- }
- }
+ void closeStream(Http2Stream stream, ChannelFuture future);
/**
- * Handler for a connection error. Sends a GO_AWAY frame to the remote endpoint and waits until
- * all streams are closed before shutting down the connection.
+ * Writes a {@code RST_STREAM} frame to the remote endpoint and updates the connection state
+ * appropriately.
*/
- private void onConnectionError(ChannelHandlerContext ctx, Http2Exception cause) {
- writeGoAway(ctx, cause).addListener(new ClosingChannelFutureListener(ctx, ctx.newPromise()));
- }
+ ChannelFuture writeRstStream(ChannelHandlerContext ctx, int streamId, long errorCode,
+ ChannelPromise promise);
/**
- * Handler for a stream error. Sends a RST_STREAM frame to the remote endpoint and closes the stream.
+ * Sends a {@code GO_AWAY} frame to the remote endpoint and updates the connection state
+ * appropriately.
*/
- private void onStreamError(ChannelHandlerContext ctx, Http2StreamException cause) {
- writeRstStream(ctx, cause.streamId(), cause.error().code(), ctx.newPromise());
- }
+ ChannelFuture writeGoAway(ChannelHandlerContext ctx, int lastStreamId, long errorCode,
+ ByteBuf debugData, ChannelPromise promise);
/**
- * Writes a RST_STREAM frame to the remote endpoint and updates the connection state appropriately.
+ * Processes the given exception.
*/
- public ChannelFuture writeRstStream(ChannelHandlerContext ctx, int streamId, long errorCode,
- ChannelPromise promise) {
- Http2Stream stream = connection.stream(streamId);
- ChannelFuture future = frameWriter.writeRstStream(ctx, streamId, errorCode, promise);
- ctx.flush();
-
- if (stream != null) {
- stream.terminateSent();
- closeStream(stream, promise);
- }
-
- return future;
- }
-
- /**
- * Sends a {@code GO_AWAY} frame to the remote endpoint and updates the connection state appropriately.
- */
- public ChannelFuture writeGoAway(ChannelHandlerContext ctx, int lastStreamId, long errorCode, ByteBuf debugData,
- ChannelPromise promise) {
- if (connection.isGoAway()) {
- debugData.release();
- return ctx.newSucceededFuture();
- }
-
- ChannelFuture future = frameWriter.writeGoAway(ctx, lastStreamId, errorCode, debugData, promise);
- ctx.flush();
-
- connection.remote().goAwayReceived(lastStreamId);
- return future;
- }
-
- /**
- * Sends a GO_AWAY frame appropriate for the given exception.
- */
- private ChannelFuture writeGoAway(ChannelHandlerContext ctx, Http2Exception cause) {
- if (connection.isGoAway()) {
- return ctx.newSucceededFuture();
- }
-
- // The connection isn't alredy going away, send the GO_AWAY frame now to start
- // the process.
- int errorCode = cause != null ? cause.error().code() : NO_ERROR.code();
- ByteBuf debugData = Http2CodecUtil.toByteBuf(ctx, cause);
- int lastKnownStream = connection.remote().lastStreamCreated();
- return writeGoAway(ctx, lastKnownStream, errorCode, debugData, ctx.newPromise());
- }
-
- /**
- * Closes the channel when the future completes.
- */
- private static final class ClosingChannelFutureListener implements ChannelFutureListener {
- private final ChannelHandlerContext ctx;
- private final ChannelPromise promise;
-
- ClosingChannelFutureListener(ChannelHandlerContext ctx, ChannelPromise promise) {
- this.ctx = ctx;
- this.promise = promise;
- }
-
- @Override
- public void operationComplete(ChannelFuture sentGoAwayFuture) throws Exception {
- ctx.close(promise);
- }
- }
+ void onException(ChannelHandlerContext ctx, Throwable cause);
}
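
The graceful-shutdown logic removed from `Http2LifecycleManager` now lives in `Http2ConnectionHandler.close()`: if streams are still active, the handler stores a close listener and only closes the channel once the last stream goes inactive. A hedged, self-contained sketch of that deferral (all types here are simplified stand-ins):

```java
// Sketch of deferred channel close: close immediately when idle,
// otherwise remember a listener and fire it when the stream count drains.
public class GracefulCloseSketch {
    interface Listener { void run(); }

    int activeStreams;
    Listener closeListener;
    boolean channelClosed;

    void close() {
        if (activeStreams == 0) {
            channelClosed = true;                        // nothing in flight: close now
        } else {
            closeListener = () -> channelClosed = true;  // defer until streams drain
        }
    }

    void streamClosed() {
        activeStreams--;
        // Same condition as closeStream(): pending close + no active streams.
        if (closeListener != null && activeStreams == 0) {
            closeListener.run();
        }
    }

    public static void main(String[] args) {
        GracefulCloseSketch h = new GracefulCloseSketch();
        h.activeStreams = 2;
        h.close();
        System.out.println(h.channelClosed); // streams still active, channel stays open
        h.streamClosed();
        h.streamClosed();
        System.out.println(h.channelClosed); // last stream closed, deferred close fires
    }
}
```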
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2StreamException.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2StreamException.java
index ae5f1ad1854..03efb25f766 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2StreamException.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2StreamException.java
@@ -25,6 +25,11 @@ public Http2StreamException(int streamId, Http2Error error, String message) {
this.streamId = streamId;
}
+ public Http2StreamException(int streamId, Http2Error error, String message, Throwable cause) {
+ super(error, message, cause);
+ this.streamId = streamId;
+ }
+
public Http2StreamException(int streamId, Http2Error error) {
super(error);
this.streamId = streamId;
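
With the new cause-carrying `Http2StreamException` constructor, `Http2ConnectionHandler.onException` can walk an arbitrary causality chain, find the embedded HTTP/2 exception, and route stream errors to `RST_STREAM` handling and everything else to `GO_AWAY` handling. A simplified sketch of that routing (`ConnError`/`StreamError` are hypothetical stand-ins, not the real exception classes):

```java
// Sketch of getEmbeddedHttp2Exception + onException routing:
// find the first HTTP/2-style cause in the chain, then pick the error path.
public class ExceptionRoutingSketch {
    static class ConnError extends RuntimeException {
        ConnError(String msg, Throwable cause) { super(msg, cause); }
    }

    static class StreamError extends ConnError {
        final int streamId;
        StreamError(int streamId, String msg) { super(msg, null); this.streamId = streamId; }
    }

    // Mirrors getEmbeddedHttp2Exception: first matching cause in the chain, or null.
    static ConnError embedded(Throwable cause) {
        while (cause != null) {
            if (cause instanceof ConnError) {
                return (ConnError) cause;
            }
            cause = cause.getCause();
        }
        return null;
    }

    static String route(Throwable cause) {
        ConnError e = embedded(cause);
        if (e instanceof StreamError) {
            return "RST_STREAM " + ((StreamError) e).streamId; // stream error: reset one stream
        }
        return "GO_AWAY"; // connection error, or unknown cause treated as one
    }

    public static void main(String[] args) {
        System.out.println(route(new RuntimeException(new StreamError(3, "bad frame"))));
        System.out.println(route(new IllegalStateException("boom")));
    }
}
```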
| diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoderTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoderTest.java
index d9c671b5c9b..9de6880a0e7 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoderTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoderTest.java
@@ -61,7 +61,7 @@ public class DefaultHttp2ConnectionDecoderTest {
private static final int STREAM_ID = 1;
private static final int PUSH_STREAM_ID = 2;
- private DefaultHttp2ConnectionDecoder decoder;
+ private Http2ConnectionDecoder decoder;
@Mock
private Http2Connection connection;
@@ -143,9 +143,9 @@ public Http2Stream answer(InvocationOnMock invocation) throws Throwable {
when(ctx.newPromise()).thenReturn(promise);
when(ctx.write(any())).thenReturn(future);
- decoder =
- new DefaultHttp2ConnectionDecoder(connection, reader, inboundFlow, encoder,
- lifecycleManager, listener);
+ decoder = DefaultHttp2ConnectionDecoder.newBuilder().connection(connection)
+ .frameReader(reader).inboundFlow(inboundFlow).encoder(encoder)
+ .listener(listener).lifecycleManager(lifecycleManager).build();
// Simulate receiving the initial settings from the remote endpoint.
decode().onSettingsRead(ctx, new Http2Settings());
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoderTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoderTest.java
index 8c6cd33e524..a4b3097ff73 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoderTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoderTest.java
@@ -60,7 +60,7 @@ public class DefaultHttp2ConnectionEncoderTest {
private static final int STREAM_ID = 1;
private static final int PUSH_STREAM_ID = 2;
- private DefaultHttp2ConnectionEncoder encoder;
+ private Http2ConnectionEncoder encoder;
@Mock
private Http2Connection connection;
@@ -143,7 +143,9 @@ public Http2Stream answer(InvocationOnMock invocation) throws Throwable {
when(ctx.newPromise()).thenReturn(promise);
when(ctx.write(any())).thenReturn(future);
- encoder = new DefaultHttp2ConnectionEncoder(connection, writer, outboundFlow, lifecycleManager);
+ encoder = DefaultHttp2ConnectionEncoder.newBuilder().connection(connection)
+ .frameWriter(writer).outboundFlow(outboundFlow)
+ .lifecycleManager(lifecycleManager).build();
}
@Test
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionHandlerTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionHandlerTest.java
index 6ac91e074d0..e6c749eee73 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionHandlerTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionHandlerTest.java
@@ -41,6 +41,7 @@
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
+import org.mockito.ArgumentCaptor;
import org.mockito.Matchers;
import org.mockito.Mock;
import org.mockito.MockitoAnnotations;
@@ -79,7 +80,10 @@ public class Http2ConnectionHandlerTest {
private Http2Stream stream;
@Mock
- private Http2LifecycleManager lifecycleManager;
+ private Http2ConnectionDecoder.Builder decoderBuilder;
+
+ @Mock
+ private Http2ConnectionEncoder.Builder encoderBuilder;
@Mock
private Http2ConnectionDecoder decoder;
@@ -87,13 +91,24 @@ public class Http2ConnectionHandlerTest {
@Mock
private Http2ConnectionEncoder encoder;
+ @Mock
+ private Http2FrameWriter frameWriter;
+
@Before
public void setup() throws Exception {
MockitoAnnotations.initMocks(this);
promise = new DefaultChannelPromise(channel);
+ when(encoderBuilder.build()).thenReturn(encoder);
+ when(decoderBuilder.build()).thenReturn(decoder);
+ when(encoder.connection()).thenReturn(connection);
+ when(decoder.connection()).thenReturn(connection);
+ when(encoder.frameWriter()).thenReturn(frameWriter);
+ when(frameWriter.writeGoAway(eq(ctx), anyInt(), anyInt(), any(ByteBuf.class), eq(promise))).thenReturn(future);
when(channel.isActive()).thenReturn(true);
+ when(connection.remote()).thenReturn(remote);
+ when(connection.local()).thenReturn(local);
when(connection.activeStreams()).thenReturn(Collections.singletonList(stream));
doAnswer(new Answer<Http2Stream>() {
@Override
@@ -120,7 +135,7 @@ public Http2Stream answer(InvocationOnMock invocation) throws Throwable {
}
private Http2ConnectionHandler newHandler() {
- return new Http2ConnectionHandler(connection, decoder, encoder, lifecycleManager);
+ return new Http2ConnectionHandler(decoderBuilder, encoderBuilder);
}
@After
@@ -147,7 +162,10 @@ public void serverReceivingInvalidClientPrefaceStringShouldHandleException() thr
when(connection.isServer()).thenReturn(true);
handler = newHandler();
handler.channelRead(ctx, copiedBuffer("BAD_PREFACE", UTF_8));
- verify(lifecycleManager).onHttp2Exception(eq(ctx), any(Http2Exception.class));
+ ArgumentCaptor<ByteBuf> captor = ArgumentCaptor.forClass(ByteBuf.class);
+ verify(frameWriter).writeGoAway(eq(ctx), eq(0), eq((long) PROTOCOL_ERROR.code()),
+ captor.capture(), eq(promise));
+ captor.getValue().release();
}
@Test
@@ -157,23 +175,20 @@ public void serverReceivingValidClientPrefaceStringShouldContinueReadingFrames()
verify(decoder).decodeFrame(eq(ctx), any(ByteBuf.class), Matchers.<List<Object>>any());
}
- @Test
- public void closeShouldCallLifecycleManager() throws Exception {
- handler.close(ctx, promise);
- verify(lifecycleManager).close(eq(ctx), eq(promise));
- }
-
@Test
public void channelInactiveShouldCloseStreams() throws Exception {
handler.channelInactive(ctx);
- verify(lifecycleManager).closeStream(eq(stream), eq(future));
+ verify(stream).close();
}
@Test
- public void http2ExceptionShouldCallLifecycleManager() throws Exception {
+ public void connectionErrorShouldStartShutdown() throws Exception {
Http2Exception e = new Http2Exception(PROTOCOL_ERROR);
when(remote.lastStreamCreated()).thenReturn(STREAM_ID);
handler.exceptionCaught(ctx, e);
- verify(lifecycleManager).onHttp2Exception(eq(ctx), eq(e));
+ ArgumentCaptor<ByteBuf> captor = ArgumentCaptor.forClass(ByteBuf.class);
+ verify(frameWriter).writeGoAway(eq(ctx), eq(STREAM_ID), eq((long) PROTOCOL_ERROR.code()),
+ captor.capture(), eq(promise));
+ captor.getValue().release();
}
}
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionRoundtripTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionRoundtripTest.java
index e89a699b075..6aaf1682938 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionRoundtripTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionRoundtripTest.java
@@ -28,6 +28,7 @@
import static org.mockito.Matchers.anyInt;
import static org.mockito.Matchers.eq;
import static org.mockito.Mockito.doAnswer;
+import static org.mockito.Mockito.doThrow;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;
import io.netty.bootstrap.Bootstrap;
@@ -142,6 +143,41 @@ public void handlerAdded(ChannelHandlerContext ctx) throws Exception {
assertFalse(clientChannel.isOpen());
}
+ @Test
+ public void listenerExceptionShouldCloseConnection() throws Exception {
+ final Http2Headers headers = dummyHeaders();
+ doThrow(new RuntimeException("Fake Exception")).when(serverListener).onHeadersRead(
+ any(ChannelHandlerContext.class), eq(3), eq(headers), eq(0), eq((short) 16),
+ eq(false), eq(0), eq(false));
+
+ bootstrapEnv(1, 1);
+
+ // Create a latch to track when the close occurs.
+ final CountDownLatch closeLatch = new CountDownLatch(1);
+ clientChannel.closeFuture().addListener(new ChannelFutureListener() {
+ @Override
+ public void operationComplete(ChannelFuture future) throws Exception {
+ closeLatch.countDown();
+ }
+ });
+
+ // Create a single stream by sending a HEADERS frame to the server.
+ runInChannel(clientChannel, new Http2Runnable() {
+ @Override
+ public void run() {
+ http2Client.encoder().writeHeaders(ctx(), 3, headers, 0, (short) 16, false, 0, false,
+ newPromise());
+ }
+ });
+
+ // Wait for the server to create the stream.
+ assertTrue(requestLatch.await(5, TimeUnit.SECONDS));
+
+ // Wait for the close to occur.
+ assertTrue(closeLatch.await(5, TimeUnit.SECONDS));
+ assertFalse(clientChannel.isOpen());
+ }
+
@Test
public void nonHttp2ExceptionInPipelineShouldNotCloseConnection() throws Exception {
bootstrapEnv(1, 1);
| test | train | 2014-10-05T04:12:49 | 2014-09-23T02:19:36Z | Scottmitch | val |
netty/netty/2986_2991 | netty/netty | netty/netty/2986 | netty/netty/2991 | [
"timestamp(timedelta=119735.0, similarity=0.9411294219327737)"
] | ce817e0d309c69e15d7773c52468838a6e6849b2 | 1db5b4b63005e443f2e26c26190781b1ca6b8d08 | [
"@nmittler I think @normanmaurer cherry-picked and closed your PR. Can this be closed?\n",
"I think so :)\n\n> Am 14.10.2014 um 22:39 schrieb Scott Mitchell notifications@github.com:\n> \n> @nmittler I think @normanmaurer cherry-picked and closed your PR. Can this be closed?\n> \n> —\n> Reply to this email direc... | [] | 2014-10-09T23:15:31Z | [
"cleanup"
] | Cleanup HTTP/2 GO_AWAY methods | Right now the determination of whether GO_AWAY was sent or received is a little confusing since each endpoint has a goAwayReceived() method. This means that to determine if GO_AWAY was sent we would have to call connection.remote().goAwayReceived() which is a little clumsy and strange.
It would probably be better to just move the GO_AWAY methods back to first-class citizens of the connection, so we'd have goAwaySent() and goAwayReceived() ... this should make it much more clear going forward.
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Connection.java",
"codec-http2/src/main/java/io/netty/handler/codec/ht... | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Connection.java",
"codec-http2/src/main/java/io/netty/handler/codec/ht... | [
"codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoderTest.java",
"codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionTest.java"
] | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
index c69cc5bec21..043c2b8d9d1 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
@@ -149,7 +149,7 @@ public Endpoint remote() {
@Override
public boolean isGoAway() {
- return localEndpoint.isGoAwayReceived() || remoteEndpoint.isGoAwayReceived();
+ return goAwaySent() || goAwayReceived();
}
@Override
@@ -162,6 +162,26 @@ public Http2Stream createRemoteStream(int streamId, boolean halfClosed) throws H
return remote().createStream(streamId, halfClosed);
}
+ @Override
+ public boolean goAwayReceived() {
+ return localEndpoint.lastKnownStream >= 0;
+ }
+
+ @Override
+ public void goAwayReceived(int lastKnownStream) {
+ localEndpoint.lastKnownStream(lastKnownStream);
+ }
+
+ @Override
+ public boolean goAwaySent() {
+ return remoteEndpoint.lastKnownStream >= 0;
+ }
+
+ @Override
+ public void goAwaySent(int lastKnownStream) {
+ remoteEndpoint.lastKnownStream(lastKnownStream);
+ }
+
private void removeStream(DefaultStream stream) {
// Notify the listeners of the event first.
for (Listener listener : listeners) {
@@ -791,16 +811,10 @@ public int lastStreamCreated() {
@Override
public int lastKnownStream() {
- return isGoAwayReceived() ? lastKnownStream : lastStreamCreated;
+ return lastKnownStream >= 0 ? lastKnownStream : lastStreamCreated;
}
- @Override
- public boolean isGoAwayReceived() {
- return lastKnownStream >= 0;
- }
-
- @Override
- public void goAwayReceived(int lastKnownStream) {
+ private void lastKnownStream(int lastKnownStream) {
boolean alreadyNotified = isGoAway();
this.lastKnownStream = lastKnownStream;
if (!alreadyNotified) {
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java
index ee4c0da8acd..d27fc6dc5fe 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java
@@ -249,7 +249,7 @@ public void onHeadersRead(ChannelHandlerContext ctx, int streamId, Http2Headers
Http2Stream stream = connection.stream(streamId);
verifyGoAwayNotReceived();
verifyRstStreamNotReceived(stream);
- if (connection.remote().isGoAwayReceived() || stream != null && shouldIgnoreFrame(stream)) {
+ if (connection.goAwaySent() || stream != null && shouldIgnoreFrame(stream)) {
// Ignore this frame.
return;
}
@@ -426,7 +426,7 @@ public void onPushPromiseRead(ChannelHandlerContext ctx, int streamId, int promi
public void onGoAwayRead(ChannelHandlerContext ctx, int lastStreamId, long errorCode, ByteBuf debugData)
throws Http2Exception {
// Don't allow any more connections to be created.
- connection.local().goAwayReceived(lastStreamId);
+ connection.goAwayReceived(lastStreamId);
listener.onGoAwayRead(ctx, lastStreamId, errorCode, debugData);
}
@@ -461,7 +461,7 @@ public void onUnknownFrame(ChannelHandlerContext ctx, byte frameType, int stream
* stream/connection.
*/
private boolean shouldIgnoreFrame(Http2Stream stream) {
- if (connection.remote().isGoAwayReceived() && connection.remote().lastStreamCreated() <= stream.id()) {
+ if (connection.goAwaySent() && connection.remote().lastStreamCreated() <= stream.id()) {
// Frames from streams created after we sent a go-away should be ignored.
// Frames for the connection stream ID (i.e. 0) will always be allowed.
return true;
@@ -476,7 +476,7 @@ private boolean shouldIgnoreFrame(Http2Stream stream) {
* exception.
*/
private void verifyGoAwayNotReceived() throws Http2Exception {
- if (connection.local().isGoAwayReceived()) {
+ if (connection.goAwayReceived()) {
throw protocolError("Received frames after receiving GO_AWAY");
}
}
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Connection.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Connection.java
index 8d77c3ad132..45985304db8 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Connection.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Connection.java
@@ -186,17 +186,6 @@ interface Endpoint {
*/
int lastKnownStream();
- /**
- * Indicates whether or not a GOAWAY was received by this endpoint.
- */
- boolean isGoAwayReceived();
-
- /**
- * Indicates that a GOAWAY was received from the opposite endpoint and sets the last known stream
- * created by this endpoint.
- */
- void goAwayReceived(int lastKnownStream);
-
/**
* Gets the {@link Endpoint} opposite this one.
*/
@@ -265,6 +254,26 @@ interface Endpoint {
*/
Http2Stream createRemoteStream(int streamId, boolean halfClosed) throws Http2Exception;
+ /**
+ * Indicates whether or not a {@code GOAWAY} was received from the remote endpoint.
+ */
+ boolean goAwayReceived();
+
+ /**
+ * Indicates that a {@code GOAWAY} was received from the remote endpoint and sets the last known stream.
+ */
+ void goAwayReceived(int lastKnownStream);
+
+ /**
+ * Indicates whether or not a {@code GOAWAY} was sent to the remote endpoint.
+ */
+ boolean goAwaySent();
+
+ /**
+ * Indicates that a {@code GOAWAY} was sent to the remote endpoint and sets the last known stream.
+ */
+ void goAwaySent(int lastKnownStream);
+
/**
* Indicates whether or not either endpoint has received a GOAWAY.
*/
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java
index ce22ce2ffb2..3bfa9aa769b 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java
@@ -343,7 +343,7 @@ public ChannelFuture writeGoAway(ChannelHandlerContext ctx, int lastStreamId, lo
ChannelFuture future = frameWriter().writeGoAway(ctx, lastStreamId, errorCode, debugData, promise);
ctx.flush();
- connection().remote().goAwayReceived(lastStreamId);
+ connection().goAwaySent(lastStreamId);
return future;
}
| diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoderTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoderTest.java
index 9de6880a0e7..b660542aadf 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoderTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoderTest.java
@@ -159,7 +159,7 @@ public Http2Stream answer(InvocationOnMock invocation) throws Throwable {
@Test
public void dataReadAfterGoAwayShouldApplyFlowControl() throws Exception {
- when(remote.isGoAwayReceived()).thenReturn(true);
+ when(connection.goAwaySent()).thenReturn(true);
final ByteBuf data = dummyData();
try {
decode().onDataRead(ctx, STREAM_ID, data, 10, true);
@@ -187,7 +187,7 @@ public void dataReadWithEndOfStreamShouldCloseRemoteSide() throws Exception {
@Test
public void headersReadAfterGoAwayShouldBeIgnored() throws Exception {
- when(remote.isGoAwayReceived()).thenReturn(true);
+ when(connection.goAwaySent()).thenReturn(true);
decode().onHeadersRead(ctx, STREAM_ID, EmptyHttp2Headers.INSTANCE, 0, false);
verify(remote, never()).createStream(eq(STREAM_ID), eq(false));
@@ -235,7 +235,7 @@ public void headersReadForPromisedStreamShouldCloseStream() throws Exception {
@Test
public void pushPromiseReadAfterGoAwayShouldBeIgnored() throws Exception {
- when(remote.isGoAwayReceived()).thenReturn(true);
+ when(connection.goAwaySent()).thenReturn(true);
decode().onPushPromiseRead(ctx, STREAM_ID, PUSH_STREAM_ID, EmptyHttp2Headers.INSTANCE, 0);
verify(remote, never()).reservePushStream(anyInt(), any(Http2Stream.class));
verify(listener, never()).onPushPromiseRead(eq(ctx), anyInt(), anyInt(), any(Http2Headers.class), anyInt());
@@ -251,7 +251,7 @@ public void pushPromiseReadShouldSucceed() throws Exception {
@Test
public void priorityReadAfterGoAwayShouldBeIgnored() throws Exception {
- when(remote.isGoAwayReceived()).thenReturn(true);
+ when(connection.goAwaySent()).thenReturn(true);
decode().onPriorityRead(ctx, STREAM_ID, 0, (short) 255, true);
verify(stream, never()).setPriority(anyInt(), anyShort(), anyBoolean());
verify(listener, never()).onPriorityRead(eq(ctx), anyInt(), anyInt(), anyShort(), anyBoolean());
@@ -266,7 +266,7 @@ public void priorityReadShouldSucceed() throws Exception {
@Test
public void windowUpdateReadAfterGoAwayShouldBeIgnored() throws Exception {
- when(remote.isGoAwayReceived()).thenReturn(true);
+ when(connection.goAwaySent()).thenReturn(true);
decode().onWindowUpdateRead(ctx, STREAM_ID, 10);
verify(encoder, never()).updateOutboundWindowSize(anyInt(), anyInt());
verify(listener, never()).onWindowUpdateRead(eq(ctx), anyInt(), anyInt());
@@ -287,7 +287,7 @@ public void windowUpdateReadShouldSucceed() throws Exception {
@Test
public void rstStreamReadAfterGoAwayShouldSucceed() throws Exception {
- when(remote.isGoAwayReceived()).thenReturn(true);
+ when(connection.goAwaySent()).thenReturn(true);
decode().onRstStreamRead(ctx, STREAM_ID, PROTOCOL_ERROR.code());
verify(lifecycleManager).closeStream(eq(stream), eq(future));
verify(listener).onRstStreamRead(eq(ctx), anyInt(), anyLong());
@@ -342,7 +342,7 @@ public void settingsReadShouldSetValues() throws Exception {
@Test
public void goAwayShouldReadShouldUpdateConnectionState() throws Exception {
decode().onGoAwayRead(ctx, 1, 2L, EMPTY_BUFFER);
- verify(local).goAwayReceived(1);
+ verify(connection).goAwayReceived(1);
verify(listener).onGoAwayRead(eq(ctx), eq(1), eq(2L), eq(EMPTY_BUFFER));
}
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionTest.java
index 46596cd3f96..e64544591b1 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionTest.java
@@ -174,7 +174,7 @@ public void reserveWithPushDisallowedShouldThrow() throws Http2Exception {
@Test(expected = Http2Exception.class)
public void goAwayReceivedShouldDisallowCreation() throws Http2Exception {
- server.local().goAwayReceived(0);
+ server.goAwayReceived(0);
server.remote().createStream(3, true);
}
| train | train | 2014-10-10T00:13:57 | 2014-10-09T16:18:29Z | nmittler | val |
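The `goAwaySent()`/`goAwayReceived()` shape adopted in the gold patch above can be sketched as a standalone model (hypothetical class, not Netty's actual `DefaultHttp2Connection`): the connection, rather than each endpoint, records the last known stream id for GOAWAY in each direction, with a negative value meaning "no GOAWAY yet".

```java
// Standalone model of the GOAWAY bookkeeping from the patch above
// (hypothetical class; Netty's real implementation lives in
// DefaultHttp2Connection). A last-known-stream id < 0 means no GOAWAY
// has been sent/received in that direction yet.
class GoAwayState {
    private int lastStreamSent = -1;
    private int lastStreamReceived = -1;

    void goAwaySent(int lastKnownStream) {
        lastStreamSent = lastKnownStream;
    }

    void goAwayReceived(int lastKnownStream) {
        lastStreamReceived = lastKnownStream;
    }

    boolean goAwaySent() {
        return lastStreamSent >= 0;
    }

    boolean goAwayReceived() {
        return lastStreamReceived >= 0;
    }

    // The connection is shutting down if either side initiated GOAWAY,
    // replacing the old connection.remote().goAwayReceived() indirection.
    boolean isGoAway() {
        return goAwaySent() || goAwayReceived();
    }
}
```

Note that stream id 0 (the connection stream) still counts as a valid last-known stream, which is why the sentinel is -1 rather than 0.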
netty/netty/2998_3000 | netty/netty | netty/netty/2998 | netty/netty/3000 | [
"timestamp(timedelta=115825.0, similarity=0.9901557196938808)"
] | ce817e0d309c69e15d7773c52468838a6e6849b2 | 74109c0b5959a9e92506506b24878ffc7afd9999 | [
"Now I'm thinking this is an issue because our netty client example reliably times out in this case. The server (netty server example) prints that it sent everything, but the client hangs after processing the headers.\n\n**Server**\n\n``` bash\nSelected Protocol is HTTP_2\nOct 10, 2014 3:57:45 PM io.netty.handler.... | [] | 2014-10-10T22:36:01Z | [
"defect"
] | HTTP/2 server example not using flow controller | @nmittler - I'd like to get your input on the following. I have a non-netty server communicating with a netty client. All is well on this front ;). However I am trying to interact with my server from curl. I am having issues getting curl to read all data for multiple frames from my server and so naturally I tried netty's example server and I am seeing the same issue. I am also seeing the same behavior in firefox nightly (client appears to be waiting for something to happen and never fully reads the data). I figured I would discuss with you before getting support from the client perspective (seeing as these are 2 mainstream clients).
Greater than 16384 seems to be the magic number (see code below) which gets these clients into this state. I think this number is special because it is the maximum TLS plaintext size per unit (see here https://github.com/openssl/openssl/blob/f47e203975133ddbae3cde20c8c3c0516f62066c/ssl/ssl3.h#L303). Is there anything we should be doing special from a server point of view here?
I modified the `HelloWorldHttp2Handler.java` to observe this issue:
``` java
private void sendResponse(ChannelHandlerContext ctx, int streamId, ByteBuf payload) {
// Send a frame for the response status
Http2Headers headers = new DefaultHttp2Headers().status(new AsciiString("200"));
payload.release();
final int len = 16384 + 1;
payload = buffer(len);
for (int i = 0; i < len; ++i) {
payload.writeByte((byte) 'a');
}
frameWriter.writeHeaders(ctx, streamId, headers, 0, false, ctx.newPromise());
frameWriter.writeData(ctx, streamId, payload, 0, true, ctx.newPromise());
ctx.flush();
}
```
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java",
"example/src/main/java/io/netty/example/http2/server/HelloWorldHttp2Handler.java"
] | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java",
"example/src/main/java/io/netty/example/http2/server/HelloWorldHttp2Handler.java"
] | [] | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java
index 7d53e3cd093..19575ec2726 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java
@@ -64,15 +64,10 @@ public Http2ConnectionHandler(Http2Connection connection, Http2FrameReader frame
public Http2ConnectionHandler(Http2Connection connection, Http2FrameReader frameReader,
Http2FrameWriter frameWriter, Http2InboundFlowController inboundFlow,
Http2OutboundFlowController outboundFlow, Http2FrameListener listener) {
- this.encoder =
- DefaultHttp2ConnectionEncoder.newBuilder().connection(connection)
- .frameWriter(frameWriter).outboundFlow(outboundFlow).lifecycleManager(this)
- .build();
- this.decoder =
- DefaultHttp2ConnectionDecoder.newBuilder().connection(connection)
- .frameReader(frameReader).inboundFlow(inboundFlow).encoder(encoder)
- .listener(listener).lifecycleManager(this).build();
- clientPrefaceString = clientPrefaceString(connection);
+ this(DefaultHttp2ConnectionDecoder.newBuilder().connection(connection)
+ .frameReader(frameReader).inboundFlow(inboundFlow).listener(listener),
+ DefaultHttp2ConnectionEncoder.newBuilder().connection(connection)
+ .frameWriter(frameWriter).outboundFlow(outboundFlow));
}
/**
diff --git a/example/src/main/java/io/netty/example/http2/server/HelloWorldHttp2Handler.java b/example/src/main/java/io/netty/example/http2/server/HelloWorldHttp2Handler.java
index abacae15b59..8aed6610b76 100644
--- a/example/src/main/java/io/netty/example/http2/server/HelloWorldHttp2Handler.java
+++ b/example/src/main/java/io/netty/example/http2/server/HelloWorldHttp2Handler.java
@@ -28,6 +28,7 @@
import io.netty.handler.codec.http2.DefaultHttp2FrameWriter;
import io.netty.handler.codec.http2.DefaultHttp2Headers;
import io.netty.handler.codec.http2.Http2Connection;
+import io.netty.handler.codec.http2.Http2ConnectionEncoder;
import io.netty.handler.codec.http2.Http2ConnectionHandler;
import io.netty.handler.codec.http2.Http2Exception;
import io.netty.handler.codec.http2.Http2FrameAdapter;
@@ -52,12 +53,13 @@ public class HelloWorldHttp2Handler extends Http2ConnectionHandler {
public HelloWorldHttp2Handler() {
this(new DefaultHttp2Connection(true), new Http2InboundFrameLogger(
new DefaultHttp2FrameReader(), logger), new Http2OutboundFrameLogger(
- new DefaultHttp2FrameWriter(), logger));
+ new DefaultHttp2FrameWriter(), logger), new SimpleHttp2FrameListener());
}
private HelloWorldHttp2Handler(Http2Connection connection, Http2FrameReader frameReader,
- Http2FrameWriter frameWriter) {
- super(connection, frameReader, frameWriter, new SimpleHttp2FrameListener(frameWriter));
+ Http2FrameWriter frameWriter, SimpleHttp2FrameListener listener) {
+ super(connection, frameReader, frameWriter, listener);
+ listener.encoder(encoder());
}
/**
@@ -83,10 +85,10 @@ public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
}
private static class SimpleHttp2FrameListener extends Http2FrameAdapter {
- private Http2FrameWriter frameWriter;
+ private Http2ConnectionEncoder encoder;
- public SimpleHttp2FrameListener(Http2FrameWriter frameWriter) {
- this.frameWriter = frameWriter;
+ public void encoder(Http2ConnectionEncoder encoder) {
+ this.encoder = encoder;
}
/**
@@ -118,8 +120,8 @@ public void onHeadersRead(ChannelHandlerContext ctx, int streamId,
private void sendResponse(ChannelHandlerContext ctx, int streamId, ByteBuf payload) {
// Send a frame for the response status
Http2Headers headers = new DefaultHttp2Headers().status(new AsciiString("200"));
- frameWriter.writeHeaders(ctx, streamId, headers, 0, false, ctx.newPromise());
- frameWriter.writeData(ctx, streamId, payload, 0, true, ctx.newPromise());
+ encoder.writeHeaders(ctx, streamId, headers, 0, false, ctx.newPromise());
+ encoder.writeData(ctx, streamId, payload, 0, true, ctx.newPromise());
ctx.flush();
}
};
| null | train | train | 2014-10-10T00:13:57 | 2014-10-10T19:47:21Z | Scottmitch | val |
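One way to picture the fix above: raw `Http2FrameWriter.writeData(...)` calls bypass outbound flow control, whereas the `Http2ConnectionEncoder` splits payloads so that no single DATA frame exceeds the peer's limits. A hypothetical chunking helper (illustrative only, not Netty code) shows why payloads just over 16384 bytes are the boundary case:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative helper (not Netty code): split a payload into
// DATA-frame-sized chunks. HTTP/2's default SETTINGS_MAX_FRAME_SIZE is
// 16384 bytes (2^14), which is why a payload of 16384 + 1 bytes is the
// smallest one that requires more than one frame.
class FrameChunker {
    static final int DEFAULT_MAX_FRAME_SIZE = 16384;

    // Returns the sizes of the frames needed to carry payloadLength bytes.
    static List<Integer> chunkSizes(int payloadLength, int maxFrameSize) {
        List<Integer> sizes = new ArrayList<>();
        for (int remaining = payloadLength; remaining > 0; ) {
            int chunk = Math.min(remaining, maxFrameSize);
            sizes.add(chunk);
            remaining -= chunk;
        }
        return sizes;
    }
}
```

A real encoder must additionally respect the stream and connection flow-control windows before writing each chunk; this sketch only covers the frame-size split.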
netty/netty/3013_3016 | netty/netty | netty/netty/3013 | netty/netty/3016 | [
"timestamp(timedelta=13.0, similarity=0.864423459913442)"
] | 5904c473164bc6849d0b4b9e7094da214e80c684 | dfe06fcddffd5a5a741427d2d1eaa5655949a135 | [
"@nmittler fyi\n",
"@Scottmitch this is addressed by #3016\n"
] | [] | 2014-10-16T15:51:44Z | [] | New twitter hpack release | [0.9.1](https://github.com/twitter/hpack/releases/tag/v0.9.1) was released. We should pull this in for the http2 codec and examples.
| [
"codec-http2/pom.xml",
"pom.xml"
] | [
"codec-http2/pom.xml",
"pom.xml"
] | [] | diff --git a/codec-http2/pom.xml b/codec-http2/pom.xml
index 5f79e09c96a..b3710b63f18 100644
--- a/codec-http2/pom.xml
+++ b/codec-http2/pom.xml
@@ -43,7 +43,6 @@
<dependency>
<groupId>com.twitter</groupId>
<artifactId>hpack</artifactId>
- <version>0.9.0</version>
</dependency>
<dependency>
<groupId>org.mockito</groupId>
diff --git a/pom.xml b/pom.xml
index 9c8a2094f0d..cd2d767695a 100644
--- a/pom.xml
+++ b/pom.xml
@@ -462,6 +462,11 @@
</dependency>
<!-- SPDY and HTTP/2 - completely optional -->
+ <dependency>
+ <groupId>com.twitter</groupId>
+ <artifactId>hpack</artifactId>
+ <version>0.9.1</version>
+ </dependency>
<dependency>
<groupId>org.eclipse.jetty.npn</groupId>
<artifactId>npn-api</artifactId>
| null | val | train | 2014-10-16T10:57:10 | 2014-10-16T02:04:33Z | Scottmitch | val |
netty/netty/3027_3034 | netty/netty | netty/netty/3027 | netty/netty/3034 | [
"timestamp(timedelta=64.0, similarity=0.8550309335091857)"
] | 222d258d6e62b1b204c741d844033061a431ee75 | 30437db456b7f0be20cadb400d7197138995a83f | [
"@nmittler - FYI. There is some info in section [5.1.1](https://tools.ietf.org/html/draft-ietf-httpbis-http2-14#section-5.1.1) related to this condition.\n",
"For the writer portion of this it is also possible we put this responsibility on the users of the http2-codec? The http translation layer does allow user... | [
"@nmittler - Can you verify this change...I think this was a bug.\n",
"@nmittler - Note that I had an argument capture here to free the debug buffer but it is already freed.\n",
"@Scottmitch oof yeah I think you're right. Good catch!\n",
"I don't think we should catch it here. We should catch it in the enco... | 2014-10-22T00:17:15Z | [
"defect"
] | HTTP/2 stream id roll over behavior | It does not look like there is any code to handle the case where stream IDs roll over the largest number. What is the expected behavior in this condition? The client may want to open a new connection in this case to continue issuing requests.
Should this condition be checked for incoming and outgoing frames?
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java",
"codec-http2/src/main/java/io/netty/handler/c... | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java",
"codec-http2/src/main/java/io/netty/handler/c... | [
"codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionRoundtripTest.java"
] | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
index 2775ed745fa..635d755dbc1 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
@@ -854,8 +854,8 @@ private void checkNewStreamAllowed(int streamId) throws Http2Exception {
}
private void verifyStreamId(int streamId) throws Http2Exception {
- if (nextStreamId < 0) {
- throw protocolError("No more streams can be created on this connection");
+ if (streamId < 0) {
+ throw new Http2NoMoreStreamIdsException();
}
if (streamId < nextStreamId) {
throw protocolError("Request stream %d is behind the next expected stream %d", streamId, nextStreamId);
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java
index 05e710b7cbf..70f2d6592e7 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java
@@ -239,6 +239,9 @@ public ChannelFuture writeHeaders(ChannelHandlerContext ctx, int streamId, Http2
}
}
} catch (Throwable e) {
+ if (e instanceof Http2NoMoreStreamIdsException) {
+ lifecycleManager.onException(ctx, e);
+ }
return promise.setFailure(e);
}
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java
index 19dbd0ef701..d09d0d27542 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java
@@ -329,7 +329,8 @@ public ChannelFuture writeRstStream(ChannelHandlerContext ctx, int streamId, lon
*/
public ChannelFuture writeGoAway(ChannelHandlerContext ctx, int lastStreamId, long errorCode, ByteBuf debugData,
ChannelPromise promise) {
- if (connection().isGoAway()) {
+ Http2Connection connection = connection();
+ if (connection.isGoAway()) {
debugData.release();
return ctx.newSucceededFuture();
}
@@ -337,7 +338,7 @@ public ChannelFuture writeGoAway(ChannelHandlerContext ctx, int lastStreamId, lo
ChannelFuture future = frameWriter().writeGoAway(ctx, lastStreamId, errorCode, debugData, promise);
ctx.flush();
- connection().goAwaySent(lastStreamId);
+ connection.goAwaySent(lastStreamId);
return future;
}
@@ -345,15 +346,16 @@ public ChannelFuture writeGoAway(ChannelHandlerContext ctx, int lastStreamId, lo
* Sends a {@code GO_AWAY} frame appropriate for the given exception.
*/
private ChannelFuture writeGoAway(ChannelHandlerContext ctx, Http2Exception cause) {
- if (connection().isGoAway()) {
+ Http2Connection connection = connection();
+ if (connection.isGoAway()) {
return ctx.newSucceededFuture();
}
// The connection isn't alredy going away, send the GO_AWAY frame now to start
// the process.
- int errorCode = cause != null ? cause.error().code() : NO_ERROR.code();
+ long errorCode = cause != null ? cause.error().code() : NO_ERROR.code();
ByteBuf debugData = Http2CodecUtil.toByteBuf(ctx, cause);
- int lastKnownStream = connection().remote().lastStreamCreated();
+ int lastKnownStream = connection.remote().lastStreamCreated();
return writeGoAway(ctx, lastKnownStream, errorCode, debugData, ctx.newPromise());
}
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Error.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Error.java
index 5fa92d2403b..0584adc18c5 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Error.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Error.java
@@ -33,16 +33,16 @@ public enum Http2Error {
ENHANCE_YOUR_CALM(0xB),
INADEQUATE_SECURITY(0xC);
- private final int code;
+ private final long code;
- Http2Error(int code) {
+ Http2Error(long code) {
this.code = code;
}
/**
* Gets the code for this error used on the wire.
*/
- public int code() {
+ public long code() {
return code;
}
}
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2NoMoreStreamIdsException.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2NoMoreStreamIdsException.java
new file mode 100644
index 00000000000..7a79777386f
--- /dev/null
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2NoMoreStreamIdsException.java
@@ -0,0 +1,31 @@
+/*
+ * Copyright 2014 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License, version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package io.netty.handler.codec.http2;
+
+/**
+ * This exception is thrown when there are no more stream IDs available for the current connection
+ */
+public class Http2NoMoreStreamIdsException extends Http2Exception {
+ private static final long serialVersionUID = -7756236161274851110L;
+ private static final String ERROR_MESSAGE = "No more streams can be created on this connection";
+
+ public Http2NoMoreStreamIdsException() {
+ super(Http2Error.PROTOCOL_ERROR, ERROR_MESSAGE);
+ }
+
+ public Http2NoMoreStreamIdsException(Throwable cause) {
+ super(Http2Error.PROTOCOL_ERROR, ERROR_MESSAGE, cause);
+ }
+}
| diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionRoundtripTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionRoundtripTest.java
index 6aaf1682938..6491ced9015 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionRoundtripTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionRoundtripTest.java
@@ -217,6 +217,28 @@ public void handlerAdded(ChannelHandlerContext ctx) throws Exception {
assertTrue(clientChannel.isOpen());
}
+ @Test
+ public void noMoreStreamIdsShouldSendGoAway() throws Exception {
+ bootstrapEnv(1, 3);
+
+ // Create a single stream by sending a HEADERS frame to the server.
+ final Http2Headers headers = dummyHeaders();
+ runInChannel(clientChannel, new Http2Runnable() {
+ @Override
+ public void run() {
+ http2Client.encoder().writeHeaders(ctx(), 3, headers, 0, (short) 16, false, 0,
+ true, newPromise());
+ http2Client.encoder().writeHeaders(ctx(), Integer.MAX_VALUE + 1, headers, 0, (short) 16, false, 0,
+ true, newPromise());
+ }
+ });
+
+ // Wait for the server to create the stream.
+ assertTrue(requestLatch.await(5, TimeUnit.SECONDS));
+ verify(serverListener).onGoAwayRead(any(ChannelHandlerContext.class), eq(0),
+ eq(Http2Error.PROTOCOL_ERROR.code()), any(ByteBuf.class));
+ }
+
@Test
public void flowControlProperlyChunksLargeMessage() throws Exception {
final Http2Headers headers = dummyHeaders();
| train | train | 2014-10-22T15:26:31 | 2014-10-20T14:38:38Z | Scottmitch | val |
netty/netty/3017_3041 | netty/netty | netty/netty/3017 | netty/netty/3041 | [
"timestamp(timedelta=13446.0, similarity=0.9305338663498393)"
] | f8af84d5993456426a63ad0146479147b1a4a5e5 | 5e86822c08668b93f81131948f190ba30a3a95fc | [
"@trustin @nmittler @normanmaurer - This is more of a question at this point because I see that the existing JdkSslClientContext uses `null` for the `SSLContext.init(...)` calls for everything but the `TrustStore`. See [here](https://github.com/netty/netty/blob/master/handler/src/main/java/io/netty/handler/ssl/Jdk... | [] | 2014-10-22T17:38:43Z | [
"feature"
] | ssl mutual authentication (JDK provider) | I have a use case that requires a client to do mutual authentication. I have seen a few stackoverflow posts related to achieving mutual auth by setting up the `SSLContext` manually, and then invoking `SSLEngine.setNeedClientAuth(true)` before putting this SSLEngine into the pipeline. I am using `SslContext.newClientContext` and this seems like a convenient abstraction provided by netty (I would like to continue to use this if possible). Would extending these `SslContext` constructors to take an additional argument of `KeyManagerFactory` be sufficient (and desired) to allow the Netty abstraction to support building an `SSLContext` object that supports mutual auth?
| [
"handler/src/main/java/io/netty/handler/ssl/JdkSslClientContext.java",
"handler/src/main/java/io/netty/handler/ssl/JdkSslContext.java",
"handler/src/main/java/io/netty/handler/ssl/JdkSslServerContext.java",
"handler/src/main/java/io/netty/handler/ssl/SslContext.java"
] | [
"handler/src/main/java/io/netty/handler/ssl/JdkSslClientContext.java",
"handler/src/main/java/io/netty/handler/ssl/JdkSslContext.java",
"handler/src/main/java/io/netty/handler/ssl/JdkSslServerContext.java",
"handler/src/main/java/io/netty/handler/ssl/SslContext.java"
] | [
"handler/src/test/java/io/netty/handler/ssl/JdkSslEngineTest.java",
"handler/src/test/java/io/netty/handler/ssl/JdkSslServerContextTest.java",
"handler/src/test/resources/io/netty/handler/ssl/test2.crt",
"handler/src/test/resources/io/netty/handler/ssl/test2_encrypted.pem",
"handler/src/test/resources/io/ne... | diff --git a/handler/src/main/java/io/netty/handler/ssl/JdkSslClientContext.java b/handler/src/main/java/io/netty/handler/ssl/JdkSslClientContext.java
index 7d4e7166c86..479bb24e570 100644
--- a/handler/src/main/java/io/netty/handler/ssl/JdkSslClientContext.java
+++ b/handler/src/main/java/io/netty/handler/ssl/JdkSslClientContext.java
@@ -16,20 +16,16 @@
package io.netty.handler.ssl;
-import io.netty.buffer.ByteBuf;
-import io.netty.buffer.ByteBufInputStream;
import java.io.File;
-import java.security.KeyStore;
-import java.security.cert.CertificateFactory;
-import java.security.cert.X509Certificate;
+import javax.net.ssl.KeyManager;
+import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLException;
import javax.net.ssl.SSLSessionContext;
import javax.net.ssl.TrustManager;
import javax.net.ssl.TrustManagerFactory;
-import javax.security.auth.x500.X500Principal;
/**
* A client-side {@link SslContext} which uses JDK's SSL/TLS implementation.
@@ -126,46 +122,93 @@ public JdkSslClientContext(
File certChainFile, TrustManagerFactory trustManagerFactory,
Iterable<String> ciphers, CipherSuiteFilter cipherFilter, JdkApplicationProtocolNegotiator apn,
long sessionCacheSize, long sessionTimeout) throws SSLException {
+ this(certChainFile, trustManagerFactory, null, null, null, null,
+ ciphers, cipherFilter, apn, sessionCacheSize, sessionTimeout);
+ }
+
+ /**
+ * Creates a new instance.
+ * @param trustCertChainFile an X.509 certificate chain file in PEM format.
+ * {@code null} to use the system default
+ * @param trustManagerFactory the {@link TrustManagerFactory} that provides the {@link TrustManager}s
+ * that verifies the certificates sent from servers.
+ * {@code null} to use the default or the results of parsing {@code trustCertChainFile}
+ * @param keyCertChainFile an X.509 certificate chain file in PEM format.
+ * This provides the public key for mutual authentication.
+ * {@code null} to use the system default
+ * @param keyFile a PKCS#8 private key file in PEM format.
+ * This provides the private key for mutual authentication.
+ * {@code null} for no mutual authentication.
+ * @param keyPassword the password of the {@code keyFile}.
+ * {@code null} if it's not password-protected.
+ * Ignored if {@code keyFile} is {@code null}.
+ * @param keyManagerFactory the {@link KeyManagerFactory} that provides the {@link KeyManager}s
+ * that is used to encrypt data being sent to servers.
+ * {@code null} to use the default or the results of parsing
+ * {@code keyCertChainFile} and {@code keyFile}.
+ * @param ciphers the cipher suites to enable, in the order of preference.
+ * {@code null} to use the default cipher suites.
+ * @param cipherFilter a filter to apply over the supplied list of ciphers
+ * @param apn Provides a means to configure parameters related to application protocol negotiation.
+ * @param sessionCacheSize the size of the cache used for storing SSL session objects.
+ * {@code 0} to use the default value.
+ * @param sessionTimeout the timeout for the cached SSL session objects, in seconds.
+ * {@code 0} to use the default value.
+ */
+ public JdkSslClientContext(File trustCertChainFile, TrustManagerFactory trustManagerFactory,
+ File keyCertChainFile, File keyFile, String keyPassword, KeyManagerFactory keyManagerFactory,
+ Iterable<String> ciphers, CipherSuiteFilter cipherFilter, ApplicationProtocolConfig apn,
+ long sessionCacheSize, long sessionTimeout) throws SSLException {
+ this(trustCertChainFile, trustManagerFactory, keyCertChainFile, keyFile, keyPassword, keyManagerFactory,
+ ciphers, cipherFilter, toNegotiator(apn, false), sessionCacheSize, sessionTimeout);
+ }
+ /**
+ * Creates a new instance.
+ * @param trustCertChainFile an X.509 certificate chain file in PEM format.
+ * {@code null} to use the system default
+ * @param trustManagerFactory the {@link TrustManagerFactory} that provides the {@link TrustManager}s
+ * that verifies the certificates sent from servers.
+ * {@code null} to use the default or the results of parsing {@code trustCertChainFile}
+ * @param keyCertChainFile an X.509 certificate chain file in PEM format.
+ * This provides the public key for mutual authentication.
+ * {@code null} to use the system default
+ * @param keyFile a PKCS#8 private key file in PEM format.
+ * This provides the private key for mutual authentication.
+ * {@code null} for no mutual authentication.
+ * @param keyPassword the password of the {@code keyFile}.
+ * {@code null} if it's not password-protected.
+ * Ignored if {@code keyFile} is {@code null}.
+ * @param keyManagerFactory the {@link KeyManagerFactory} that provides the {@link KeyManager}s
+ * that is used to encrypt data being sent to servers.
+ * {@code null} to use the default or the results of parsing
+ * {@code keyCertChainFile} and {@code keyFile}.
+ * @param ciphers the cipher suites to enable, in the order of preference.
+ * {@code null} to use the default cipher suites.
+ * @param cipherFilter a filter to apply over the supplied list of ciphers
+ * @param apn Application Protocol Negotiator object.
+ * @param sessionCacheSize the size of the cache used for storing SSL session objects.
+ * {@code 0} to use the default value.
+ * @param sessionTimeout the timeout for the cached SSL session objects, in seconds.
+ * {@code 0} to use the default value.
+ */
+ public JdkSslClientContext(File trustCertChainFile, TrustManagerFactory trustManagerFactory,
+ File keyCertChainFile, File keyFile, String keyPassword, KeyManagerFactory keyManagerFactory,
+ Iterable<String> ciphers, CipherSuiteFilter cipherFilter, JdkApplicationProtocolNegotiator apn,
+ long sessionCacheSize, long sessionTimeout) throws SSLException {
super(ciphers, cipherFilter, apn);
try {
- if (certChainFile == null) {
- ctx = SSLContext.getInstance(PROTOCOL);
- if (trustManagerFactory == null) {
- ctx.init(null, null, null);
- } else {
- trustManagerFactory.init((KeyStore) null);
- ctx.init(null, trustManagerFactory.getTrustManagers(), null);
- }
- } else {
- KeyStore ks = KeyStore.getInstance("JKS");
- ks.load(null, null);
- CertificateFactory cf = CertificateFactory.getInstance("X.509");
-
- ByteBuf[] certs = PemReader.readCertificates(certChainFile);
- try {
- for (ByteBuf buf: certs) {
- X509Certificate cert = (X509Certificate) cf.generateCertificate(new ByteBufInputStream(buf));
- X500Principal principal = cert.getSubjectX500Principal();
- ks.setCertificateEntry(principal.getName("RFC2253"), cert);
- }
- } finally {
- for (ByteBuf buf: certs) {
- buf.release();
- }
- }
-
- // Set up trust manager factory to use our key store.
- if (trustManagerFactory == null) {
- trustManagerFactory = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
- }
- trustManagerFactory.init(ks);
-
- // Initialize the SSLContext to work with the trust managers.
- ctx = SSLContext.getInstance(PROTOCOL);
- ctx.init(null, trustManagerFactory.getTrustManagers(), null);
+ if (trustCertChainFile != null) {
+ trustManagerFactory = buildTrustManagerFactory(trustCertChainFile, trustManagerFactory);
+ }
+ if (keyFile != null) {
+ keyManagerFactory = buildKeyManagerFactory(keyCertChainFile, keyFile, keyPassword, keyManagerFactory);
}
+ ctx = SSLContext.getInstance(PROTOCOL);
+ ctx.init(keyManagerFactory == null ? null : keyManagerFactory.getKeyManagers(),
+ trustManagerFactory == null ? null : trustManagerFactory.getTrustManagers(),
+ null);
SSLSessionContext sessCtx = ctx.getClientSessionContext();
if (sessionCacheSize > 0) {
@@ -175,7 +218,7 @@ public JdkSslClientContext(
sessCtx.setSessionTimeout((int) Math.min(sessionTimeout, Integer.MAX_VALUE));
}
} catch (Exception e) {
- throw new SSLException("failed to initialize the server-side SSL context", e);
+ throw new SSLException("failed to initialize the client-side SSL context", e);
}
}
diff --git a/handler/src/main/java/io/netty/handler/ssl/JdkSslContext.java b/handler/src/main/java/io/netty/handler/ssl/JdkSslContext.java
index e998bc06b23..08ae41b1124 100644
--- a/handler/src/main/java/io/netty/handler/ssl/JdkSslContext.java
+++ b/handler/src/main/java/io/netty/handler/ssl/JdkSslContext.java
@@ -17,10 +17,30 @@
package io.netty.handler.ssl;
import static io.netty.util.internal.ObjectUtil.checkNotNull;
+import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;
+import io.netty.buffer.ByteBufInputStream;
import io.netty.util.internal.logging.InternalLogger;
import io.netty.util.internal.logging.InternalLoggerFactory;
+import java.io.File;
+import java.io.IOException;
+import java.security.InvalidAlgorithmParameterException;
+import java.security.InvalidKeyException;
+import java.security.KeyException;
+import java.security.KeyFactory;
+import java.security.KeyStore;
+import java.security.KeyStoreException;
+import java.security.NoSuchAlgorithmException;
+import java.security.PrivateKey;
+import java.security.Security;
+import java.security.UnrecoverableKeyException;
+import java.security.cert.Certificate;
+import java.security.cert.CertificateException;
+import java.security.cert.CertificateFactory;
+import java.security.cert.X509Certificate;
+import java.security.spec.InvalidKeySpecException;
+import java.security.spec.PKCS8EncodedKeySpec;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
@@ -28,9 +48,18 @@
import java.util.List;
import java.util.Set;
+import javax.crypto.Cipher;
+import javax.crypto.EncryptedPrivateKeyInfo;
+import javax.crypto.NoSuchPaddingException;
+import javax.crypto.SecretKey;
+import javax.crypto.SecretKeyFactory;
+import javax.crypto.spec.PBEKeySpec;
+import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLSessionContext;
+import javax.net.ssl.TrustManagerFactory;
+import javax.security.auth.x500.X500Principal;
/**
* An {@link SslContext} which uses JDK's SSL/TLS implementation.
@@ -257,4 +286,156 @@ static JdkApplicationProtocolNegotiator toNegotiator(ApplicationProtocolConfig c
.append(config.protocol()).append(" protocol").toString());
}
}
+
+ /**
+ * Build a {@link KeyManagerFactory} based upon a key file, key file password, and a certificate chain.
+ * @param certChainFile a X.509 certificate chain file in PEM format
+ * @param keyFile a PKCS#8 private key file in PEM format
+ * @param keyPassword the password of the {@code keyFile}.
+ * {@code null} if it's not password-protected.
+ * @param kmf The existing {@link KeyManagerFactory} that will be used if not {@code null}
+ * @return A {@link KeyManagerFactory} based upon a key file, key file password, and a certificate chain.
+ */
+ protected static KeyManagerFactory buildKeyManagerFactory(File certChainFile, File keyFile, String keyPassword,
+ KeyManagerFactory kmf)
+ throws UnrecoverableKeyException, KeyStoreException, NoSuchAlgorithmException,
+ NoSuchPaddingException, InvalidKeySpecException, InvalidAlgorithmParameterException,
+ CertificateException, KeyException, IOException {
+ String algorithm = Security.getProperty("ssl.KeyManagerFactory.algorithm");
+ if (algorithm == null) {
+ algorithm = "SunX509";
+ }
+ return buildKeyManagerFactory(certChainFile, algorithm, keyFile, keyPassword, kmf);
+ }
+
+ /**
+ * Build a {@link KeyManagerFactory} based upon a key algorithm, key file, key file password,
+ * and a certificate chain.
+ * @param certChainFile a X.509 certificate chain file in PEM format
+ * @param keyAlgorithm the standard name of the requested algorithm. See the Java Secure Socket Extension
+ * Reference Guide for information about standard algorithm names.
+ * @param keyFile a PKCS#8 private key file in PEM format
+ * @param keyPassword the password of the {@code keyFile}.
+ * {@code null} if it's not password-protected.
+ * @param kmf The existing {@link KeyManagerFactory} that will be used if not {@code null}
+ * @return A {@link KeyManagerFactory} based upon a key algorithm, key file, key file password,
+ * and a certificate chain.
+ */
+ protected static KeyManagerFactory buildKeyManagerFactory(File certChainFile,
+ String keyAlgorithm, File keyFile, String keyPassword, KeyManagerFactory kmf)
+ throws KeyStoreException, NoSuchAlgorithmException, NoSuchPaddingException,
+ InvalidKeySpecException, InvalidAlgorithmParameterException, IOException,
+ CertificateException, KeyException, UnrecoverableKeyException {
+ KeyStore ks = KeyStore.getInstance("JKS");
+ ks.load(null, null);
+ CertificateFactory cf = CertificateFactory.getInstance("X.509");
+ KeyFactory rsaKF = KeyFactory.getInstance("RSA");
+ KeyFactory dsaKF = KeyFactory.getInstance("DSA");
+
+ ByteBuf encodedKeyBuf = PemReader.readPrivateKey(keyFile);
+ byte[] encodedKey = new byte[encodedKeyBuf.readableBytes()];
+ encodedKeyBuf.readBytes(encodedKey).release();
+
+ char[] keyPasswordChars = keyPassword == null ? new char[0] : keyPassword.toCharArray();
+ PKCS8EncodedKeySpec encodedKeySpec = generateKeySpec(keyPasswordChars, encodedKey);
+
+ PrivateKey key;
+ try {
+ key = rsaKF.generatePrivate(encodedKeySpec);
+ } catch (InvalidKeySpecException ignore) {
+ key = dsaKF.generatePrivate(encodedKeySpec);
+ }
+
+ List<Certificate> certChain = new ArrayList<Certificate>();
+ ByteBuf[] certs = PemReader.readCertificates(certChainFile);
+ try {
+ for (ByteBuf buf: certs) {
+ certChain.add(cf.generateCertificate(new ByteBufInputStream(buf)));
+ }
+ } finally {
+ for (ByteBuf buf: certs) {
+ buf.release();
+ }
+ }
+
+ ks.setKeyEntry("key", key, keyPasswordChars, certChain.toArray(new Certificate[certChain.size()]));
+
+ // Set up key manager factory to use our key store
+ if (kmf == null) {
+ kmf = KeyManagerFactory.getInstance(keyAlgorithm);
+ }
+ kmf.init(ks, keyPasswordChars);
+
+ return kmf;
+ }
+
+ /**
+ * Build a {@link TrustManagerFactory} from a certificate chain file.
+ * @param certChainFile The certificate file to build from.
+ * @param trustManagerFactory The existing {@link TrustManagerFactory} that will be used if not {@code null}.
+ * @return A {@link TrustManagerFactory} which contains the certificates in {@code certChainFile}
+ */
+ protected static TrustManagerFactory buildTrustManagerFactory(File certChainFile,
+ TrustManagerFactory trustManagerFactory)
+ throws NoSuchAlgorithmException, CertificateException, KeyStoreException, IOException {
+ KeyStore ks = KeyStore.getInstance("JKS");
+ ks.load(null, null);
+ CertificateFactory cf = CertificateFactory.getInstance("X.509");
+
+ ByteBuf[] certs = PemReader.readCertificates(certChainFile);
+ try {
+ for (ByteBuf buf: certs) {
+ X509Certificate cert = (X509Certificate) cf.generateCertificate(new ByteBufInputStream(buf));
+ X500Principal principal = cert.getSubjectX500Principal();
+ ks.setCertificateEntry(principal.getName("RFC2253"), cert);
+ }
+ } finally {
+ for (ByteBuf buf: certs) {
+ buf.release();
+ }
+ }
+
+ // Set up trust manager factory to use our key store.
+ if (trustManagerFactory == null) {
+ trustManagerFactory = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
+ }
+ trustManagerFactory.init(ks);
+
+ return trustManagerFactory;
+ }
+
+ /**
+ * Generates a key specification for an (encrypted) private key.
+ *
+ * @param password characters, if {@code null} or empty an unencrypted key is assumed
+ * @param key bytes of the DER encoded private key
+ *
+ * @return a key specification
+ *
+ * @throws IOException if parsing {@code key} fails
+ * @throws NoSuchAlgorithmException if the algorithm used to encrypt {@code key} is unknown
+ * @throws NoSuchPaddingException if the padding scheme specified in the decryption algorithm is unknown
+ * @throws InvalidKeySpecException if the decryption key based on {@code password} cannot be generated
+ * @throws InvalidKeyException if the decryption key based on {@code password} cannot be used to decrypt
+ * {@code key}
+ * @throws InvalidAlgorithmParameterException if decryption algorithm parameters are somehow faulty
+ */
+ private static PKCS8EncodedKeySpec generateKeySpec(char[] password, byte[] key)
+ throws IOException, NoSuchAlgorithmException, NoSuchPaddingException, InvalidKeySpecException,
+ InvalidKeyException, InvalidAlgorithmParameterException {
+
+ if (password == null || password.length == 0) {
+ return new PKCS8EncodedKeySpec(key);
+ }
+
+ EncryptedPrivateKeyInfo encryptedPrivateKeyInfo = new EncryptedPrivateKeyInfo(key);
+ SecretKeyFactory keyFactory = SecretKeyFactory.getInstance(encryptedPrivateKeyInfo.getAlgName());
+ PBEKeySpec pbeKeySpec = new PBEKeySpec(password);
+ SecretKey pbeKey = keyFactory.generateSecret(pbeKeySpec);
+
+ Cipher cipher = Cipher.getInstance(encryptedPrivateKeyInfo.getAlgName());
+ cipher.init(Cipher.DECRYPT_MODE, pbeKey, encryptedPrivateKeyInfo.getAlgParameters());
+
+ return encryptedPrivateKeyInfo.getKeySpec(cipher);
+ }
}
diff --git a/handler/src/main/java/io/netty/handler/ssl/JdkSslServerContext.java b/handler/src/main/java/io/netty/handler/ssl/JdkSslServerContext.java
index 9a22a7c1ce1..118f8f37f0c 100644
--- a/handler/src/main/java/io/netty/handler/ssl/JdkSslServerContext.java
+++ b/handler/src/main/java/io/netty/handler/ssl/JdkSslServerContext.java
@@ -16,35 +16,15 @@
package io.netty.handler.ssl;
-import io.netty.buffer.ByteBuf;
-import io.netty.buffer.ByteBufInputStream;
-
import java.io.File;
-import java.io.IOException;
-import java.security.InvalidAlgorithmParameterException;
-import java.security.InvalidKeyException;
-import java.security.KeyFactory;
-import java.security.KeyStore;
-import java.security.NoSuchAlgorithmException;
-import java.security.PrivateKey;
-import java.security.Security;
-import java.security.cert.Certificate;
-import java.security.cert.CertificateFactory;
-import java.security.spec.InvalidKeySpecException;
-import java.security.spec.PKCS8EncodedKeySpec;
-import java.util.ArrayList;
-import java.util.List;
-
-import javax.crypto.Cipher;
-import javax.crypto.EncryptedPrivateKeyInfo;
-import javax.crypto.NoSuchPaddingException;
-import javax.crypto.SecretKey;
-import javax.crypto.SecretKeyFactory;
-import javax.crypto.spec.PBEKeySpec;
+
+import javax.net.ssl.KeyManager;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLException;
import javax.net.ssl.SSLSessionContext;
+import javax.net.ssl.TrustManager;
+import javax.net.ssl.TrustManagerFactory;
/**
* A server-side {@link SslContext} which uses JDK's SSL/TLS implementation.
@@ -120,67 +100,92 @@ public JdkSslServerContext(
File certChainFile, File keyFile, String keyPassword,
Iterable<String> ciphers, CipherSuiteFilter cipherFilter, JdkApplicationProtocolNegotiator apn,
long sessionCacheSize, long sessionTimeout) throws SSLException {
+ this(null, null, certChainFile, keyFile, keyPassword, null,
+ ciphers, cipherFilter, apn, sessionCacheSize, sessionTimeout);
+ }
- super(ciphers, cipherFilter, apn);
-
- if (certChainFile == null) {
- throw new NullPointerException("certChainFile");
- }
- if (keyFile == null) {
- throw new NullPointerException("keyFile");
- }
-
- if (keyPassword == null) {
- keyPassword = "";
- }
+ /**
+ * Creates a new instance.
+ * @param trustCertChainFile an X.509 certificate chain file in PEM format.
+ * This provides the certificate chains used for mutual authentication.
+ * {@code null} to use the system default
+ * @param trustManagerFactory the {@link TrustManagerFactory} that provides the {@link TrustManager}s
+ * that verifies the certificates sent from clients.
+ * {@code null} to use the default or the results of parsing {@code trustCertChainFile}.
+ * @param keyCertChainFile an X.509 certificate chain file in PEM format
+ * @param keyFile a PKCS#8 private key file in PEM format
+ * @param keyPassword the password of the {@code keyFile}.
+ * {@code null} if it's not password-protected.
+ * @param keyManagerFactory the {@link KeyManagerFactory} that provides the {@link KeyManager}s
+ * that is used to encrypt data being sent to clients.
+ * {@code null} to use the default or the results of parsing
+ * {@code keyCertChainFile} and {@code keyFile}.
+ * @param ciphers the cipher suites to enable, in the order of preference.
+ * {@code null} to use the default cipher suites.
+ * @param cipherFilter a filter to apply over the supplied list of ciphers
+ * Only required if {@code provider} is {@link SslProvider#JDK}
+ * @param apn Provides a means to configure parameters related to application protocol negotiation.
+ * @param sessionCacheSize the size of the cache used for storing SSL session objects.
+ * {@code 0} to use the default value.
+ * @param sessionTimeout the timeout for the cached SSL session objects, in seconds.
+ * {@code 0} to use the default value.
+ */
+ public JdkSslServerContext(File trustCertChainFile, TrustManagerFactory trustManagerFactory,
+ File keyCertChainFile, File keyFile, String keyPassword, KeyManagerFactory keyManagerFactory,
+ Iterable<String> ciphers, CipherSuiteFilter cipherFilter, ApplicationProtocolConfig apn,
+ long sessionCacheSize, long sessionTimeout) throws SSLException {
+ this(trustCertChainFile, trustManagerFactory, keyCertChainFile, keyFile, keyPassword, keyManagerFactory,
+ ciphers, cipherFilter, toNegotiator(apn, true), sessionCacheSize, sessionTimeout);
+ }
- String algorithm = Security.getProperty("ssl.KeyManagerFactory.algorithm");
- if (algorithm == null) {
- algorithm = "SunX509";
+ /**
+ * Creates a new instance.
+ * @param trustCertChainFile an X.509 certificate chain file in PEM format.
+ * This provides the certificate chains used for mutual authentication.
+ * {@code null} to use the system default
+ * @param trustManagerFactory the {@link TrustManagerFactory} that provides the {@link TrustManager}s
+ * that verifies the certificates sent from clients.
+ * {@code null} to use the default or the results of parsing {@code trustCertChainFile}
+ * @param keyCertChainFile an X.509 certificate chain file in PEM format
+ * @param keyFile a PKCS#8 private key file in PEM format
+ * @param keyPassword the password of the {@code keyFile}.
+ * {@code null} if it's not password-protected.
+ * @param keyManagerFactory the {@link KeyManagerFactory} that provides the {@link KeyManager}s
+ * that is used to encrypt data being sent to clients.
+ * {@code null} to use the default or the results of parsing
+ * {@code keyCertChainFile} and {@code keyFile}.
+ * @param ciphers the cipher suites to enable, in the order of preference.
+ * {@code null} to use the default cipher suites.
+ * @param cipherFilter a filter to apply over the supplied list of ciphers
+ * Only required if {@code provider} is {@link SslProvider#JDK}
+ * @param apn Application Protocol Negotiator object.
+ * @param sessionCacheSize the size of the cache used for storing SSL session objects.
+ * {@code 0} to use the default value.
+ * @param sessionTimeout the timeout for the cached SSL session objects, in seconds.
+ * {@code 0} to use the default value.
+ */
+ public JdkSslServerContext(File trustCertChainFile, TrustManagerFactory trustManagerFactory,
+ File keyCertChainFile, File keyFile, String keyPassword, KeyManagerFactory keyManagerFactory,
+ Iterable<String> ciphers, CipherSuiteFilter cipherFilter, JdkApplicationProtocolNegotiator apn,
+ long sessionCacheSize, long sessionTimeout) throws SSLException {
+ super(ciphers, cipherFilter, apn);
+ if (keyFile == null && keyManagerFactory == null) {
+ throw new NullPointerException("keyFile, keyManagerFactory");
}
try {
- KeyStore ks = KeyStore.getInstance("JKS");
- ks.load(null, null);
- CertificateFactory cf = CertificateFactory.getInstance("X.509");
- KeyFactory rsaKF = KeyFactory.getInstance("RSA");
- KeyFactory dsaKF = KeyFactory.getInstance("DSA");
-
- ByteBuf encodedKeyBuf = PemReader.readPrivateKey(keyFile);
- byte[] encodedKey = new byte[encodedKeyBuf.readableBytes()];
- encodedKeyBuf.readBytes(encodedKey).release();
-
- char[] keyPasswordChars = keyPassword.toCharArray();
- PKCS8EncodedKeySpec encodedKeySpec = generateKeySpec(keyPasswordChars, encodedKey);
-
- PrivateKey key;
- try {
- key = rsaKF.generatePrivate(encodedKeySpec);
- } catch (InvalidKeySpecException ignore) {
- key = dsaKF.generatePrivate(encodedKeySpec);
+ if (trustCertChainFile != null) {
+ trustManagerFactory = buildTrustManagerFactory(trustCertChainFile, trustManagerFactory);
}
-
- List<Certificate> certChain = new ArrayList<Certificate>();
- ByteBuf[] certs = PemReader.readCertificates(certChainFile);
- try {
- for (ByteBuf buf: certs) {
- certChain.add(cf.generateCertificate(new ByteBufInputStream(buf)));
- }
- } finally {
- for (ByteBuf buf: certs) {
- buf.release();
- }
+ if (keyFile != null) {
+ keyManagerFactory = buildKeyManagerFactory(keyCertChainFile, keyFile, keyPassword, keyManagerFactory);
}
- ks.setKeyEntry("key", key, keyPasswordChars, certChain.toArray(new Certificate[certChain.size()]));
-
- // Set up key manager factory to use our key store
- KeyManagerFactory kmf = KeyManagerFactory.getInstance(algorithm);
- kmf.init(ks, keyPasswordChars);
-
// Initialize the SSLContext to work with our key managers.
ctx = SSLContext.getInstance(PROTOCOL);
- ctx.init(kmf.getKeyManagers(), null, null);
+ ctx.init(keyManagerFactory.getKeyManagers(),
+ trustManagerFactory == null ? null : trustManagerFactory.getTrustManagers(),
+ null);
SSLSessionContext sessCtx = ctx.getServerSessionContext();
if (sessionCacheSize > 0) {
@@ -203,39 +208,4 @@ public boolean isClient() {
public SSLContext context() {
return ctx;
}
-
- /**
- * Generates a key specification for an (encrypted) private key.
- *
- * @param password characters, if {@code null} or empty an unencrypted key is assumed
- * @param key bytes of the DER encoded private key
- *
- * @return a key specification
- *
- * @throws IOException if parsing {@code key} fails
- * @throws NoSuchAlgorithmException if the algorithm used to encrypt {@code key} is unkown
- * @throws NoSuchPaddingException if the padding scheme specified in the decryption algorithm is unkown
- * @throws InvalidKeySpecException if the decryption key based on {@code password} cannot be generated
- * @throws InvalidKeyException if the decryption key based on {@code password} cannot be used to decrypt
- * {@code key}
- * @throws InvalidAlgorithmParameterException if decryption algorithm parameters are somehow faulty
- */
- private static PKCS8EncodedKeySpec generateKeySpec(char[] password, byte[] key)
- throws IOException, NoSuchAlgorithmException, NoSuchPaddingException, InvalidKeySpecException,
- InvalidKeyException, InvalidAlgorithmParameterException {
-
- if (password == null || password.length == 0) {
- return new PKCS8EncodedKeySpec(key);
- }
-
- EncryptedPrivateKeyInfo encryptedPrivateKeyInfo = new EncryptedPrivateKeyInfo(key);
- SecretKeyFactory keyFactory = SecretKeyFactory.getInstance(encryptedPrivateKeyInfo.getAlgName());
- PBEKeySpec pbeKeySpec = new PBEKeySpec(password);
- SecretKey pbeKey = keyFactory.generateSecret(pbeKeySpec);
-
- Cipher cipher = Cipher.getInstance(encryptedPrivateKeyInfo.getAlgName());
- cipher.init(Cipher.DECRYPT_MODE, pbeKey, encryptedPrivateKeyInfo.getAlgParameters());
-
- return encryptedPrivateKeyInfo.getKeySpec(cipher);
- }
}
diff --git a/handler/src/main/java/io/netty/handler/ssl/SslContext.java b/handler/src/main/java/io/netty/handler/ssl/SslContext.java
index 494c4f25d72..38abd598e77 100644
--- a/handler/src/main/java/io/netty/handler/ssl/SslContext.java
+++ b/handler/src/main/java/io/netty/handler/ssl/SslContext.java
@@ -23,6 +23,8 @@
import java.io.File;
import java.util.List;
+import javax.net.ssl.KeyManager;
+import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLException;
@@ -176,11 +178,50 @@ public static SslContext newServerContext(
* {@code 0} to use the default value.
* @return a new server-side {@link SslContext}
*/
- public static SslContext newServerContext(
- SslProvider provider,
+ public static SslContext newServerContext(SslProvider provider,
File certChainFile, File keyFile, String keyPassword,
Iterable<String> ciphers, CipherSuiteFilter cipherFilter, ApplicationProtocolConfig apn,
long sessionCacheSize, long sessionTimeout) throws SSLException {
+ return newServerContext(provider, null, null, certChainFile, keyFile, keyPassword, null,
+ ciphers, cipherFilter, apn, sessionCacheSize, sessionTimeout);
+ }
+
+ /**
+ * Creates a new server-side {@link SslContext}.
+ * @param provider the {@link SslContext} implementation to use.
+ * {@code null} to use the current default one.
+ * @param trustCertChainFile an X.509 certificate chain file in PEM format.
+ * This provides the certificate chains used for mutual authentication.
+ * {@code null} to use the system default
+ * @param trustManagerFactory the {@link TrustManagerFactory} that provides the {@link TrustManager}s
+ * that verifies the certificates sent from clients.
+ * {@code null} to use the default or the results of parsing {@code trustCertChainFile}.
+ * This parameter is ignored if {@code provider} is not {@link SslProvider#JDK}.
+ * @param keyCertChainFile an X.509 certificate chain file in PEM format
+ * @param keyFile a PKCS#8 private key file in PEM format
+ * @param keyPassword the password of the {@code keyFile}.
+ * {@code null} if it's not password-protected.
+ * @param keyManagerFactory the {@link KeyManagerFactory} that provides the {@link KeyManager}s
+ * that is used to encrypt data being sent to clients.
+ * {@code null} to use the default or the results of parsing
+ * {@code keyCertChainFile} and {@code keyFile}.
+ * This parameter is ignored if {@code provider} is not {@link SslProvider#JDK}.
+ * @param ciphers the cipher suites to enable, in the order of preference.
+ * {@code null} to use the default cipher suites.
+ * @param cipherFilter a filter to apply over the supplied list of ciphers
+ * Only required if {@code provider} is {@link SslProvider#JDK}
+ * @param apn Provides a means to configure parameters related to application protocol negotiation.
+ * @param sessionCacheSize the size of the cache used for storing SSL session objects.
+ * {@code 0} to use the default value.
+ * @param sessionTimeout the timeout for the cached SSL session objects, in seconds.
+ * {@code 0} to use the default value.
+ * @return a new server-side {@link SslContext}
+ */
+ public static SslContext newServerContext(SslProvider provider,
+ File trustCertChainFile, TrustManagerFactory trustManagerFactory,
+ File keyCertChainFile, File keyFile, String keyPassword, KeyManagerFactory keyManagerFactory,
+ Iterable<String> ciphers, CipherSuiteFilter cipherFilter, ApplicationProtocolConfig apn,
+ long sessionCacheSize, long sessionTimeout) throws SSLException {
if (provider == null) {
provider = OpenSsl.isAvailable()? SslProvider.OPENSSL : SslProvider.JDK;
@@ -189,11 +230,14 @@ public static SslContext newServerContext(
switch (provider) {
case JDK:
return new JdkSslServerContext(
- certChainFile, keyFile, keyPassword,
- ciphers, cipherFilter, apn, sessionCacheSize, sessionTimeout);
+ trustCertChainFile, trustManagerFactory, keyCertChainFile, keyFile, keyPassword,
+ keyManagerFactory, ciphers, cipherFilter, apn, sessionCacheSize, sessionTimeout);
case OPENSSL:
+ if (trustCertChainFile != null) {
+ throw new UnsupportedOperationException("OpenSSL provider does not support mutual authentication");
+ }
return new OpenSslServerContext(
- certChainFile, keyFile, keyPassword,
+ keyCertChainFile, keyFile, keyPassword,
ciphers, apn, sessionCacheSize, sessionTimeout);
default:
throw new Error(provider.toString());
@@ -359,19 +403,61 @@ public static SslContext newClientContext(
*
* @return a new client-side {@link SslContext}
*/
- public static SslContext newClientContext(
- SslProvider provider,
+ public static SslContext newClientContext(SslProvider provider,
File certChainFile, TrustManagerFactory trustManagerFactory,
Iterable<String> ciphers, CipherSuiteFilter cipherFilter, ApplicationProtocolConfig apn,
long sessionCacheSize, long sessionTimeout) throws SSLException {
+ return newClientContext(provider, certChainFile, trustManagerFactory, null, null, null, null,
+ ciphers, cipherFilter, apn, sessionCacheSize, sessionTimeout);
+ }
+
+ /**
+ * Creates a new client-side {@link SslContext}.
+ * @param provider the {@link SslContext} implementation to use.
+ * {@code null} to use the current default one.
+ * @param trustCertChainFile an X.509 certificate chain file in PEM format.
+ * {@code null} to use the system default
+ * @param trustManagerFactory the {@link TrustManagerFactory} that provides the {@link TrustManager}s
+ * that verifies the certificates sent from servers.
+ * {@code null} to use the default or the results of parsing {@code trustCertChainFile}.
+ * This parameter is ignored if {@code provider} is not {@link SslProvider#JDK}.
+ * @param keyCertChainFile an X.509 certificate chain file in PEM format.
+ * This provides the public key for mutual authentication.
+ * {@code null} to use the system default
+ * @param keyFile a PKCS#8 private key file in PEM format.
+ * This provides the private key for mutual authentication.
+ * {@code null} for no mutual authentication.
+ * @param keyPassword the password of the {@code keyFile}.
+ * {@code null} if it's not password-protected.
+ * Ignored if {@code keyFile} is {@code null}.
+ * @param keyManagerFactory the {@link KeyManagerFactory} that provides the {@link KeyManager}s
+ * that is used to encrypt data being sent to servers.
+ * {@code null} to use the default or the results of parsing
+ * {@code keyCertChainFile} and {@code keyFile}.
+ * This parameter is ignored if {@code provider} is not {@link SslProvider#JDK}.
+ * @param ciphers the cipher suites to enable, in the order of preference.
+ * {@code null} to use the default cipher suites.
+ * @param cipherFilter a filter to apply over the supplied list of ciphers
+ * @param apn Provides a means to configure parameters related to application protocol negotiation.
+ * @param sessionCacheSize the size of the cache used for storing SSL session objects.
+ * {@code 0} to use the default value.
+ * @param sessionTimeout the timeout for the cached SSL session objects, in seconds.
+ * {@code 0} to use the default value.
+ *
+ * @return a new client-side {@link SslContext}
+ */
+ public static SslContext newClientContext(SslProvider provider,
+ File trustCertChainFile, TrustManagerFactory trustManagerFactory,
+ File keyCertChainFile, File keyFile, String keyPassword, KeyManagerFactory keyManagerFactory,
+ Iterable<String> ciphers, CipherSuiteFilter cipherFilter, ApplicationProtocolConfig apn,
+ long sessionCacheSize, long sessionTimeout) throws SSLException {
if (provider != null && provider != SslProvider.JDK) {
throw new SSLException("client context unsupported for: " + provider);
}
- return new JdkSslClientContext(
- certChainFile, trustManagerFactory,
- ciphers, cipherFilter, apn, sessionCacheSize, sessionTimeout);
+ return new JdkSslClientContext(trustCertChainFile, trustManagerFactory, keyCertChainFile, keyFile, keyPassword,
+ keyManagerFactory, ciphers, cipherFilter, apn, sessionCacheSize, sessionTimeout);
}
SslContext() { }
| diff --git a/handler/src/test/java/io/netty/handler/ssl/JdkSslEngineTest.java b/handler/src/test/java/io/netty/handler/ssl/JdkSslEngineTest.java
index 3b3f60b59a7..41b2da74364 100644
--- a/handler/src/test/java/io/netty/handler/ssl/JdkSslEngineTest.java
+++ b/handler/src/test/java/io/netty/handler/ssl/JdkSslEngineTest.java
@@ -41,6 +41,7 @@
import io.netty.util.NetUtil;
import io.netty.util.concurrent.Future;
+import java.io.File;
import java.net.InetSocketAddress;
import java.security.cert.CertificateException;
import java.util.List;
@@ -315,6 +316,57 @@ public String select(List<String> protocols) {
}
}
+ @Test
+ public void testMutualAuthSameCerts() throws Exception {
+ mySetupMutualAuth(new File(getClass().getResource("test_unencrypted.pem").getFile()),
+ new File(getClass().getResource("test.crt").getFile()),
+ null);
+ runTest(null);
+ }
+
+ @Test
+ public void testMutualAuthDiffCerts() throws Exception {
+ File serverKeyFile = new File(getClass().getResource("test_encrypted.pem").getFile());
+ File serverCrtFile = new File(getClass().getResource("test.crt").getFile());
+ String serverKeyPassword = "12345";
+ File clientKeyFile = new File(getClass().getResource("test2_encrypted.pem").getFile());
+ File clientCrtFile = new File(getClass().getResource("test2.crt").getFile());
+ String clientKeyPassword = "12345";
+ mySetupMutualAuth(clientCrtFile, serverKeyFile, serverCrtFile, serverKeyPassword,
+ serverCrtFile, clientKeyFile, clientCrtFile, clientKeyPassword);
+ runTest(null);
+ }
+
+ @Test
+ public void testMutualAuthDiffCertsServerFailure() throws Exception {
+ File serverKeyFile = new File(getClass().getResource("test_encrypted.pem").getFile());
+ File serverCrtFile = new File(getClass().getResource("test.crt").getFile());
+ String serverKeyPassword = "12345";
+ File clientKeyFile = new File(getClass().getResource("test2_encrypted.pem").getFile());
+ File clientCrtFile = new File(getClass().getResource("test2.crt").getFile());
+ String clientKeyPassword = "12345";
+ // Client trusts server but server only trusts itself
+ mySetupMutualAuth(serverCrtFile, serverKeyFile, serverCrtFile, serverKeyPassword,
+ serverCrtFile, clientKeyFile, clientCrtFile, clientKeyPassword);
+ assertTrue(serverLatch.await(2, TimeUnit.SECONDS));
+ assertTrue(serverException instanceof SSLHandshakeException);
+ }
+
+ @Test
+ public void testMutualAuthDiffCertsClientFailure() throws Exception {
+ File serverKeyFile = new File(getClass().getResource("test_unencrypted.pem").getFile());
+ File serverCrtFile = new File(getClass().getResource("test.crt").getFile());
+ String serverKeyPassword = null;
+ File clientKeyFile = new File(getClass().getResource("test2_unencrypted.pem").getFile());
+ File clientCrtFile = new File(getClass().getResource("test2.crt").getFile());
+ String clientKeyPassword = null;
+ // Server trusts client but client only trusts itself
+ mySetupMutualAuth(clientCrtFile, serverKeyFile, serverCrtFile, serverKeyPassword,
+ clientCrtFile, clientKeyFile, clientCrtFile, clientKeyPassword);
+ assertTrue(clientLatch.await(2, TimeUnit.SECONDS));
+ assertTrue(clientException instanceof SSLHandshakeException);
+ }
+
private void mySetup(JdkApplicationProtocolNegotiator apn) throws InterruptedException, SSLException,
CertificateException {
mySetup(apn, apn);
@@ -385,6 +437,82 @@ public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws E
clientChannel = ccf.channel();
}
+ private void mySetupMutualAuth(File keyFile, File crtFile, String keyPassword)
+ throws SSLException, CertificateException, InterruptedException {
+ mySetupMutualAuth(crtFile, keyFile, crtFile, keyPassword, crtFile, keyFile, crtFile, keyPassword);
+ }
+
+ private void mySetupMutualAuth(
+ File servertTrustCrtFile, File serverKeyFile, File serverCrtFile, String serverKeyPassword,
+ File clientTrustCrtFile, File clientKeyFile, File clientCrtFile, String clientKeyPassword)
+ throws InterruptedException, SSLException, CertificateException {
+ serverSslCtx = new JdkSslServerContext(servertTrustCrtFile, null,
+ serverCrtFile, serverKeyFile, serverKeyPassword, null,
+ null, IdentityCipherSuiteFilter.INSTANCE, (ApplicationProtocolConfig) null, 0, 0);
+ clientSslCtx = new JdkSslClientContext(clientTrustCrtFile, null,
+ clientCrtFile, clientKeyFile, clientKeyPassword, null,
+ null, IdentityCipherSuiteFilter.INSTANCE, (ApplicationProtocolConfig) null, 0, 0);
+
+ serverConnectedChannel = null;
+ sb = new ServerBootstrap();
+ cb = new Bootstrap();
+
+ sb.group(new NioEventLoopGroup(), new NioEventLoopGroup());
+ sb.channel(NioServerSocketChannel.class);
+ sb.childHandler(new ChannelInitializer<Channel>() {
+ @Override
+ protected void initChannel(Channel ch) throws Exception {
+ ChannelPipeline p = ch.pipeline();
+ SSLEngine engine = serverSslCtx.newEngine(ch.alloc());
+ engine.setUseClientMode(false);
+ engine.setNeedClientAuth(true);
+ p.addLast(new SslHandler(engine));
+ p.addLast(new MessageDelegatorChannelHandler(serverReceiver, serverLatch));
+ p.addLast(new ChannelHandlerAdapter() {
+ @Override
+ public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
+ if (cause.getCause() instanceof SSLHandshakeException) {
+ serverException = cause.getCause();
+ serverLatch.countDown();
+ } else {
+ ctx.fireExceptionCaught(cause);
+ }
+ }
+ });
+ serverConnectedChannel = ch;
+ }
+ });
+
+ cb.group(new NioEventLoopGroup());
+ cb.channel(NioSocketChannel.class);
+ cb.handler(new ChannelInitializer<Channel>() {
+ @Override
+ protected void initChannel(Channel ch) throws Exception {
+ ChannelPipeline p = ch.pipeline();
+ p.addLast(clientSslCtx.newHandler(ch.alloc()));
+ p.addLast(new MessageDelegatorChannelHandler(clientReceiver, clientLatch));
+ p.addLast(new ChannelHandlerAdapter() {
+ @Override
+ public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
+ if (cause.getCause() instanceof SSLHandshakeException) {
+ clientException = cause.getCause();
+ clientLatch.countDown();
+ } else {
+ ctx.fireExceptionCaught(cause);
+ }
+ }
+ });
+ }
+ });
+
+ serverChannel = sb.bind(new InetSocketAddress(0)).sync().channel();
+ int port = ((InetSocketAddress) serverChannel.localAddress()).getPort();
+
+ ChannelFuture ccf = cb.connect(new InetSocketAddress(NetUtil.LOCALHOST, port));
+ assertTrue(ccf.awaitUninterruptibly().isSuccess());
+ clientChannel = ccf.channel();
+ }
+
private void runTest() throws Exception {
runTest(APPLICATION_LEVEL_PROTOCOL);
}
@@ -395,8 +523,10 @@ private void runTest(String expectedApplicationProtocol) throws Exception {
try {
writeAndVerifyReceived(clientMessage.retain(), clientChannel, serverLatch, serverReceiver);
writeAndVerifyReceived(serverMessage.retain(), serverConnectedChannel, clientLatch, clientReceiver);
- verifyApplicationLevelProtocol(clientChannel, expectedApplicationProtocol);
- verifyApplicationLevelProtocol(serverConnectedChannel, expectedApplicationProtocol);
+ if (expectedApplicationProtocol != null) {
+ verifyApplicationLevelProtocol(clientChannel, expectedApplicationProtocol);
+ verifyApplicationLevelProtocol(serverConnectedChannel, expectedApplicationProtocol);
+ }
} finally {
clientMessage.release();
serverMessage.release();
diff --git a/handler/src/test/java/io/netty/handler/ssl/JdkSslServerContextTest.java b/handler/src/test/java/io/netty/handler/ssl/JdkSslServerContextTest.java
index c196b40f28a..9eccd91b03d 100644
--- a/handler/src/test/java/io/netty/handler/ssl/JdkSslServerContextTest.java
+++ b/handler/src/test/java/io/netty/handler/ssl/JdkSslServerContextTest.java
@@ -31,6 +31,14 @@ public void testJdkSslServerWithEncryptedPrivateKey() throws SSLException {
new JdkSslServerContext(crtFile, keyFile, "12345");
}
+ @Test
+ public void testJdkSslServerWithEncryptedPrivateKey2() throws SSLException {
+ File keyFile = new File(getClass().getResource("test2_encrypted.pem").getFile());
+ File crtFile = new File(getClass().getResource("test2.crt").getFile());
+
+ new JdkSslServerContext(crtFile, keyFile, "12345");
+ }
+
@Test
public void testJdkSslServerWithUnencryptedPrivateKey() throws SSLException {
File keyFile = new File(getClass().getResource("test_unencrypted.pem").getFile());
diff --git a/handler/src/test/resources/io/netty/handler/ssl/test2.crt b/handler/src/test/resources/io/netty/handler/ssl/test2.crt
new file mode 100644
index 00000000000..7f4b30d446d
--- /dev/null
+++ b/handler/src/test/resources/io/netty/handler/ssl/test2.crt
@@ -0,0 +1,18 @@
+-----BEGIN CERTIFICATE-----
+MIIC6DCCAdACCQCp0Mn/2UCl2TANBgkqhkiG9w0BAQsFADA2MTQwMgYDVQQDDCtj
+ZmYyNGEwY2I4NGFmNjExZDdhODFjMGI4MDY4OTA2OC5uZXR0eS50ZXN0MB4XDTE0
+MTAxNzE4NDczM1oXDTE0MTExNjE4NDczM1owNjE0MDIGA1UEAwwrY2ZmMjRhMGNi
+ODRhZjYxMWQ3YTgxYzBiODA2ODkwNjgubmV0dHkudGVzdDCCASIwDQYJKoZIhvcN
+AQEBBQADggEPADCCAQoCggEBALgddI5XJcUK45ONr4QTfZZxbJJeOYKPEWVIWK/P
+Wz6EJXt3hDdpmnaRUKAv4mMIFlxWVkxTqa/dB3hjcm5hPvNgPAUaEWzMtGd32p95
+sJzbxiWvxhf5rqF0n1Zk5KX+EcasiCupNg3TL7gfTSSZfaGSWf460oaCS6WCU4X9
+XTUhys7N5BFM+uQLE048CnkBCO1An980Fau/0+BLXgW+iJC6XWTJbpZ+r7rDpBKl
++HmQQ5tgGlCZcnhmS9bzYT3hoag6JkDoIwbFsVOkwemxZGb8GsGE74/rrzUJ9MdR
+/ETCA2km1na6ESst0/wm0qD3clJahP8xEoaJ+W1TFGizRWkCAwEAATANBgkqhkiG
+9w0BAQsFAAOCAQEAmeGPRWXzu7+f20ZJA/u6WjcmsUhSTtt0YcBNtii4Pm0snIE9
+UyRBGlvS2uFHTilD7MOYOHX6ATlHZAsfegpiPE5jCvE4CzFPpQaVAT/sKNtsWH43
+ZQHn4NK1DAFIVDysO3AGGhL0mub8iWEYHs81+6tSSFlbDFqwYtw7ueerhVLUIaIa
+S0SvtXUVitX2LzMlYCEto2s50fcqOnj8uve/dG8BmiwR1DqqVKkAWAXf8uGhwwD+
+659E3g9vNz6QUchd8K/TIv52i8EDuWu3FElohmfFUXu43A+Z+lbuDrEW3suqTC3y
+0JIa2DfHWA7WTyF4UD32aAC+U6BLIOA6WoPi1Q==
+-----END CERTIFICATE-----
diff --git a/handler/src/test/resources/io/netty/handler/ssl/test2_encrypted.pem b/handler/src/test/resources/io/netty/handler/ssl/test2_encrypted.pem
new file mode 100644
index 00000000000..a17f9dc27eb
--- /dev/null
+++ b/handler/src/test/resources/io/netty/handler/ssl/test2_encrypted.pem
@@ -0,0 +1,29 @@
+-----BEGIN ENCRYPTED PRIVATE KEY-----
+MIIE6jAcBgoqhkiG9w0BDAEDMA4ECCqT2dycwPtCAgIIAASCBMg/Z60Q85kVL5kv
+q8WIIY9tbXo/2Q+6rspxdit9SRd86MV9QRdfZ5Vjwt0JTa+Rd1gMaNK4PySW23bq
+F2+dD0sjVBcE24Qg0h4BcmL+YBdTftBfk7NDH/rHhsew7DZru9fdDvkO9bV3jXIz
+fARW9U7JIfgAi6CfJ8Q1PS7sg6dVtrcjMRIie32x0TSbZrn+h9AaXpLHsC8oXiyY
+BhWe4i9B7PobyJ0r/CTBFhbfUCGwRyHac0+bZXvlcwX9wy3W7jagc6RDlznOpowU
+FP35CQGeKsJ9WD+yy5MU8X8M8v+eeaJk4oX+PSWJX669CxbYocVP/+LUtOXpe+4h
+7yMmVNLUtsgBlY6tNsU0XBQkrqqb+voSxVBEVZ1WTKgLWsE/EiQ2P2GU8Gnr+J6c
+/yHxw0D4q9J3jV40SiuXQlgFwlf8u9FuVjOcGxTidfKXyvNqPKqgkf9QD+7E09q3
+JQoNbI/A8BXrpdx9h87Gt0TblPwVJP2nf5whig9W62R4y9SWybUUNr2MFNkvEfKe
+1QK8isf+HlvIO+VBYi4jof9HkWLwnAszlkpC+k1cOiSjNRn8QyLzsqX7A/VuS6W8
+6kKeND4yRNA4b7rfQqhyGg7gBwiwN+22UF6SKiikX4TB1ZyLdzlbPe0L+X/Gq0Jz
+Kf+8/slgzB5K9WpDtKsARH/lRPAx1rcascvFxMuCJL5O9MO9l4xWDJor71WgPC2N
+KwXxvEW3Kyvs3pSgWc8MC0BKcD9WIAahAlAVmSQBxDNWvJlGTgUVhzPqan7h03Fd
+nWAxSn315ObfK9rjbqUBO9x/nkSZFS9nApmeiWkOIwVzgNfAfb9md07TYyC/rpK3
+nGIsThekqqQULMQaAPmEFqUj6A/0KlpBj1gZwddYvVvEL/MuQO0QBdz4n/OncxYP
+TVoQEqXsndmNQnkuk2Kr4FACV2M9rbr84HJUIZVGGVSM5h80GrRqK03qpTzM8Nkc
+e04R4KDpLDKHm+G4xYZbbraIGXNTkhxTqdNA2FyjJWFurmpQyFay55vC6WBFBVNA
+BGVIqD1/9K3dJJGlpiHyymRCK9YGvflZlSr7dm7PW7PPEthwTijbAHkABOKsFSiu
+xaUj027WIVuDb5FFIAaF3Wmn4GFXvsSH+8L95CQuXGB8J/5Buo+/Hg6S7PeDwrf+
+qNRAfg9vxo+AZOWpWfGEYGHQeX6BxVjdffar9RwL99cele4h2FgBLtIuAXvgLPyx
+b+MIjDliCe1Nqx0PCCuaB1xRnaKiwbl7itDidzI8BUAaFcKxbBH2lpr44+vYPVHb
+70Xrw55RLvrVYKAcaZgryTNOvbRatifJIMg3kf8V++2rwUMoZ+DQfXin/C4S/2/b
+c6I1OvYaGxmI1YiI6qSpOryDSzTNlDEWcdh5feuixiP5RbyaQFswq2fH0hsWWHS4
+OsCeqT0nm5vd1CdUFQJ4Nuh/TTdgCAVKk5yJZJvH2BX77I2d4T0ZRGHLDKUm8P0E
+n6ntrMqLFR+QooONAZg0DTaxvbsCvaupRJCn9NgiwtXyYJKbvf5F8NEOe57NoGwd
+LqQ332mVTuJ1DiqnChLoe7Mz7OY21RsTa/AK5Q/onClvBATrLD0ynK4WiLn4+hGs
+HK5t3audgdnrLxs4UoA=
+-----END ENCRYPTED PRIVATE KEY-----
diff --git a/handler/src/test/resources/io/netty/handler/ssl/test2_unencrypted.pem b/handler/src/test/resources/io/netty/handler/ssl/test2_unencrypted.pem
new file mode 100644
index 00000000000..209a9c05bec
--- /dev/null
+++ b/handler/src/test/resources/io/netty/handler/ssl/test2_unencrypted.pem
@@ -0,0 +1,28 @@
+-----BEGIN PRIVATE KEY-----
+MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQC4HXSOVyXFCuOT
+ja+EE32WcWySXjmCjxFlSFivz1s+hCV7d4Q3aZp2kVCgL+JjCBZcVlZMU6mv3Qd4
+Y3JuYT7zYDwFGhFszLRnd9qfebCc28Ylr8YX+a6hdJ9WZOSl/hHGrIgrqTYN0y+4
+H00kmX2hkln+OtKGgkulglOF/V01IcrOzeQRTPrkCxNOPAp5AQjtQJ/fNBWrv9Pg
+S14FvoiQul1kyW6Wfq+6w6QSpfh5kEObYBpQmXJ4ZkvW82E94aGoOiZA6CMGxbFT
+pMHpsWRm/BrBhO+P6681CfTHUfxEwgNpJtZ2uhErLdP8JtKg93JSWoT/MRKGiflt
+UxRos0VpAgMBAAECggEAYiTZd/L+oD3AuGwjrp0RKjwGKzPtJiqLlFjvZbB8LCQX
+Muyv3zX8781glDNSU4YBHXGsiP1kC+ofzE3+ttZBz0xyUinmNgAc/rbGJJKi0crZ
+okdDqo4fR9O6CDy6Ib4Azc40vEl0FgSIgHa3EZZ8gL9aF4pVpPwZxP1m9prrr6EP
+SOlJP7rJNA/sTpuy0gz+UAu2Xf53pdkREUW7E2uzIGwrHxQVserN7Xxtft/zT79/
+oIHF09pHfiqE8a2TuVvVavjwV6787PSewFs7j8iKId9bpo1O7iqvj0UKOE+/63Lf
+1pWRn7lRGS9ACw8EoyTY/M0njUbDEfaObJUzt08pjQKBgQDevZLRQjbGDtKOfQe6
+PKb/6PeFEE466NPFKH1bEz26VmC5vzF8U7lk71S11Dma51+vbOENzS5VlqOWqO+N
+CyXTzb8a0rHXXUEP4+V6CazesTOEoBKViDswt2ffJfQYoCOFfKrcKq0j1Ps8Svhq
+yzcMjAfX8eKIDWxK3qk+09SBtwKBgQDTm2Te4ENYwV5be+Z5L5See8gHNU5w3RtU
+koO54TYBeJOTsTTtGDqEg60MoWIcx69OAJlHwTp5nPV5fhrjB8I9WUmI+2sPK7sU
+OmhV/QzPjr6HW7fpbvbZ6fT+/Ay3aREa+qsJMypXsoqML1/fAeBno3hvHQt5Neog
+leu3m0/x3wKBgQCCc8b8FeqcfuvkleejtIgeU2Q8I3ud1uTIkNkyMQezDYni385s
+wWBQdDdJsvz181LAHGWGvsfHSs2OnGyIT6Ic9WBaplGQD8beNpwcqHP9jQzePR4F
+Q99evdvw/nqCva9wK76p6bizxrZJ7qKlcVVRXOXvHHSPOEVXaCb5a/kG6wKBgGN6
+2G8XC1I8hfmIRA+Q2NOw6ZbJ7riMmf6mapsGT3ddkjOKyZD1JP2LUd1wOUnCbp3D
+FkxvgOgPbC/Toxw8V4qz4Sgu2mPlcSvPUaGrN0yUlOnZqpppek9z96OwJuJK2KnQ
+Unweu7dCznOdCfszTKYsacAC7ZPsTsdG8+v7bhgNAoGBAL8wlTp3tfQ2iuGDnQaf
+268BBUtqp2qPlGPXCdkc5XXbnHXLFY/UYGw27Vh+UNW8UORTFYEb8XPvUxB4q2Mx
+8ZZdcjFB1J4dM2+KGr51CEuzzpFuhFU8Nn4D/hcfYNKg733gTeSoI0Gs2Y9R+bDo
++cA9UxmyFSgS+Dq/7BOmPCDI
+-----END PRIVATE KEY-----
| train | train | 2014-10-31T00:39:31 | 2014-10-16T16:32:49Z | Scottmitch | val |
netty/netty/3057_3058 | netty/netty | netty/netty/3057 | netty/netty/3058 | [
"timestamp(timedelta=3773.0, similarity=0.886181733809813)"
] | a9bd9699a49ee79602414e9ee9a88da42bcd371b | 204b45c8e1c61544283cdddc34fc70470d3f661f | [
"Hi @Lekanich and thanks for reaching out. We will need a few more details to narrow in on the issue:\n- What version of netty are you using?\n- What is your system configuration (`mvn --version`)?\n- Are you using openssl or JDK providers for SSL?\n- Where is the location in the code that the buffer is not being ... | [
"use `newDirectIn.internalNioBuffer(0, newDirectIn.readableBytes())`. This way you also do not need to reset the reader index.\n",
"Good call. Done.\n"
] | 2014-10-27T17:49:49Z | [
"defect"
] | SslHandler - DirectByteBuffer - OutOfMemory | I tested the performance of my project with and without SSL (configuration: 25 threads that each run 10 more threads (total 250); each of them connects to the server to download the file).
The test without Ssl completed without error, but the test with Ssl threw an <code>OutOfMemoryError</code> when SslHandler tried to create a new DirectByteBuffer in its <code>wrap()</code> method (<code>Bits.reserveMemory</code>).
System Monitor shows memory usage 10 times higher than in the same test without Ssl.
Perhaps you need to add a call to <code>Util.free(ByteBuffer)</code> for each sent <code>DirectByteBuffer</code> and thus release the reserved memory? I haven't found a place where this is done to free the direct memory.
PS (In my test I send and receive over 5GB of data)
| [
"handler/src/main/java/io/netty/handler/ssl/SslHandler.java"
] | [
"handler/src/main/java/io/netty/handler/ssl/SslHandler.java"
] | [] | diff --git a/handler/src/main/java/io/netty/handler/ssl/SslHandler.java b/handler/src/main/java/io/netty/handler/ssl/SslHandler.java
index 40ef0ce5637..9ea071d5292 100644
--- a/handler/src/main/java/io/netty/handler/ssl/SslHandler.java
+++ b/handler/src/main/java/io/netty/handler/ssl/SslHandler.java
@@ -436,6 +436,7 @@ public void flush(ChannelHandlerContext ctx) throws Exception {
private void wrap(ChannelHandlerContext ctx, boolean inUnwrap) throws SSLException {
ByteBuf out = null;
ChannelPromise promise = null;
+ ByteBufAllocator alloc = ctx.alloc();
try {
for (;;) {
Object msg = pendingUnencryptedWrites.current();
@@ -453,7 +454,7 @@ private void wrap(ChannelHandlerContext ctx, boolean inUnwrap) throws SSLExcepti
out = allocateOutNetBuf(ctx, buf.readableBytes());
}
- SSLEngineResult result = wrap(engine, buf, out);
+ SSLEngineResult result = wrap(alloc, engine, buf, out);
if (!buf.isReadable()) {
promise = pendingUnencryptedWrites.remove();
@@ -519,12 +520,13 @@ private void finishWrap(ChannelHandlerContext ctx, ByteBuf out, ChannelPromise p
private void wrapNonAppData(ChannelHandlerContext ctx, boolean inUnwrap) throws SSLException {
ByteBuf out = null;
+ ByteBufAllocator alloc = ctx.alloc();
try {
for (;;) {
if (out == null) {
out = allocateOutNetBuf(ctx, 0);
}
- SSLEngineResult result = wrap(engine, Unpooled.EMPTY_BUFFER, out);
+ SSLEngineResult result = wrap(alloc, engine, Unpooled.EMPTY_BUFFER, out);
if (result.bytesProduced() > 0) {
ctx.write(out);
@@ -574,26 +576,37 @@ private void wrapNonAppData(ChannelHandlerContext ctx, boolean inUnwrap) throws
}
}
- private SSLEngineResult wrap(SSLEngine engine, ByteBuf in, ByteBuf out) throws SSLException {
- ByteBuffer in0 = in.nioBuffer();
- if (!in0.isDirect()) {
- ByteBuffer newIn0 = ByteBuffer.allocateDirect(in0.remaining());
- newIn0.put(in0).flip();
- in0 = newIn0;
- }
+ private SSLEngineResult wrap(ByteBufAllocator alloc, SSLEngine engine, ByteBuf in, ByteBuf out)
+ throws SSLException {
+ ByteBuf newDirectIn = null;
+ try {
+ final ByteBuffer in0;
+ if (in.isDirect()) {
+ in0 = in.nioBuffer();
+ } else {
+ int readableBytes = in.readableBytes();
+ newDirectIn = alloc.directBuffer(readableBytes);
+ newDirectIn.writeBytes(in, in.readerIndex(), readableBytes);
+ in0 = newDirectIn.internalNioBuffer(0, readableBytes);
+ }
- for (;;) {
- ByteBuffer out0 = out.nioBuffer(out.writerIndex(), out.writableBytes());
- SSLEngineResult result = engine.wrap(in0, out0);
- in.skipBytes(result.bytesConsumed());
- out.writerIndex(out.writerIndex() + result.bytesProduced());
+ for (;;) {
+ ByteBuffer out0 = out.nioBuffer(out.writerIndex(), out.writableBytes());
+ SSLEngineResult result = engine.wrap(in0, out0);
+ in.skipBytes(result.bytesConsumed());
+ out.writerIndex(out.writerIndex() + result.bytesProduced());
- switch (result.getStatus()) {
+ switch (result.getStatus()) {
case BUFFER_OVERFLOW:
out.ensureWritable(maxPacketBufferSize);
break;
default:
return result;
+ }
+ }
+ } finally {
+ if (newDirectIn != null) {
+ newDirectIn.release();
}
}
}
| null | train | train | 2014-10-25T14:48:34 | 2014-10-27T14:07:32Z | Lekanich | val |
netty/netty/3072_3073 | netty/netty | netty/netty/3072 | netty/netty/3073 | [
"timestamp(timedelta=18.0, similarity=0.9003473515919903)"
] | 1914b77c71ff820b8c4ae0b5a3a06091b7fc82d1 | db05c5415aae3e61109b8fca439933ee9ee3fbc6 | [
"@normanmaurer and @trustin - Thoughts?\n",
"PR https://github.com/netty/netty/pull/3073 demonstrates using the `wantsDirectBuffer` variable to prevent requiring a direct byte buffer on every wrap. This could be a big savings if non-direct byte buffers are used and direct byte buffers are not required (i.e. JDK ... | [
"This sounds legit... @trustin agree ? \n"
] | 2014-10-29T18:13:18Z | [
"improvement"
] | SslHandler wrap unconditionally copying to direct buffer | In light of a recent bug fix (https://github.com/netty/netty/pull/3058) it became evident that the SslHandler wrap method was (and still is) unconditionally requiring a direct ByteBuffer for the input buffer to the SSLEngine.wrap() operation [here](https://github.com/netty/netty/blob/4.0/handler/src/main/java/io/netty/handler/ssl/SslHandler.java#L584). Is it necessary (or even beneficial) to always do the extra allocation and copy? Should this conversion be conditional based upon the [wantsDirectBuffer](https://github.com/netty/netty/blob/4.0/handler/src/main/java/io/netty/handler/ssl/SslHandler.java#L187) boolean member variable, or some other condition?
| [
"handler/src/main/java/io/netty/handler/ssl/SslHandler.java"
] | [
"handler/src/main/java/io/netty/handler/ssl/SslHandler.java"
] | [] | diff --git a/handler/src/main/java/io/netty/handler/ssl/SslHandler.java b/handler/src/main/java/io/netty/handler/ssl/SslHandler.java
index 9ea071d5292..3b683ff1e0c 100644
--- a/handler/src/main/java/io/netty/handler/ssl/SslHandler.java
+++ b/handler/src/main/java/io/netty/handler/ssl/SslHandler.java
@@ -182,7 +182,7 @@ public class SslHandler extends ByteToMessageDecoder implements ChannelOutboundH
// BEGIN Platform-dependent flags
/**
- * {@code trus} if and only if {@link SSLEngine} expects a direct buffer.
+ * {@code true} if and only if {@link SSLEngine} expects a direct buffer.
*/
private final boolean wantsDirectBuffer;
/**
@@ -581,7 +581,7 @@ private SSLEngineResult wrap(ByteBufAllocator alloc, SSLEngine engine, ByteBuf i
ByteBuf newDirectIn = null;
try {
final ByteBuffer in0;
- if (in.isDirect()) {
+ if (in.isDirect() || !wantsDirectBuffer) {
in0 = in.nioBuffer();
} else {
int readableBytes = in.readableBytes();
| null | train | train | 2014-10-29T11:48:40 | 2014-10-29T18:05:03Z | Scottmitch | val |
netty/netty/3084_3086 | netty/netty | netty/netty/3084 | netty/netty/3086 | [
"timestamp(timedelta=233.0, similarity=0.9226550345829467)"
] | 8235337d4e82ea8475e15e90789119caea48e159 | d7e145bd01d3bf81523233c4440039803225ccca | [
"@nmittler - FYI. This may be more general problem to whenever the windows sizes are used. We should be handling the case where they can be negative.\n",
"@Scottmitch yup, agreed. I'll take a look.\n",
"Oops, sorry about stealing the assignment. I have a fix I think.\n",
"I'll submit a PR and we can go fro... | [] | 2014-10-31T18:14:13Z | [
"defect"
] | HTTP/2 outbound flow controller negative write size exception | We are not handling the case where the `writableWindow()` can be negative in the `DefaultHttp2OutboundFlowController`, which results in an `IllegalArgumentException` when trying to get a negatively sized buffer.
The `writableWindow()` is negative [here](https://github.com/netty/netty/blob/master/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowController.java#L504) and results in a negative slice [here](https://github.com/netty/netty/blob/master/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowController.java#L656) which generates the following stack trace:
``` bash
java.lang.IllegalArgumentException: maxCapacity: -2356 (expected: >= 0)
at io.netty.buffer.AbstractByteBuf.<init>(AbstractByteBuf.java:51)
at io.netty.buffer.AbstractDerivedByteBuf.<init>(AbstractDerivedByteBuf.java:28)
at io.netty.buffer.SlicedByteBuf.<init>(SlicedByteBuf.java:40)
at io.netty.buffer.AbstractByteBuf.slice(AbstractByteBuf.java:910)
at io.netty.buffer.AbstractByteBuf.readSlice(AbstractByteBuf.java:641)
at io.netty.handler.codec.http2.DefaultHttp2OutboundFlowController$OutboundFlowState$Frame.split(DefaultHttp2OutboundFlowController.java:656)
at io.netty.handler.codec.http2.DefaultHttp2OutboundFlowController$OutboundFlowState.writeBytes(DefaultHttp2OutboundFlowController.java:518)
at io.netty.handler.codec.http2.DefaultHttp2OutboundFlowController$OutboundFlowState.access$500(DefaultHttp2OutboundFlowController.java:379)
at io.netty.handler.codec.http2.DefaultHttp2OutboundFlowController.writeAllowedBytes(DefaultHttp2OutboundFlowController.java:271)
at io.netty.handler.codec.http2.DefaultHttp2OutboundFlowController.writeAllowedBytes(DefaultHttp2OutboundFlowController.java:289)
at io.netty.handler.codec.http2.DefaultHttp2OutboundFlowController.writePendingBytes(DefaultHttp2OutboundFlowController.java:253)
at io.netty.handler.codec.http2.DefaultHttp2OutboundFlowController.updateOutboundWindowSize(DefaultHttp2OutboundFlowController.java:145)
at io.netty.handler.codec.http2.DefaultHttp2ConnectionEncoder.updateOutboundWindowSize(DefaultHttp2ConnectionEncoder.java:437)
at io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder$FrameReadListener.onWindowUpdateRead(DefaultHttp2ConnectionDecoder.java:500)
at io.netty.handler.codec.http2.DefaultHttp2FrameReader.readWindowUpdateFrame(DefaultHttp2FrameReader.java:552)
at io.netty.handler.codec.http2.DefaultHttp2FrameReader.processPayloadState(DefaultHttp2FrameReader.java:243)
at io.netty.handler.codec.http2.DefaultHttp2FrameReader.readFrame(DefaultHttp2FrameReader.java:126)
at io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder.decodeFrame(DefaultHttp2ConnectionDecoder.java:129)
at io.netty.handler.codec.http2.Http2ConnectionHandler.decode(Http2ConnectionHandler.java:372)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:248)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:148)
at io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
at io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:153)
at io.netty.channel.PausableChannelEventExecutor.invokeChannelRead(PausableChannelEventExecutor.java:86)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:390)
at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:936)
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:819)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:248)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:148)
at io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
at io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:153)
at io.netty.channel.PausableChannelEventExecutor.invokeChannelRead(PausableChannelEventExecutor.java:86)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:390)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:897)
at io.netty.channel.epoll.EpollSocketChannel$EpollSocketUnsafe.epollInReady(EpollSocketChannel.java:718)
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:328)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:264)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:122)
at io.netty.util.internal.chmv8.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1412)
at io.netty.util.internal.chmv8.ForkJoinTask.doExec(ForkJoinTask.java:280)
at io.netty.util.internal.chmv8.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:877)
at io.netty.util.internal.chmv8.ForkJoinPool.scan(ForkJoinPool.java:1706)
at io.netty.util.internal.chmv8.ForkJoinPool.runWorker(ForkJoinPool.java:1661)
at io.netty.util.internal.chmv8.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:126)
```
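A minimal, netty-free sketch of the failure mode and of the guard applied in the patch. The names (`writableWindow`, `splitUnguarded`, `splitGuarded`) are hypothetical stand-ins, not netty's classes; `Arrays.copyOfRange` plays the role of `ByteBuf.readSlice`, which likewise rejects a negative length:

```java
import java.util.Arrays;

public class FlowWindowDemo {
    // Models a flow-control window that can go negative, e.g. after the
    // peer shrinks SETTINGS_INITIAL_WINDOW_SIZE below the bytes in flight.
    static int windowSize = 20;

    static int writableWindow() {
        return windowSize;
    }

    /** Buggy version: slicing with a negative length throws. */
    static String splitUnguarded(byte[] frame) {
        int max = writableWindow();
        // copyOfRange throws IllegalArgumentException when from > to,
        // analogous to ByteBuf.readSlice(negativeLength).
        byte[] chunk = Arrays.copyOfRange(frame, 0, max);
        return "wrote " + chunk.length;
    }

    /** Fixed version, mirroring the `maxBytes <= 0` guard from the patch. */
    static String splitGuarded(byte[] frame) {
        int max = writableWindow();
        if (max <= 0) {
            return "deferred"; // nothing is writable while the window is negative
        }
        byte[] chunk = Arrays.copyOfRange(frame, 0, Math.min(max, frame.length));
        return "wrote " + chunk.length;
    }

    public static void main(String[] args) {
        windowSize = -2356; // the negative window from the reported stack trace
        byte[] frame = new byte[10];
        try {
            splitUnguarded(frame);
            System.out.println("no exception");
        } catch (IllegalArgumentException e) {
            System.out.println("IllegalArgumentException"); // what the bug produced
        }
        System.out.println(splitGuarded(frame)); // "deferred" with the fix
    }
}
```

The point of changing `maxBytes == 0` to `maxBytes <= 0` is exactly this: a negative window must be treated the same as an empty one, so the pending frame stays queued until a WINDOW_UPDATE makes the window positive again.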
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowController.java"
] | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowController.java"
] | [
"codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowControllerTest.java"
] | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowController.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowController.java
index e3d55cc7ce4..bfaa70327db 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowController.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowController.java
@@ -147,8 +147,9 @@ public void updateOutboundWindowSize(int streamId, int delta) throws Http2Except
// Update the stream window and write any pending frames for the stream.
OutboundFlowState state = stateOrFail(streamId);
state.incrementStreamWindow(delta);
- state.writeBytes(state.writableWindow());
- flush();
+ if (state.writeBytes(state.writableWindow()) > 0) {
+ flush();
+ }
}
}
@@ -508,7 +509,7 @@ private int writeBytes(int bytes) throws Http2Exception {
// Window size is large enough to send entire data frame
bytesWritten += pendingWrite.size();
pendingWrite.write();
- } else if (maxBytes == 0) {
+ } else if (maxBytes <= 0) {
// No data from the current frame can be written - we're done.
// We purposely check this after first testing the size of the
// pending frame to properly handle zero-length frame.
| diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowControllerTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowControllerTest.java
index 3fe2e0b2092..b026243bbb3 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowControllerTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowControllerTest.java
@@ -298,6 +298,55 @@ public void initialWindowUpdateShouldSendFrame() throws Http2Exception {
}
}
+ @Test
+ public void negativeWindowShouldNotThrowException() throws Http2Exception {
+ final int initWindow = 20;
+ final int secondWindowSize = 10;
+ controller.initialOutboundWindowSize(initWindow);
+ Http2Stream streamA = connection.stream(STREAM_A);
+
+ final ByteBuf data = dummyData(initWindow, 0);
+ final ByteBuf data2 = dummyData(5, 0);
+ try {
+ // Deplete the stream A window to 0
+ send(STREAM_A, data.slice(0, initWindow), 0);
+ verifyWrite(STREAM_A, data.slice(0, initWindow), 0);
+
+ // Make the window size for stream A negative
+ controller.initialOutboundWindowSize(initWindow - secondWindowSize);
+ assertEquals(-secondWindowSize, streamA.outboundFlow().window());
+
+ // Queue up a write. It should not be written now because the window is negative
+ resetFrameWriter();
+ send(STREAM_A, data2.slice(), 0);
+ verifyNoWrite(STREAM_A);
+
+ // Open the window size back up a bit (no send should happen)
+ controller.updateOutboundWindowSize(STREAM_A, 5);
+ assertEquals(-5, streamA.outboundFlow().window());
+ verifyNoWrite(STREAM_A);
+
+ // Open the window size back up a bit (no send should happen)
+ controller.updateOutboundWindowSize(STREAM_A, 5);
+ assertEquals(0, streamA.outboundFlow().window());
+ verifyNoWrite(STREAM_A);
+
+ // Open the window size back up and allow the write to happen
+ controller.updateOutboundWindowSize(STREAM_A, 5);
+ assertEquals(0, streamA.outboundFlow().window());
+
+ // Verify that the entire frame was sent.
+ ArgumentCaptor<ByteBuf> argument = ArgumentCaptor.forClass(ByteBuf.class);
+ captureWrite(STREAM_A, argument, 0, false);
+ final ByteBuf writtenBuf = argument.getValue();
+ assertEquals(data2, writtenBuf);
+ assertEquals(1, data2.refCnt());
+ } finally {
+ manualSafeRelease(data);
+ manualSafeRelease(data2);
+ }
+ }
+
@Test
public void initialWindowUpdateShouldSendEmptyFrame() throws Http2Exception {
controller.initialOutboundWindowSize(0);
| train | train | 2014-10-31T18:59:25 | 2014-10-31T15:46:11Z | Scottmitch | val |
netty/netty/3085_3088 | netty/netty | netty/netty/3085 | netty/netty/3088 | [
"timestamp(timedelta=72577.0, similarity=0.958106215902029)"
] | d5042baf58293db8429ef0c42137edc66c266020 | b9cfb62ea686911595f3c652c2a566216c233506 | [
"Good catch\n",
"Addressed by #3088\n",
"Cherry-picked by @nmittler \n"
] | [
"Wording is a bit awkward. Consider clarifying..\n\nReturns the most {@link ChannelFuture} -> Returns the {@link ChannelFuture}\n",
"What about continuation frames? Do we even send continuation frames or are we queuing up header writes until the endofstream happens somewhere?\n",
"Is this needed if you are on... | 2014-10-31T21:17:34Z | [
"defect"
] | HTTP/2 HEADERS frames need to stay in place with respect to DATA frames. | Since DATA frames are flow controlled and HEADERS frames are not, a HEADERS frame sent after DATA (e.g. trailers) can reach the wire out of order when flow control delays the pending DATA. We need to preserve the original write ordering.
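A toy model of the approach taken in the fix, assuming hypothetical names (`OrderedWriter`, `writeData`, `writeTrailers`; this is not netty's API): remember the future of the most recent flow-controlled DATA write and chain the trailing HEADERS write behind it, so the HEADERS cannot overtake delayed DATA.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class OrderedWriter {
    final List<String> wire = new ArrayList<>(); // frames in the order they hit the wire
    CompletableFuture<Void> lastDataWrite;       // tracked per stream in the real code

    /** DATA may be held back until the flow-control window opens. */
    CompletableFuture<Void> writeData(CompletableFuture<Void> windowAvailable) {
        lastDataWrite = windowAvailable.thenRun(() -> wire.add("DATA"));
        return lastDataWrite;
    }

    /** Trailing HEADERS must wait for the last DATA write, if any. */
    CompletableFuture<Void> writeTrailers() {
        if (lastDataWrite == null) {
            wire.add("HEADERS(trailers)"); // no DATA pending: send immediately
            return CompletableFuture.completedFuture(null);
        }
        // Chain behind the most recent DATA write to preserve ordering.
        return lastDataWrite.thenRun(() -> wire.add("HEADERS(trailers)"));
    }

    public static void main(String[] args) {
        OrderedWriter w = new OrderedWriter();
        CompletableFuture<Void> window = new CompletableFuture<>();
        w.writeData(window);        // DATA queued, window not yet open
        w.writeTrailers();          // trailers queued behind the DATA future
        window.complete(null);      // window opens: DATA first, then trailers
        System.out.println(w.wire); // [DATA, HEADERS(trailers)]
    }
}
```

This mirrors `lastWriteForStream(streamId)` in the patch: the encoder only writes the trailing HEADERS from a listener attached to the last DATA write's `ChannelFuture`, and fails the HEADERS promise if that DATA write failed.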
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowController.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2OutboundFlowController.java"
] | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowController.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2OutboundFlowController.java"
] | [
"codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoderTest.java",
"codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowControllerTest.java",
"codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionRoundtripTest.java",
"codec-http2/src/te... | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java
index 70f2d6592e7..5e6be2a8a11 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java
@@ -22,6 +22,7 @@
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelPromise;
+import io.netty.channel.ChannelPromiseAggregator;
import java.util.ArrayDeque;
@@ -196,6 +197,11 @@ public void operationComplete(ChannelFuture future) throws Exception {
return future;
}
+ @Override
+ public ChannelFuture lastWriteForStream(int streamId) {
+ return outboundFlow.lastWriteForStream(streamId);
+ }
+
@Override
public ChannelFuture writeHeaders(ChannelHandlerContext ctx, int streamId, Http2Headers headers, int padding,
boolean endStream, ChannelPromise promise) {
@@ -203,10 +209,12 @@ public ChannelFuture writeHeaders(ChannelHandlerContext ctx, int streamId, Http2
}
@Override
- public ChannelFuture writeHeaders(ChannelHandlerContext ctx, int streamId, Http2Headers headers,
- int streamDependency, short weight, boolean exclusive, int padding, boolean endOfStream,
- ChannelPromise promise) {
+ public ChannelFuture writeHeaders(final ChannelHandlerContext ctx, final int streamId,
+ final Http2Headers headers, final int streamDependency, final short weight,
+ final boolean exclusive, final int padding, final boolean endOfStream,
+ final ChannelPromise promise) {
Http2Stream stream = connection.stream(streamId);
+ ChannelFuture lastDataWrite = lastWriteForStream(streamId);
try {
if (connection.isGoAway()) {
throw protocolError("Sending headers after connection going away.");
@@ -238,6 +246,12 @@ public ChannelFuture writeHeaders(ChannelHandlerContext ctx, int streamId, Http2
"Stream %d in unexpected state: %s", stream.id(), stream.state()));
}
}
+
+ if (lastDataWrite != null && !endOfStream) {
+ throw new IllegalStateException(
+ "Sending non-trailing headers after data has been sent for stream: "
+ + streamId);
+ }
} catch (Throwable e) {
if (e instanceof Http2NoMoreStreamIdsException) {
lifecycleManager.onException(ctx, e);
@@ -245,8 +259,49 @@ public ChannelFuture writeHeaders(ChannelHandlerContext ctx, int streamId, Http2
return promise.setFailure(e);
}
+ if (lastDataWrite == null) {
+ // No previous DATA frames to keep in sync with, just send it now.
+ return writeHeaders(ctx, stream, headers, streamDependency, weight, exclusive, padding,
+ endOfStream, promise);
+ }
+
+ // There were previous DATA frames sent. We need to send the HEADERS only after the most
+ // recent DATA frame to keep them in sync...
+
+ // Wrap the original promise in an aggregate which will complete the original promise
+ // once the headers are written.
+ final ChannelPromiseAggregator aggregatePromise = new ChannelPromiseAggregator(promise);
+ final ChannelPromise innerPromise = ctx.newPromise();
+ aggregatePromise.add(innerPromise);
+
+ // Only write the HEADERS frame after the previous DATA frame has been written.
+ final Http2Stream theStream = stream;
+ lastDataWrite.addListener(new ChannelFutureListener() {
+ @Override
+ public void operationComplete(ChannelFuture future) throws Exception {
+ if (!future.isSuccess()) {
+ // The DATA write failed, also fail this write.
+ innerPromise.setFailure(future.cause());
+ return;
+ }
+
+ // Perform the write.
+ writeHeaders(ctx, theStream, headers, streamDependency, weight, exclusive, padding,
+ endOfStream, innerPromise);
+ }
+ });
+
+ return promise;
+ }
+
+ /**
+ * Writes the given {@link Http2Headers} to the remote endpoint and updates stream state if appropriate.
+ */
+ private ChannelFuture writeHeaders(ChannelHandlerContext ctx, Http2Stream stream,
+ Http2Headers headers, int streamDependency, short weight, boolean exclusive,
+ int padding, boolean endOfStream, ChannelPromise promise) {
ChannelFuture future =
- frameWriter.writeHeaders(ctx, streamId, headers, streamDependency, weight,
+ frameWriter.writeHeaders(ctx, stream.id(), headers, streamDependency, weight,
exclusive, padding, endOfStream, promise);
ctx.flush();
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowController.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowController.java
index e3d55cc7ce4..8afb28a352c 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowController.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowController.java
@@ -202,8 +202,14 @@ public ChannelFuture writeData(ChannelHandlerContext ctx, int streamId, ByteBuf
return promise;
}
+ @Override
+ public ChannelFuture lastWriteForStream(int streamId) {
+ OutboundFlowState state = state(streamId);
+ return state != null ? state.lastNewFrame() : null;
+ }
+
private static OutboundFlowState state(Http2Stream stream) {
- return (OutboundFlowState) stream.outboundFlow();
+ return stream != null ? (OutboundFlowState) stream.outboundFlow() : null;
}
private OutboundFlowState connectionState() {
@@ -383,6 +389,7 @@ final class OutboundFlowState implements FlowState {
private int pendingBytes;
private int priorityBytes;
private int allocatedPriorityBytes;
+ private ChannelFuture lastNewFrame;
private OutboundFlowState(Http2Stream stream) {
this.stream = stream;
@@ -411,6 +418,13 @@ private int incrementStreamWindow(int delta) throws Http2Exception {
return window;
}
+ /**
+ * Returns the future for the last new frame created for this stream.
+ */
+ ChannelFuture lastNewFrame() {
+ return lastNewFrame;
+ }
+
/**
* Returns the maximum writable window (minimum of the stream and connection windows).
*/
@@ -460,6 +474,8 @@ private int unallocatedPriorityBytes() {
* Creates a new frame with the given values but does not add it to the pending queue.
*/
private Frame newFrame(ChannelPromise promise, ByteBuf data, int padding, boolean endStream) {
+ // Store this as the future for the most recent write attempt.
+ lastNewFrame = promise;
return new Frame(new ChannelPromiseAggregator(promise), data, padding, endStream);
}
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2OutboundFlowController.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2OutboundFlowController.java
index d9b17b8f8c9..a7e4d21c37a 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2OutboundFlowController.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2OutboundFlowController.java
@@ -39,6 +39,12 @@ public interface Http2OutboundFlowController extends Http2DataWriter {
ChannelFuture writeData(ChannelHandlerContext ctx, int streamId, ByteBuf data, int padding,
boolean endStream, ChannelPromise promise);
+ /**
+ * Returns the {@link ChannelFuture} for the most recent write for the given
+ * stream. If no previous write for the stream has occurred, returns {@code null}.
+ */
+ ChannelFuture lastWriteForStream(int streamId);
+
/**
* Sets the initial size of the connection's outbound flow control window. The outbound flow
* control windows for all streams are updated by the delta in the initial window size. This is
| diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoderTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoderTest.java
index 1ca69bcb84a..8410ac2939b 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoderTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoderTest.java
@@ -44,6 +44,7 @@
import io.netty.channel.DefaultChannelPromise;
import java.util.Collections;
+import java.util.concurrent.atomic.AtomicReference;
import org.junit.Before;
import org.junit.Test;
@@ -205,24 +206,55 @@ public void headersWriteAfterGoAwayShouldFail() throws Exception {
@Test
public void headersWriteForUnknownStreamShouldCreateStream() throws Exception {
- when(local.createStream(eq(5), eq(false))).thenReturn(stream);
- encoder.writeHeaders(ctx, 5, EmptyHttp2Headers.INSTANCE, 0, false, promise);
- verify(local).createStream(eq(5), eq(false));
- verify(writer).writeHeaders(eq(ctx), eq(5), eq(EmptyHttp2Headers.INSTANCE), eq(0),
+ int streamId = 5;
+ when(stream.id()).thenReturn(streamId);
+ mockFutureAddListener(true);
+ when(local.createStream(eq(streamId), eq(false))).thenReturn(stream);
+ encoder.writeHeaders(ctx, streamId, EmptyHttp2Headers.INSTANCE, 0, false, promise);
+ verify(local).createStream(eq(streamId), eq(false));
+ verify(writer).writeHeaders(eq(ctx), eq(streamId), eq(EmptyHttp2Headers.INSTANCE), eq(0),
eq(DEFAULT_PRIORITY_WEIGHT), eq(false), eq(0), eq(false), eq(promise));
}
@Test
public void headersWriteShouldCreateHalfClosedStream() throws Exception {
- when(local.createStream(eq(5), eq(true))).thenReturn(stream);
- encoder.writeHeaders(ctx, 5, EmptyHttp2Headers.INSTANCE, 0, true, promise);
- verify(local).createStream(eq(5), eq(true));
- verify(writer).writeHeaders(eq(ctx), eq(5), eq(EmptyHttp2Headers.INSTANCE), eq(0),
+ int streamId = 5;
+ when(stream.id()).thenReturn(5);
+ mockFutureAddListener(true);
+ when(local.createStream(eq(streamId), eq(true))).thenReturn(stream);
+ encoder.writeHeaders(ctx, streamId, EmptyHttp2Headers.INSTANCE, 0, true, promise);
+ verify(local).createStream(eq(streamId), eq(true));
+ verify(writer).writeHeaders(eq(ctx), eq(streamId), eq(EmptyHttp2Headers.INSTANCE), eq(0),
+ eq(DEFAULT_PRIORITY_WEIGHT), eq(false), eq(0), eq(true), eq(promise));
+ }
+
+ @Test
+ public void headersWriteAfterDataShouldWait() throws Exception {
+ final AtomicReference<ChannelFutureListener> listener = new AtomicReference<ChannelFutureListener>();
+ doAnswer(new Answer<Void>() {
+ @Override
+ public Void answer(InvocationOnMock invocation) throws Throwable {
+ listener.set((ChannelFutureListener) invocation.getArguments()[0]);
+ return null;
+ }
+ }).when(future).addListener(any(ChannelFutureListener.class));
+
+ // Indicate that there was a previous data write operation that the headers must wait for.
+ when(outboundFlow.lastWriteForStream(anyInt())).thenReturn(future);
+ encoder.writeHeaders(ctx, STREAM_ID, EmptyHttp2Headers.INSTANCE, 0, true, promise);
+ verify(writer, never()).writeHeaders(eq(ctx), eq(STREAM_ID), eq(EmptyHttp2Headers.INSTANCE), eq(0),
+ eq(DEFAULT_PRIORITY_WEIGHT), eq(false), eq(0), eq(true), eq(promise));
+
+ // Now complete the previous data write operation and verify that the headers were written.
+ when(future.isSuccess()).thenReturn(true);
+ listener.get().operationComplete(future);
+ verify(writer).writeHeaders(eq(ctx), eq(STREAM_ID), eq(EmptyHttp2Headers.INSTANCE), eq(0),
eq(DEFAULT_PRIORITY_WEIGHT), eq(false), eq(0), eq(true), eq(promise));
}
@Test
public void headersWriteShouldOpenStreamForPush() throws Exception {
+ mockFutureAddListener(true);
when(stream.state()).thenReturn(RESERVED_LOCAL);
encoder.writeHeaders(ctx, STREAM_ID, EmptyHttp2Headers.INSTANCE, 0, false, promise);
verify(stream).openForPush();
@@ -233,6 +265,7 @@ public void headersWriteShouldOpenStreamForPush() throws Exception {
@Test
public void headersWriteShouldClosePushStream() throws Exception {
+ mockFutureAddListener(true);
when(stream.state()).thenReturn(RESERVED_LOCAL).thenReturn(HALF_CLOSED_LOCAL);
encoder.writeHeaders(ctx, STREAM_ID, EmptyHttp2Headers.INSTANCE, 0, true, promise);
verify(stream).openForPush();
@@ -315,6 +348,21 @@ public void settingsWriteShouldNotUpdateSettings() throws Exception {
verify(writer).writeSettings(eq(ctx), eq(settings), eq(promise));
}
+ private void mockFutureAddListener(boolean success) {
+ when(future.isSuccess()).thenReturn(success);
+ if (!success) {
+ when(future.cause()).thenReturn(new Exception("Fake Exception"));
+ }
+ doAnswer(new Answer<Void>() {
+ @Override
+ public Void answer(InvocationOnMock invocation) throws Throwable {
+ ChannelFutureListener listener = (ChannelFutureListener) invocation.getArguments()[0];
+ listener.operationComplete(future);
+ return null;
+ }
+ }).when(future).addListener(any(ChannelFutureListener.class));
+ }
+
private static ByteBuf dummyData() {
// The buffer is purposely 8 bytes so it will even work for a ping frame.
return wrappedBuffer("abcdefgh".getBytes(UTF_8));
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowControllerTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowControllerTest.java
index 3fe2e0b2092..20f5314ba52 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowControllerTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2OutboundFlowControllerTest.java
@@ -29,6 +29,7 @@
import static org.mockito.Mockito.when;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
+import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelPromise;
import io.netty.handler.codec.http2.DefaultHttp2OutboundFlowController.OutboundFlowState;
@@ -139,6 +140,24 @@ public void frameShouldBeSentImmediately() throws Http2Exception {
}
}
+ @Test
+ public void lastWriteFutureShouldBeSaved() throws Http2Exception {
+ ChannelPromise promise2 = Mockito.mock(ChannelPromise.class);
+ final ByteBuf data = dummyData(5, 5);
+ try {
+ // Write one frame.
+ ChannelFuture future1 = controller.writeData(ctx, STREAM_A, data, 0, false, promise);
+ assertEquals(future1, controller.lastWriteForStream(STREAM_A));
+
+ // Now write another and verify that the last write is updated.
+ ChannelFuture future2 = controller.writeData(ctx, STREAM_A, data, 0, false, promise2);
+ assertTrue(future1 != future2);
+ assertEquals(future2, controller.lastWriteForStream(STREAM_A));
+ } finally {
+ manualSafeRelease(data);
+ }
+ }
+
@Test
public void frameLargerThanMaxFrameSizeShouldBeSplit() throws Http2Exception {
when(frameWriterSizePolicy.maxFrameSize()).thenReturn(3);
@@ -1174,7 +1193,8 @@ private static int calculateStreamSizeSum(IntObjectMap<Integer> streamSizes, Lis
}
private void send(int streamId, ByteBuf data, int padding) throws Http2Exception {
- controller.writeData(ctx, streamId, data, padding, false, promise);
+ ChannelFuture future = controller.writeData(ctx, streamId, data, padding, false, promise);
+ assertEquals(future, controller.lastWriteForStream(streamId));
}
private void verifyWrite(int streamId, ByteBuf data, int padding) {
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionRoundtripTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionRoundtripTest.java
index 6491ced9015..ba9f78b79fc 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionRoundtripTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionRoundtripTest.java
@@ -85,8 +85,9 @@ public class Http2ConnectionRoundtripTest {
private Channel serverChannel;
private Channel clientChannel;
private Http2TestUtil.FrameCountDown serverFrameCountDown;
- private volatile CountDownLatch requestLatch;
- private volatile CountDownLatch dataLatch;
+ private CountDownLatch requestLatch;
+ private CountDownLatch dataLatch;
+ private CountDownLatch trailersLatch;
@Before
public void setup() throws Exception {
@@ -106,7 +107,7 @@ public void teardown() throws Exception {
@Test
public void http2ExceptionInPipelineShouldCloseConnection() throws Exception {
- bootstrapEnv(1, 1);
+ bootstrapEnv(1, 1, 1);
// Create a latch to track when the close occurs.
final CountDownLatch closeLatch = new CountDownLatch(1);
@@ -150,7 +151,7 @@ public void listenerExceptionShouldCloseConnection() throws Exception {
any(ChannelHandlerContext.class), eq(3), eq(headers), eq(0), eq((short) 16),
eq(false), eq(0), eq(false));
- bootstrapEnv(1, 1);
+ bootstrapEnv(1, 1, 1);
// Create a latch to track when the close occurs.
final CountDownLatch closeLatch = new CountDownLatch(1);
@@ -180,7 +181,7 @@ public void run() {
@Test
public void nonHttp2ExceptionInPipelineShouldNotCloseConnection() throws Exception {
- bootstrapEnv(1, 1);
+ bootstrapEnv(1, 1, 1);
// Create a latch to track when the close occurs.
final CountDownLatch closeLatch = new CountDownLatch(1);
@@ -219,7 +220,7 @@ public void handlerAdded(ChannelHandlerContext ctx) throws Exception {
@Test
public void noMoreStreamIdsShouldSendGoAway() throws Exception {
- bootstrapEnv(1, 3);
+ bootstrapEnv(1, 3, 1);
// Create a single stream by sending a HEADERS frame to the server.
final Http2Headers headers = dummyHeaders();
@@ -262,7 +263,7 @@ public Void answer(InvocationOnMock in) throws Throwable {
any(ByteBuf.class), eq(0), Mockito.anyBoolean());
try {
// Initialize the data latch based on the number of bytes expected.
- bootstrapEnv(length, 2);
+ bootstrapEnv(length, 2, 1);
// Create the stream and send all of the data at once.
runInChannel(clientChannel, new Http2Runnable() {
@@ -270,20 +271,25 @@ public Void answer(InvocationOnMock in) throws Throwable {
public void run() {
http2Client.encoder().writeHeaders(ctx(), 3, headers, 0, (short) 16, false, 0,
false, newPromise());
- http2Client.encoder().writeData(ctx(), 3, data.retain(), 0, true, newPromise());
+ http2Client.encoder().writeData(ctx(), 3, data.retain(), 0, false, newPromise());
+
+ // Write trailers.
+ http2Client.encoder().writeHeaders(ctx(), 3, headers, 0, (short) 16, false, 0,
+ true, newPromise());
}
});
- // Wait for all DATA frames to be received at the server.
- assertTrue(dataLatch.await(5, TimeUnit.SECONDS));
+ // Wait for the trailers to be received.
+ assertTrue(trailersLatch.await(5, TimeUnit.SECONDS));
- // Verify that headers were received and only one DATA frame was received with endStream set.
+ // Verify that headers and trailers were received.
verify(serverListener).onHeadersRead(any(ChannelHandlerContext.class), eq(3), eq(headers), eq(0),
eq((short) 16), eq(false), eq(0), eq(false));
- verify(serverListener).onDataRead(any(ChannelHandlerContext.class), eq(3), any(ByteBuf.class), eq(0),
- eq(true));
+ verify(serverListener).onHeadersRead(any(ChannelHandlerContext.class), eq(3), eq(headers), eq(0),
+ eq((short) 16), eq(false), eq(0), eq(true));
// Verify we received all the bytes.
+ assertEquals(0, dataLatch.getCount());
out.flush();
byte[] received = out.toByteArray();
assertArrayEquals(bytes, received);
@@ -317,9 +323,9 @@ public Void answer(InvocationOnMock in) throws Throwable {
return null;
}
}).when(serverListener).onDataRead(any(ChannelHandlerContext.class), anyInt(), eq(data),
- eq(0), eq(true));
+ eq(0), eq(false));
try {
- bootstrapEnv(numStreams * text.length(), numStreams * 3);
+ bootstrapEnv(numStreams * text.length(), numStreams * 4, numStreams);
runInChannel(clientChannel, new Http2Runnable() {
@Override
public void run() {
@@ -329,18 +335,23 @@ public void run() {
http2Client.encoder().writePing(ctx(), false, pingData.slice().retain(),
newPromise());
http2Client.encoder().writeData(ctx(), nextStream, data.slice().retain(),
- 0, true, newPromise());
+ 0, false, newPromise());
+ // Write trailers.
+ http2Client.encoder().writeHeaders(ctx(), nextStream, headers, 0,
+ (short) 16, false, 0, true, newPromise());
}
}
});
// Wait for all frames to be received.
- assertTrue(requestLatch.await(30, SECONDS));
+ assertTrue(trailersLatch.await(30, SECONDS));
verify(serverListener, times(numStreams)).onHeadersRead(any(ChannelHandlerContext.class), anyInt(),
eq(headers), eq(0), eq((short) 16), eq(false), eq(0), eq(false));
+ verify(serverListener, times(numStreams)).onHeadersRead(any(ChannelHandlerContext.class), anyInt(),
+ eq(headers), eq(0), eq((short) 16), eq(false), eq(0), eq(true));
verify(serverListener, times(numStreams)).onPingRead(any(ChannelHandlerContext.class),
any(ByteBuf.class));
verify(serverListener, times(numStreams)).onDataRead(any(ChannelHandlerContext.class),
- anyInt(), any(ByteBuf.class), eq(0), eq(true));
+ anyInt(), any(ByteBuf.class), eq(0), eq(false));
assertEquals(numStreams, receivedPingBuffers.size());
assertEquals(numStreams, receivedDataBuffers.size());
for (String receivedData : receivedDataBuffers) {
@@ -355,9 +366,10 @@ public void run() {
}
}
- private void bootstrapEnv(int dataCountDown, int requestCountDown) throws Exception {
+ private void bootstrapEnv(int dataCountDown, int requestCountDown, int trailersCountDown) throws Exception {
requestLatch = new CountDownLatch(requestCountDown);
dataLatch = new CountDownLatch(dataCountDown);
+ trailersLatch = new CountDownLatch(trailersCountDown);
sb = new ServerBootstrap();
cb = new Bootstrap();
@@ -367,7 +379,9 @@ private void bootstrapEnv(int dataCountDown, int requestCountDown) throws Except
@Override
protected void initChannel(Channel ch) throws Exception {
ChannelPipeline p = ch.pipeline();
- serverFrameCountDown = new Http2TestUtil.FrameCountDown(serverListener, requestLatch, dataLatch);
+ serverFrameCountDown =
+ new Http2TestUtil.FrameCountDown(serverListener, requestLatch, dataLatch,
+ trailersLatch);
p.addLast(new Http2ConnectionHandler(true, serverFrameCountDown));
p.addLast(Http2CodecUtil.ignoreSettingsHandler());
}
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2TestUtil.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2TestUtil.java
index c4f33e00a3b..17fc8adf547 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2TestUtil.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2TestUtil.java
@@ -262,15 +262,22 @@ static class FrameCountDown implements Http2FrameListener {
private final Http2FrameListener listener;
private final CountDownLatch messageLatch;
private final CountDownLatch dataLatch;
+ private final CountDownLatch trailersLatch;
public FrameCountDown(Http2FrameListener listener, CountDownLatch messageLatch) {
this(listener, messageLatch, null);
}
public FrameCountDown(Http2FrameListener listener, CountDownLatch messageLatch, CountDownLatch dataLatch) {
+ this(listener, messageLatch, dataLatch, null);
+ }
+
+ public FrameCountDown(Http2FrameListener listener, CountDownLatch messageLatch,
+ CountDownLatch dataLatch, CountDownLatch trailersLatch) {
this.listener = listener;
this.messageLatch = messageLatch;
this.dataLatch = dataLatch;
+ this.trailersLatch = trailersLatch;
}
@Override
@@ -291,6 +298,9 @@ public void onHeadersRead(ChannelHandlerContext ctx, int streamId, Http2Headers
boolean endStream) throws Http2Exception {
listener.onHeadersRead(ctx, streamId, headers, padding, endStream);
messageLatch.countDown();
+ if (trailersLatch != null && endStream) {
+ trailersLatch.countDown();
+ }
}
@Override
@@ -298,6 +308,9 @@ public void onHeadersRead(ChannelHandlerContext ctx, int streamId, Http2Headers
short weight, boolean exclusive, int padding, boolean endStream) throws Http2Exception {
listener.onHeadersRead(ctx, streamId, headers, streamDependency, weight, exclusive, padding, endStream);
messageLatch.countDown();
+ if (trailersLatch != null && endStream) {
+ trailersLatch.countDown();
+ }
}
@Override
| train | train | 2014-10-31T21:37:05 | 2014-10-31T15:59:45Z | nmittler | val |
netty/netty/2910_3090 | netty/netty | netty/netty/2910 | netty/netty/3090 | [
"timestamp(timedelta=45.0, similarity=0.8662422963744304)"
] | d5042baf58293db8429ef0c42137edc66c266020 | 9a0dcf8ef6cc8fd6e2a7bc1690b10c728d817243 | [
"`DefaultHttpHeaders` already implements the ascii checks (if the `validate` mode is turned on...which is exposed in the HTTP2 translation layer). I think all that needs to be done here is catch the exception that is thrown by this validation (`IllegalArgumentException`) when creating the HTTP headers object and r... | [
"Maybe you could use this utility method to help out here: https://github.com/netty/netty/blob/master/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java#L134\n",
"Good idea...done.\n"
] | 2014-11-01T20:43:20Z | [
"defect"
] | HTTP/2 binary headers require additional validation for HTTP translation layer | As a result of issue #2786, the HTTP translation layer should be doing additional validation.
From @nmittler
> Just a note on the impact of this on the translation to HTTP/1.1 ... Section 10.3 of the spec touches on this: http://tools.ietf.org/html/draft-ietf-httpbis-http2-14#section-10.3
> So it sounds like requests with non-ascii headers should probably be rejected by the translation layer (i.e. RST_STREAM). Sound reasonable?
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/HttpUtil.java"
] | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/HttpUtil.java"
] | [
"codec-http2/src/test/java/io/netty/handler/codec/http2/InboundHttp2ToHttpAdapterTest.java"
] | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/HttpUtil.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/HttpUtil.java
index 7572fe46ebd..0dddd8b881f 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/HttpUtil.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/HttpUtil.java
@@ -14,6 +14,7 @@
*/
package io.netty.handler.codec.http2;
+import static io.netty.util.internal.ObjectUtil.checkNotNull;
import io.netty.handler.codec.AsciiString;
import io.netty.handler.codec.BinaryHeaders;
import io.netty.handler.codec.TextHeaders.EntryVisitor;
@@ -210,8 +211,12 @@ public static FullHttpRequest toHttpRequest(int streamId, Http2Headers http2Head
throws Http2Exception {
// HTTP/2 does not define a way to carry the version identifier that is
// included in the HTTP/1.1 request line.
- FullHttpRequest msg = new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.valueOf(http2Headers.method()
- .toString()), http2Headers.path().toString(), validateHttpHeaders);
+ final AsciiString method = checkNotNull(http2Headers.method(),
+ "method header cannot be null in conversion to HTTP/1.x");
+ final AsciiString path = checkNotNull(http2Headers.path(),
+ "path header cannot be null in conversion to HTTP/1.x");
+ FullHttpRequest msg = new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.valueOf(method
+ .toString()), path.toString(), validateHttpHeaders);
addHttp2ToHttpHeaders(streamId, http2Headers, msg, false);
return msg;
}
@@ -235,7 +240,8 @@ public static void addHttp2ToHttpHeaders(int streamId, Http2Headers sourceHeader
} catch (Http2Exception ex) {
throw ex;
} catch (Exception ex) {
- PlatformDependent.throwException(ex);
+ throw new Http2StreamException(streamId, Http2Error.PROTOCOL_ERROR,
+ "HTTP/2 to HTTP/1.x headers conversion error", ex);
}
headers.remove(HttpHeaderNames.TRANSFER_ENCODING);
| diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/InboundHttp2ToHttpAdapterTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/InboundHttp2ToHttpAdapterTest.java
index faa037269d1..7c0650e4e37 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/InboundHttp2ToHttpAdapterTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/InboundHttp2ToHttpAdapterTest.java
@@ -14,12 +14,22 @@
*/
package io.netty.handler.codec.http2;
+import static io.netty.handler.codec.http2.Http2TestUtil.as;
+import static io.netty.handler.codec.http2.Http2TestUtil.runInChannel;
+import static java.util.concurrent.TimeUnit.MILLISECONDS;
+import static java.util.concurrent.TimeUnit.SECONDS;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+import static org.mockito.Mockito.reset;
+import static org.mockito.Mockito.times;
+import static org.mockito.Mockito.verify;
import io.netty.bootstrap.Bootstrap;
import io.netty.bootstrap.ServerBootstrap;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.Channel;
import io.netty.channel.ChannelFuture;
+import io.netty.channel.ChannelHandlerAdapter;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
@@ -28,6 +38,7 @@
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;
+import io.netty.handler.codec.AsciiString;
import io.netty.handler.codec.http.DefaultFullHttpRequest;
import io.netty.handler.codec.http.DefaultFullHttpResponse;
import io.netty.handler.codec.http.FullHttpMessage;
@@ -39,8 +50,16 @@
import io.netty.handler.codec.http.HttpObject;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.handler.codec.http.HttpVersion;
+import io.netty.handler.codec.http2.Http2TestUtil.FrameAdapter;
+import io.netty.handler.codec.http2.Http2TestUtil.Http2Runnable;
+import io.netty.util.CharsetUtil;
import io.netty.util.NetUtil;
import io.netty.util.concurrent.Future;
+
+import java.net.InetSocketAddress;
+import java.util.List;
+import java.util.concurrent.CountDownLatch;
+
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
@@ -48,15 +67,6 @@
import org.mockito.Mock;
import org.mockito.MockitoAnnotations;
-import java.net.InetSocketAddress;
-import java.util.List;
-import java.util.concurrent.CountDownLatch;
-
-import static io.netty.handler.codec.http2.Http2TestUtil.*;
-import static java.util.concurrent.TimeUnit.*;
-import static org.junit.Assert.*;
-import static org.mockito.Mockito.*;
-
/**
* Testing the {@link InboundHttp2ToHttpPriorityAdapter} and base class {@link InboundHttp2ToHttpAdapter} for HTTP/2
* frames into {@link HttpObject}s
@@ -82,6 +92,7 @@ public class InboundHttp2ToHttpAdapterTest {
private int maxContentLength;
private HttpResponseDelegator serverDelegator;
private HttpResponseDelegator clientDelegator;
+ private Http2Exception serverException;
@Before
public void setup() throws Exception {
@@ -112,6 +123,20 @@ protected void initChannel(Channel ch) throws Exception {
serverDelegator = new HttpResponseDelegator(serverListener, serverLatch);
p.addLast(serverDelegator);
serverConnectedChannel = ch;
+ p.addLast(new ChannelHandlerAdapter() {
+ @Override
+ public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
+ if (cause instanceof Http2Exception) {
+ serverException = (Http2Exception) cause;
+ serverLatch.countDown();
+ } else if (cause != null && cause.getCause() instanceof Http2Exception) {
+ serverException = (Http2Exception) cause.getCause();
+ serverLatch.countDown();
+ } else {
+ super.exceptionCaught(ctx, cause);
+ }
+ }
+ });
}
});
@@ -167,10 +192,8 @@ public void clientRequestSingleHeaderNoDataFrames() throws Exception {
httpHeaders.set(HttpUtil.ExtensionHeaderNames.AUTHORITY.text(), "example.org");
httpHeaders.setInt(HttpUtil.ExtensionHeaderNames.STREAM_ID.text(), 3);
httpHeaders.setInt(HttpHeaderNames.CONTENT_LENGTH, 0);
- final Http2Headers http2Headers =
- new DefaultHttp2Headers().method(as("GET")).scheme(as("https"))
- .authority(as("example.org"))
- .path(as("/some/path/resource2"));
+ final Http2Headers http2Headers = new DefaultHttp2Headers().method(as("GET")).scheme(as("https"))
+ .authority(as("example.org")).path(as("/some/path/resource2"));
runInChannel(clientChannel, new Http2Runnable() {
@Override
public void run() {
@@ -184,10 +207,30 @@ public void run() {
capturedRequests = requestCaptor.getAllValues();
assertEquals(request, capturedRequests.get(0));
} finally {
- request.release();
+ request.release();
}
}
+ @Test
+ public void clientRequestSingleHeaderNonAsciiShouldThrow() throws Exception {
+ final Http2Headers http2Headers = new DefaultHttp2Headers()
+ .method(as("GET"))
+ .scheme(as("https"))
+ .authority(as("example.org"))
+ .path(as("/some/path/resource2"))
+ .add(new AsciiString("çã".getBytes(CharsetUtil.UTF_8)),
+ new AsciiString("Ãã".getBytes(CharsetUtil.UTF_8)));
+ runInChannel(clientChannel, new Http2Runnable() {
+ @Override
+ public void run() {
+ frameWriter.writeHeaders(ctxClient(), 3, http2Headers, 0, true, newPromiseClient());
+ ctxClient().flush();
+ }
+ });
+ awaitRequests();
+ assertTrue(serverException instanceof Http2StreamException);
+ }
+
@Test
public void clientRequestOneDataFrame() throws Exception {
final String text = "hello world";
@@ -198,8 +241,8 @@ public void clientRequestOneDataFrame() throws Exception {
HttpHeaders httpHeaders = request.headers();
httpHeaders.setInt(HttpUtil.ExtensionHeaderNames.STREAM_ID.text(), 3);
httpHeaders.setInt(HttpHeaderNames.CONTENT_LENGTH, text.length());
- final Http2Headers http2Headers = new DefaultHttp2Headers().method(as("GET"))
- .path(as("/some/path/resource2"));
+ final Http2Headers http2Headers = new DefaultHttp2Headers().method(as("GET")).path(
+ as("/some/path/resource2"));
runInChannel(clientChannel, new Http2Runnable() {
@Override
public void run() {
@@ -228,17 +271,17 @@ public void clientRequestMultipleDataFrames() throws Exception {
HttpHeaders httpHeaders = request.headers();
httpHeaders.setInt(HttpUtil.ExtensionHeaderNames.STREAM_ID.text(), 3);
httpHeaders.setInt(HttpHeaderNames.CONTENT_LENGTH, text.length());
- final Http2Headers http2Headers = new DefaultHttp2Headers().method(as("GET"))
- .path(as("/some/path/resource2"));
+ final Http2Headers http2Headers = new DefaultHttp2Headers().method(as("GET")).path(
+ as("/some/path/resource2"));
final int midPoint = text.length() / 2;
runInChannel(clientChannel, new Http2Runnable() {
@Override
public void run() {
frameWriter.writeHeaders(ctxClient(), 3, http2Headers, 0, false, newPromiseClient());
- frameWriter
- .writeData(ctxClient(), 3, content.slice(0, midPoint).retain(), 0, false, newPromiseClient());
- frameWriter.writeData(ctxClient(), 3, content.slice(midPoint, text.length() - midPoint).retain(), 0,
- true, newPromiseClient());
+ frameWriter.writeData(ctxClient(), 3, content.slice(0, midPoint).retain(), 0, false,
+ newPromiseClient());
+ frameWriter.writeData(ctxClient(), 3, content.slice(midPoint, text.length() - midPoint).retain(),
+ 0, true, newPromiseClient());
ctxClient().flush();
}
});
@@ -262,8 +305,8 @@ public void clientRequestMultipleEmptyDataFrames() throws Exception {
HttpHeaders httpHeaders = request.headers();
httpHeaders.setInt(HttpUtil.ExtensionHeaderNames.STREAM_ID.text(), 3);
httpHeaders.setInt(HttpHeaderNames.CONTENT_LENGTH, text.length());
- final Http2Headers http2Headers = new DefaultHttp2Headers().method(as("GET"))
- .path(as("/some/path/resource2"));
+ final Http2Headers http2Headers = new DefaultHttp2Headers().method(as("GET")).path(
+ as("/some/path/resource2"));
runInChannel(clientChannel, new Http2Runnable() {
@Override
public void run() {
@@ -300,13 +343,10 @@ public void clientRequestMultipleHeaders() throws Exception {
trailingHeaders.set("FoO", "goo");
trailingHeaders.set("foO2", "goo2");
trailingHeaders.add("fOo2", "goo3");
- final Http2Headers http2Headers =
- new DefaultHttp2Headers().method(as("GET")).path(
- as("/some/path/resource2"));
- final Http2Headers http2Headers2 =
- new DefaultHttp2Headers().set(as("foo"), as("goo"))
- .set(as("foo2"), as("goo2"))
- .add(as("foo2"), as("goo3"));
+ final Http2Headers http2Headers = new DefaultHttp2Headers().method(as("GET")).path(
+ as("/some/path/resource2"));
+ final Http2Headers http2Headers2 = new DefaultHttp2Headers().set(as("foo"), as("goo"))
+ .set(as("foo2"), as("goo2")).add(as("foo2"), as("goo3"));
runInChannel(clientChannel, new Http2Runnable() {
@Override
public void run() {
@@ -340,13 +380,10 @@ public void clientRequestTrailingHeaders() throws Exception {
trailingHeaders.set("Foo", "goo");
trailingHeaders.set("fOo2", "goo2");
trailingHeaders.add("foO2", "goo3");
- final Http2Headers http2Headers =
- new DefaultHttp2Headers().method(as("GET")).path(
- as("/some/path/resource2"));
- final Http2Headers http2Headers2 =
- new DefaultHttp2Headers().set(as("foo"), as("goo"))
- .set(as("foo2"), as("goo2"))
- .add(as("foo2"), as("goo3"));
+ final Http2Headers http2Headers = new DefaultHttp2Headers().method(as("GET")).path(
+ as("/some/path/resource2"));
+ final Http2Headers http2Headers2 = new DefaultHttp2Headers().set(as("foo"), as("goo"))
+ .set(as("foo2"), as("goo2")).add(as("foo2"), as("goo3"));
runInChannel(clientChannel, new Http2Runnable() {
@Override
public void run() {
@@ -386,10 +423,10 @@ public void clientRequestStreamDependencyInHttpMessageFlow() throws Exception {
httpHeaders2.setInt(HttpUtil.ExtensionHeaderNames.STREAM_DEPENDENCY_ID.text(), 3);
httpHeaders2.setInt(HttpUtil.ExtensionHeaderNames.STREAM_WEIGHT.text(), 123);
httpHeaders2.setInt(HttpHeaderNames.CONTENT_LENGTH, text2.length());
- final Http2Headers http2Headers = new DefaultHttp2Headers().method(as("PUT"))
- .path(as("/some/path/resource"));
- final Http2Headers http2Headers2 = new DefaultHttp2Headers().method(as("PUT"))
- .path(as("/some/path/resource2"));
+ final Http2Headers http2Headers = new DefaultHttp2Headers().method(as("PUT")).path(
+ as("/some/path/resource"));
+ final Http2Headers http2Headers2 = new DefaultHttp2Headers().method(as("PUT")).path(
+ as("/some/path/resource2"));
runInChannel(clientChannel, new Http2Runnable() {
@Override
public void run() {
@@ -433,10 +470,10 @@ public void clientRequestStreamDependencyOutsideHttpMessageFlow() throws Excepti
HttpHeaders httpHeaders2 = request2.headers();
httpHeaders2.setInt(HttpUtil.ExtensionHeaderNames.STREAM_ID.text(), 5);
httpHeaders2.setInt(HttpHeaderNames.CONTENT_LENGTH, text2.length());
- final Http2Headers http2Headers = new DefaultHttp2Headers().method(as("PUT"))
- .path(as("/some/path/resource"));
- final Http2Headers http2Headers2 = new DefaultHttp2Headers().method(as("PUT"))
- .path(as("/some/path/resource2"));
+ final Http2Headers http2Headers = new DefaultHttp2Headers().method(as("PUT")).path(
+ as("/some/path/resource"));
+ final Http2Headers http2Headers2 = new DefaultHttp2Headers().method(as("PUT")).path(
+ as("/some/path/resource2"));
HttpHeaders httpHeaders3 = request3.headers();
httpHeaders3.setInt(HttpUtil.ExtensionHeaderNames.STREAM_ID.text(), 5);
httpHeaders3.setInt(HttpUtil.ExtensionHeaderNames.STREAM_DEPENDENCY_ID.text(), 3);
@@ -478,8 +515,8 @@ public void serverRequestPushPromise() throws Exception {
content, true);
final FullHttpMessage response2 = new DefaultFullHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.CREATED,
content2, true);
- final FullHttpMessage request = new DefaultFullHttpRequest(HttpVersion.HTTP_1_1,
- HttpMethod.GET, "/push/test", true);
+ final FullHttpMessage request = new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, "/push/test",
+ true);
try {
HttpHeaders httpHeaders = response.headers();
httpHeaders.setInt(HttpUtil.ExtensionHeaderNames.STREAM_ID.text(), 3);
@@ -494,8 +531,7 @@ public void serverRequestPushPromise() throws Exception {
httpHeaders = request.headers();
httpHeaders.setInt(HttpUtil.ExtensionHeaderNames.STREAM_ID.text(), 3);
httpHeaders.setInt(HttpHeaderNames.CONTENT_LENGTH, 0);
- final Http2Headers http2Headers3 = new DefaultHttp2Headers().method(as("GET"))
- .path(as("/push/test"));
+ final Http2Headers http2Headers3 = new DefaultHttp2Headers().method(as("GET")).path(as("/push/test"));
runInChannel(clientChannel, new Http2Runnable() {
@Override
public void run() {
@@ -510,9 +546,8 @@ public void run() {
assertEquals(request, capturedRequests.get(0));
final Http2Headers http2Headers = new DefaultHttp2Headers().status(as("200"));
- final Http2Headers http2Headers2 =
- new DefaultHttp2Headers().status(as("201")).scheme(as("https"))
- .authority(as("example.org"));
+ final Http2Headers http2Headers2 = new DefaultHttp2Headers().status(as("201")).scheme(as("https"))
+ .authority(as("example.org"));
runInChannel(serverConnectedChannel, new Http2Runnable() {
@Override
public void run() {
@@ -544,11 +579,8 @@ public void serverResponseHeaderInformational() throws Exception {
httpHeaders.setInt(HttpUtil.ExtensionHeaderNames.STREAM_ID.text(), 3);
httpHeaders.set(HttpHeaderNames.EXPECT, HttpHeaderValues.CONTINUE);
httpHeaders.setInt(HttpHeaderNames.CONTENT_LENGTH, 0);
- final Http2Headers http2Headers =
- new DefaultHttp2Headers()
- .method(as("PUT"))
- .path(as("/info/test"))
- .set(as(HttpHeaderNames.EXPECT.toString()), as(HttpHeaderValues.CONTINUE.toString()));
+ final Http2Headers http2Headers = new DefaultHttp2Headers().method(as("PUT")).path(as("/info/test"))
+ .set(as(HttpHeaderNames.EXPECT.toString()), as(HttpHeaderValues.CONTINUE.toString()));
final FullHttpMessage response = new DefaultFullHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.CONTINUE);
final String text = "a big payload";
final ByteBuf payload = Unpooled.copiedBuffer(text.getBytes());
| train | train | 2014-10-31T21:37:05 | 2014-09-19T02:11:06Z | Scottmitch | val |
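The gold patch above null-checks the required `:method`/`:path` headers and rethrows any conversion failure as a stream-level `PROTOCOL_ERROR`, so only the offending stream is reset rather than the whole connection. A stdlib-only sketch of that wrap-and-rethrow pattern (the `StreamException` class and `checkAscii` helper below are simplified stand-ins for Netty's `Http2StreamException` and header validation, not real Netty API):

```java
// Simplified stand-in for Netty's Http2StreamException: a conversion failure
// is wrapped so the caller can reset just the offending stream (RST_STREAM)
// instead of tearing down the whole connection.
public class HeaderConversionSketch {
    static class StreamException extends Exception {
        final int streamId;
        StreamException(int streamId, String msg, Throwable cause) {
            super(msg, cause);
            this.streamId = streamId;
        }
    }

    // HTTP/1.x header values must be ASCII, so the translation layer rejects
    // HTTP/2 binary (non-ASCII) values rather than passing them through.
    static void checkAscii(int streamId, String name, String value) throws StreamException {
        try {
            for (int i = 0; i < value.length(); i++) {
                if (value.charAt(i) > 127) {
                    throw new IllegalArgumentException("non-ASCII value in header: " + name);
                }
            }
        } catch (IllegalArgumentException e) {
            throw new StreamException(streamId, "HTTP/2 to HTTP/1.x headers conversion error", e);
        }
    }

    public static void main(String[] args) {
        try {
            checkAscii(3, "x-demo", "çã");
            System.out.println("accepted");
        } catch (StreamException e) {
            System.out.println("rejected stream " + e.streamId); // prints "rejected stream 3"
        }
    }
}
```

This mirrors the shape of the test added above, where writing a `çã`/`Ãã` header pair is expected to surface as an `Http2StreamException` on the server side.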
netty/netty/2875_3106 | netty/netty | netty/netty/2875 | netty/netty/3106 | [
"timestamp(timedelta=12.0, similarity=0.9442379339087925)"
] | b33901c5a698e373ba6aa73abfe870bcb3f56629 | 3da3f527f9589e7aa4589084e55e99a415232fcb | [
"Note that https://github.com/netty/netty/pull/2907 was introduced to fix bugs in the original decompression code.\n",
"@nmittler - So I have looked at this briefly and have come up with the following options:\n1. Create a class which inherits from `DefaultHttp2OutboundFlowController` that overrides `writeData()`... | [
"@trustin - I think these got moved during the header back port (and then forward port). I think it is better to keep a common definition in the HttpHeaderValue class rather than an independent definition in http/2 and http codec (and potentially spdy). If you agree I will back port the portion related to this ch... | 2014-11-06T04:07:11Z | [
"feature"
] | HTTP/2 data write compression support | The HTTP/2 codec currently does not support compression on data writes. There should be an interface that facilitates compression in the native HTTP/2 codec and is also supported by the HTTP/1.x translation layer.
After https://github.com/netty/netty/issues/2793 is resolved there will be an opportunity to follow existing practices and possibly extract out commonalities (or leverage unit tests) between the two features.
| [
"codec-http/src/main/java/io/netty/handler/codec/http/HttpContentDecompressor.java",
"codec-http/src/main/java/io/netty/handler/codec/http/HttpHeaderValues.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/Deleg... | [
"codec-http/src/main/java/io/netty/handler/codec/http/HttpContentDecompressor.java",
"codec-http/src/main/java/io/netty/handler/codec/http/HttpHeaderValues.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/CompressorHttp2ConnectionEncoder.java",
"codec-http2/src/main/java/io/netty/handler/codec/h... | [
"codec-http2/src/test/java/io/netty/handler/codec/http2/DataCompressionHttp2Test.java"
] | diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/HttpContentDecompressor.java b/codec-http/src/main/java/io/netty/handler/codec/http/HttpContentDecompressor.java
index 45344677cd0..5fea256a4c0 100644
--- a/codec-http/src/main/java/io/netty/handler/codec/http/HttpContentDecompressor.java
+++ b/codec-http/src/main/java/io/netty/handler/codec/http/HttpContentDecompressor.java
@@ -15,8 +15,11 @@
*/
package io.netty.handler.codec.http;
+import static io.netty.handler.codec.http.HttpHeaderValues.DEFLATE;
+import static io.netty.handler.codec.http.HttpHeaderValues.GZIP;
+import static io.netty.handler.codec.http.HttpHeaderValues.X_DEFLATE;
+import static io.netty.handler.codec.http.HttpHeaderValues.X_GZIP;
import io.netty.channel.embedded.EmbeddedChannel;
-import io.netty.handler.codec.AsciiString;
import io.netty.handler.codec.compression.ZlibCodecFactory;
import io.netty.handler.codec.compression.ZlibWrapper;
@@ -26,16 +29,6 @@
* handler modifies the message, please refer to {@link HttpContentDecoder}.
*/
public class HttpContentDecompressor extends HttpContentDecoder {
-
- /**
- * {@code "x-deflate"}
- */
- private static final AsciiString X_DEFLATE = new AsciiString("x-deflate");
- /**
- * {@code "x-gzip"}
- */
- private static final AsciiString X_GZIP = new AsciiString("x-gzip");
-
private final boolean strict;
/**
@@ -57,11 +50,11 @@ public HttpContentDecompressor(boolean strict) {
@Override
protected EmbeddedChannel newContentDecoder(String contentEncoding) throws Exception {
- if (HttpHeaderValues.GZIP.equalsIgnoreCase(contentEncoding) ||
+ if (GZIP.equalsIgnoreCase(contentEncoding) ||
X_GZIP.equalsIgnoreCase(contentEncoding)) {
return new EmbeddedChannel(ZlibCodecFactory.newZlibDecoder(ZlibWrapper.GZIP));
}
- if (HttpHeaderValues.DEFLATE.equalsIgnoreCase(contentEncoding) ||
+ if (DEFLATE.equalsIgnoreCase(contentEncoding) ||
X_DEFLATE.equalsIgnoreCase(contentEncoding)) {
final ZlibWrapper wrapper = strict ? ZlibWrapper.ZLIB : ZlibWrapper.ZLIB_OR_NONE;
// To be strict, 'deflate' means ZLIB, but some servers were not implemented correctly.
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/HttpHeaderValues.java b/codec-http/src/main/java/io/netty/handler/codec/http/HttpHeaderValues.java
index ea907cba2bd..ce4534337b5 100644
--- a/codec-http/src/main/java/io/netty/handler/codec/http/HttpHeaderValues.java
+++ b/codec-http/src/main/java/io/netty/handler/codec/http/HttpHeaderValues.java
@@ -76,6 +76,10 @@ public final class HttpHeaderValues {
* {@code "deflate"}
*/
public static final AsciiString DEFLATE = new AsciiString("deflate");
+ /**
+ * {@code "x-deflate"}
+ */
+ public static final AsciiString X_DEFLATE = new AsciiString("x-deflate");
/**
* {@code "file"}
* See {@link HttpHeaderNames#CONTENT_DISPOSITION}
@@ -95,6 +99,10 @@ public final class HttpHeaderValues {
* {@code "gzip"}
*/
public static final AsciiString GZIP = new AsciiString("gzip");
+ /**
+ * {@code "x-gzip"}
+ */
+ public static final AsciiString X_GZIP = new AsciiString("x-gzip");
/**
* {@code "identity"}
*/
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/CompressorHttp2ConnectionEncoder.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/CompressorHttp2ConnectionEncoder.java
new file mode 100644
index 00000000000..b237c86aa2e
--- /dev/null
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/CompressorHttp2ConnectionEncoder.java
@@ -0,0 +1,289 @@
+/*
+ * Copyright 2014 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License, version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package io.netty.handler.codec.http2;
+
+import static io.netty.handler.codec.http.HttpHeaderNames.CONTENT_ENCODING;
+import static io.netty.handler.codec.http.HttpHeaderNames.CONTENT_LENGTH;
+import static io.netty.handler.codec.http.HttpHeaderValues.DEFLATE;
+import static io.netty.handler.codec.http.HttpHeaderValues.GZIP;
+import static io.netty.handler.codec.http.HttpHeaderValues.IDENTITY;
+import static io.netty.handler.codec.http.HttpHeaderValues.X_DEFLATE;
+import static io.netty.handler.codec.http.HttpHeaderValues.X_GZIP;
+import io.netty.buffer.ByteBuf;
+import io.netty.buffer.Unpooled;
+import io.netty.channel.ChannelFuture;
+import io.netty.channel.ChannelHandlerContext;
+import io.netty.channel.ChannelPromise;
+import io.netty.channel.ChannelPromiseAggregator;
+import io.netty.channel.embedded.EmbeddedChannel;
+import io.netty.handler.codec.AsciiString;
+import io.netty.handler.codec.ByteToMessageDecoder;
+import io.netty.handler.codec.compression.ZlibCodecFactory;
+import io.netty.handler.codec.compression.ZlibWrapper;
+
+/**
+ * A HTTP2 encoder that will compress data frames according to the {@code content-encoding} header for each stream.
+ * The compression provided by this class will be applied to the data for the entire stream.
+ */
+public class CompressorHttp2ConnectionEncoder extends DefaultHttp2ConnectionEncoder {
+ private static final Http2ConnectionAdapter CLEAN_UP_LISTENER = new Http2ConnectionAdapter() {
+ @Override
+ public void streamRemoved(Http2Stream stream) {
+ final EmbeddedChannel compressor = stream.compressor();
+ if (compressor != null) {
+ cleanup(stream, compressor);
+ }
+ }
+ };
+
+ private final int compressionLevel;
+ private final int windowBits;
+ private final int memLevel;
+
+ /**
+ * Builder for new instances of {@link CompressorHttp2ConnectionEncoder}
+ */
+ public static class Builder extends DefaultHttp2ConnectionEncoder.Builder {
+ protected int compressionLevel = 6;
+ protected int windowBits = 15;
+ protected int memLevel = 8;
+
+ public Builder compressionLevel(int compressionLevel) {
+ this.compressionLevel = compressionLevel;
+ return this;
+ }
+
+ public Builder windowBits(int windowBits) {
+ this.windowBits = windowBits;
+ return this;
+ }
+
+ public Builder memLevel(int memLevel) {
+ this.memLevel = memLevel;
+ return this;
+ }
+
+ @Override
+ public CompressorHttp2ConnectionEncoder build() {
+ return new CompressorHttp2ConnectionEncoder(this);
+ }
+ }
+
+ protected CompressorHttp2ConnectionEncoder(Builder builder) {
+ super(builder);
+ if (builder.compressionLevel < 0 || builder.compressionLevel > 9) {
+ throw new IllegalArgumentException("compressionLevel: " + builder.compressionLevel + " (expected: 0-9)");
+ }
+ if (builder.windowBits < 9 || builder.windowBits > 15) {
+ throw new IllegalArgumentException("windowBits: " + builder.windowBits + " (expected: 9-15)");
+ }
+ if (builder.memLevel < 1 || builder.memLevel > 9) {
+ throw new IllegalArgumentException("memLevel: " + builder.memLevel + " (expected: 1-9)");
+ }
+ this.compressionLevel = builder.compressionLevel;
+ this.windowBits = builder.windowBits;
+ this.memLevel = builder.memLevel;
+
+ connection().addListener(CLEAN_UP_LISTENER);
+ }
+
+ @Override
+ public ChannelFuture writeData(final ChannelHandlerContext ctx, final int streamId, ByteBuf data, int padding,
+ final boolean endOfStream, ChannelPromise promise) {
+ final Http2Stream stream = connection().stream(streamId);
+ final EmbeddedChannel compressor = stream == null ? null : stream.compressor();
+ if (compressor == null) {
+ // The compressor may be null if no compatible encoding type was found in this stream's headers
+ return super.writeData(ctx, streamId, data, padding, endOfStream, promise);
+ }
+
+ try {
+ // call retain here as it will call release after its written to the channel
+ compressor.writeOutbound(data.retain());
+ ByteBuf buf = nextReadableBuf(compressor);
+ if (buf == null) {
+ if (endOfStream) {
+ return super.writeData(ctx, streamId, Unpooled.EMPTY_BUFFER, padding, endOfStream, promise);
+ }
+ // END_STREAM is not set and the assumption is data is still forthcoming.
+ promise.setSuccess();
+ return promise;
+ }
+
+ ChannelPromiseAggregator aggregator = new ChannelPromiseAggregator(promise);
+ for (;;) {
+ final ByteBuf nextBuf = nextReadableBuf(compressor);
+ final boolean endOfStreamForBuf = nextBuf == null ? endOfStream : false;
+ ChannelPromise newPromise = ctx.newPromise();
+ aggregator.add(newPromise);
+
+ super.writeData(ctx, streamId, buf, padding, endOfStreamForBuf, newPromise);
+ if (nextBuf == null) {
+ break;
+ }
+
+ buf = nextBuf;
+ }
+ return promise;
+ } finally {
+ if (endOfStream) {
+ cleanup(stream, compressor);
+ }
+ }
+ }
+
+ @Override
+ public ChannelFuture writeHeaders(ChannelHandlerContext ctx, int streamId, Http2Headers headers, int padding,
+ boolean endStream, ChannelPromise promise) {
+ initCompressor(streamId, headers, endStream);
+ return super.writeHeaders(ctx, streamId, headers, padding, endStream, promise);
+ }
+
+ @Override
+ public ChannelFuture writeHeaders(final ChannelHandlerContext ctx, final int streamId, final Http2Headers headers,
+ final int streamDependency, final short weight, final boolean exclusive, final int padding,
+ final boolean endOfStream, final ChannelPromise promise) {
+ initCompressor(streamId, headers, endOfStream);
+ return super.writeHeaders(ctx, streamId, headers, streamDependency, weight, exclusive, padding, endOfStream,
+ promise);
+ }
+
+ /**
+ * Returns a new {@link EmbeddedChannel} that encodes the HTTP2 message content encoded in the specified
+ * {@code contentEncoding}.
+ *
+ * @param contentEncoding the value of the {@code content-encoding} header
+ * @return a new {@link ByteToMessageDecoder} if the specified encoding is supported. {@code null} otherwise
+ * (alternatively, you can throw a {@link Http2Exception} to block unknown encoding).
+ * @throws Http2Exception If the specified encoding is not not supported and warrants an exception
+ */
+ protected EmbeddedChannel newContentCompressor(AsciiString contentEncoding) throws Http2Exception {
+ if (GZIP.equalsIgnoreCase(contentEncoding) || X_GZIP.equalsIgnoreCase(contentEncoding)) {
+ return newCompressionChannel(ZlibWrapper.GZIP);
+ }
+ if (DEFLATE.equalsIgnoreCase(contentEncoding) || X_DEFLATE.equalsIgnoreCase(contentEncoding)) {
+ return newCompressionChannel(ZlibWrapper.ZLIB);
+ }
+ // 'identity' or unsupported
+ return null;
+ }
+
+ /**
+ * Returns the expected content encoding of the decoded content. Returning {@code contentEncoding} is the default
+ * behavior, which is the case for most compressors.
+ *
+ * @param contentEncoding the value of the {@code content-encoding} header
+ * @return the expected content encoding of the new content.
+ * @throws Http2Exception if the {@code contentEncoding} is not supported and warrants an exception
+ */
+ protected AsciiString getTargetContentEncoding(AsciiString contentEncoding) throws Http2Exception {
+ return contentEncoding;
+ }
+
+ /**
+ * Generate a new instance of an {@link EmbeddedChannel} capable of compressing data
+ * @param wrapper Defines what type of encoder should be used
+ */
+ private EmbeddedChannel newCompressionChannel(ZlibWrapper wrapper) {
+ return new EmbeddedChannel(ZlibCodecFactory.newZlibEncoder(wrapper, compressionLevel, windowBits,
+ memLevel));
+ }
+
+ /**
+ * Checks if a new compressor object is needed for the stream identified by {@code streamId}. This method will
+ * modify the {@code content-encoding} header contained in {@code headers}.
+ *
+ * @param streamId The identifier for the headers inside {@code headers}
+ * @param headers Object representing headers which are to be written
+ * @param endOfStream Indicates if the stream has ended
+ */
+ private void initCompressor(int streamId, Http2Headers headers, boolean endOfStream) {
+ final Http2Stream stream = connection().stream(streamId);
+ if (stream == null) {
+ return;
+ }
+
+ EmbeddedChannel compressor = stream.compressor();
+ if (compressor == null) {
+ if (!endOfStream) {
+ AsciiString encoding = headers.get(CONTENT_ENCODING);
+ if (encoding == null) {
+ encoding = IDENTITY;
+ }
+ try {
+ compressor = newContentCompressor(encoding);
+ if (compressor != null) {
+ AsciiString targetContentEncoding = getTargetContentEncoding(encoding);
+ if (IDENTITY.equalsIgnoreCase(targetContentEncoding)) {
+ headers.remove(CONTENT_ENCODING);
+ } else {
+ headers.set(CONTENT_ENCODING, targetContentEncoding);
+ }
+ }
+ } catch (Throwable ignored) {
+ // Ignore
+ }
+ }
+ } else if (endOfStream) {
+ cleanup(stream, compressor);
+ }
+
+ if (compressor != null) {
+ // The content length will be for the decompressed data. Since we will compress the data
+ // this content-length will not be correct. Instead of queuing messages or delaying sending
+ // header frames...just remove the content-length header
+ headers.remove(CONTENT_LENGTH);
+ }
+ }
+
+ /**
+ * Release remaining content from {@link EmbeddedChannel} and remove the compressor from the {@link Http2Stream}.
+ *
+ * @param stream The stream for which {@code compressor} is the compressor for
+ * @param decompressor The compressor for {@code stream}
+ */
+ private static void cleanup(Http2Stream stream, EmbeddedChannel compressor) {
+ if (compressor.finish()) {
+ for (;;) {
+ final ByteBuf buf = compressor.readOutbound();
+ if (buf == null) {
+ break;
+ }
+ buf.release();
+ }
+ }
+ stream.compressor(null);
+ }
+
+ /**
+ * Read the next compressed {@link ByteBuf} from the {@link EmbeddedChannel} or {@code null} if one does not exist.
+ *
+ * @param decompressor The channel to read from
+ * @return The next decoded {@link ByteBuf} from the {@link EmbeddedChannel} or {@code null} if one does not exist
+ */
+ private static ByteBuf nextReadableBuf(EmbeddedChannel compressor) {
+ for (;;) {
+ final ByteBuf buf = compressor.readOutbound();
+ if (buf == null) {
+ return null;
+ }
+ if (!buf.isReadable()) {
+ buf.release();
+ continue;
+ }
+ return buf;
+ }
+ }
+}
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
index 635d755dbc1..f78b35cefa5 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
@@ -221,6 +221,7 @@ private class DefaultStream implements Http2Stream {
private FlowState inboundFlow;
private FlowState outboundFlow;
private EmbeddedChannel decompressor;
+ private EmbeddedChannel compressor;
private Object data;
DefaultStream(int id) {
@@ -310,6 +311,19 @@ public EmbeddedChannel decompressor() {
return decompressor;
}
+ @Override
+ public void compressor(EmbeddedChannel compressor) {
+ if (this.compressor != null && compressor != null) {
+ throw new IllegalStateException("compressor can not be reassigned");
+ }
+ this.compressor = compressor;
+ }
+
+ @Override
+ public EmbeddedChannel compressor() {
+ return compressor;
+ }
+
@Override
public FlowState inboundFlow() {
return inboundFlow;
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DelegatingDecompressorFrameListener.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DelegatingDecompressorFrameListener.java
index bdb0c4a0194..88b94885e06 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DelegatingDecompressorFrameListener.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DelegatingDecompressorFrameListener.java
@@ -14,6 +14,13 @@
*/
package io.netty.handler.codec.http2;
+import static io.netty.handler.codec.http.HttpHeaderNames.CONTENT_ENCODING;
+import static io.netty.handler.codec.http.HttpHeaderNames.CONTENT_LENGTH;
+import static io.netty.handler.codec.http.HttpHeaderValues.DEFLATE;
+import static io.netty.handler.codec.http.HttpHeaderValues.GZIP;
+import static io.netty.handler.codec.http.HttpHeaderValues.IDENTITY;
+import static io.netty.handler.codec.http.HttpHeaderValues.X_DEFLATE;
+import static io.netty.handler.codec.http.HttpHeaderValues.X_GZIP;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerContext;
@@ -22,24 +29,12 @@
import io.netty.handler.codec.ByteToMessageDecoder;
import io.netty.handler.codec.compression.ZlibCodecFactory;
import io.netty.handler.codec.compression.ZlibWrapper;
-import io.netty.handler.codec.http.HttpHeaderNames;
-import io.netty.handler.codec.http.HttpHeaderValues;
/**
* A HTTP2 frame listener that will decompress data frames according to the {@code content-encoding} header for each
- * stream.
+ * stream. The decompression provided by this class will be applied to the data for the entire stream.
*/
public class DelegatingDecompressorFrameListener extends Http2FrameListenerDecorator {
-
- /**
- * {@code "x-deflate"}
- */
- private static final AsciiString X_DEFLATE = new AsciiString("x-deflate");
- /**
- * {@code "x-gzip"}
- */
- private static final AsciiString X_GZIP = new AsciiString("x-gzip");
-
private static final Http2ConnectionAdapter CLEAN_UP_LISTENER = new Http2ConnectionAdapter() {
@Override
public void streamRemoved(Http2Stream stream) {
@@ -72,6 +67,7 @@ public void onDataRead(ChannelHandlerContext ctx, int streamId, ByteBuf data, in
final Http2Stream stream = connection.stream(streamId);
final EmbeddedChannel decompressor = stream == null ? null : stream.decompressor();
if (decompressor == null) {
+ // The decompressor may be null if no compatible encoding type was found in this stream's headers
listener.onDataRead(ctx, streamId, data, padding, endOfStream);
return;
}
@@ -90,12 +86,13 @@ public void onDataRead(ChannelHandlerContext ctx, int streamId, ByteBuf data, in
} else {
for (;;) {
final ByteBuf nextBuf = nextReadableBuf(decompressor);
+ final boolean endOfStreamForBuf = nextBuf == null ? endOfStream : false;
+
+ listener.onDataRead(ctx, streamId, buf, padding, endOfStreamForBuf);
if (nextBuf == null) {
- listener.onDataRead(ctx, streamId, buf, padding, endOfStream);
break;
}
- listener.onDataRead(ctx, streamId, buf, padding, false);
buf = nextBuf;
}
}
@@ -130,11 +127,11 @@ public void onHeadersRead(ChannelHandlerContext ctx, int streamId, Http2Headers
* @throws Http2Exception If the specified encoding is not not supported and warrants an exception
*/
protected EmbeddedChannel newContentDecompressor(AsciiString contentEncoding) throws Http2Exception {
- if (HttpHeaderValues.GZIP.equalsIgnoreCase(contentEncoding) ||
+ if (GZIP.equalsIgnoreCase(contentEncoding) ||
X_GZIP.equalsIgnoreCase(contentEncoding)) {
return new EmbeddedChannel(ZlibCodecFactory.newZlibDecoder(ZlibWrapper.GZIP));
}
- if (HttpHeaderValues.DEFLATE.equalsIgnoreCase(contentEncoding) ||
+ if (DEFLATE.equalsIgnoreCase(contentEncoding) ||
X_DEFLATE.equalsIgnoreCase(contentEncoding)) {
final ZlibWrapper wrapper = strict ? ZlibWrapper.ZLIB : ZlibWrapper.ZLIB_OR_NONE;
// To be strict, 'deflate' means ZLIB, but some servers were not implemented correctly.
@@ -154,7 +151,7 @@ protected EmbeddedChannel newContentDecompressor(AsciiString contentEncoding) th
*/
protected AsciiString getTargetContentEncoding(@SuppressWarnings("UnusedParameters") AsciiString contentEncoding)
throws Http2Exception {
- return HttpHeaderValues.IDENTITY;
+ return IDENTITY;
}
/**
@@ -176,9 +173,9 @@ private void initDecompressor(int streamId, Http2Headers headers, boolean endOfS
if (decompressor == null) {
if (!endOfStream) {
// Determine the content encoding.
- AsciiString contentEncoding = headers.get(HttpHeaderNames.CONTENT_ENCODING);
+ AsciiString contentEncoding = headers.get(CONTENT_ENCODING);
if (contentEncoding == null) {
- contentEncoding = HttpHeaderValues.IDENTITY;
+ contentEncoding = IDENTITY;
}
decompressor = newContentDecompressor(contentEncoding);
if (decompressor != null) {
@@ -186,21 +183,22 @@ private void initDecompressor(int streamId, Http2Headers headers, boolean endOfS
// Decode the content and remove or replace the existing headers
// so that the message looks like a decoded message.
AsciiString targetContentEncoding = getTargetContentEncoding(contentEncoding);
- if (HttpHeaderValues.IDENTITY.equalsIgnoreCase(targetContentEncoding)) {
- headers.remove(HttpHeaderNames.CONTENT_ENCODING);
+ if (IDENTITY.equalsIgnoreCase(targetContentEncoding)) {
+ headers.remove(CONTENT_ENCODING);
} else {
- headers.set(HttpHeaderNames.CONTENT_ENCODING, targetContentEncoding);
+ headers.set(CONTENT_ENCODING, targetContentEncoding);
}
}
}
} else if (endOfStream) {
cleanup(stream, decompressor);
}
+
if (decompressor != null) {
// The content length will be for the compressed data. Since we will decompress the data
// this content-length will not be correct. Instead of queuing messages or delaying sending
// header frames...just remove the content-length header
- headers.remove(HttpHeaderNames.CONTENT_LENGTH);
+ headers.remove(CONTENT_LENGTH);
}
}
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Stream.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Stream.java
index cabbdb8f6f5..34f5c3971eb 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Stream.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Stream.java
@@ -153,6 +153,16 @@ enum State {
*/
EmbeddedChannel decompressor();
+ /**
+ * Associate an object responsible for compressing data frames for this stream
+ */
+ void compressor(EmbeddedChannel decompressor);
+
+ /**
+ * Get the object capable of compressing data frames for this stream
+ */
+ EmbeddedChannel compressor();
+
/**
* Gets the in-bound flow control state for this stream.
*/
| diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DataCompressionHttp2Test.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DataCompressionHttp2Test.java
index ca5fb29f8b2..876f824eac8 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DataCompressionHttp2Test.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DataCompressionHttp2Test.java
@@ -14,6 +14,17 @@
*/
package io.netty.handler.codec.http2;
+import static io.netty.handler.codec.http2.Http2CodecUtil.DEFAULT_PRIORITY_WEIGHT;
+import static io.netty.handler.codec.http2.Http2TestUtil.as;
+import static io.netty.handler.codec.http2.Http2TestUtil.runInChannel;
+import static java.util.concurrent.TimeUnit.MILLISECONDS;
+import static java.util.concurrent.TimeUnit.SECONDS;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+import static org.mockito.Matchers.any;
+import static org.mockito.Matchers.eq;
+import static org.mockito.Mockito.times;
+import static org.mockito.Mockito.verify;
import io.netty.bootstrap.Bootstrap;
import io.netty.bootstrap.ServerBootstrap;
import io.netty.buffer.ByteBuf;
@@ -35,8 +46,15 @@
import io.netty.handler.codec.compression.ZlibWrapper;
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpHeaderValues;
+import io.netty.handler.codec.http2.Http2TestUtil.FrameAdapter;
+import io.netty.handler.codec.http2.Http2TestUtil.Http2Runnable;
import io.netty.util.NetUtil;
import io.netty.util.concurrent.Future;
+
+import java.net.InetSocketAddress;
+import java.util.List;
+import java.util.concurrent.CountDownLatch;
+
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
@@ -44,15 +62,6 @@
import org.mockito.Mock;
import org.mockito.MockitoAnnotations;
-import java.net.InetSocketAddress;
-import java.util.List;
-import java.util.concurrent.CountDownLatch;
-
-import static io.netty.handler.codec.http2.Http2TestUtil.*;
-import static java.util.concurrent.TimeUnit.*;
-import static org.junit.Assert.*;
-import static org.mockito.Mockito.*;
-
/**
* Test for data decompression in the HTTP/2 codec.
*/
@@ -66,9 +75,14 @@ public class DataCompressionHttp2Test {
private Http2FrameListener serverListener;
@Mock
private Http2FrameListener clientListener;
+ @Mock
+ private Http2LifecycleManager serverLifeCycleManager;
+ @Mock
+ private Http2LifecycleManager clientLifeCycleManager;
private ByteBufAllocator alloc;
- private Http2FrameWriter frameWriter;
+ private Http2ConnectionEncoder serverEncoder;
+ private Http2ConnectionEncoder clientEncoder;
private ServerBootstrap sb;
private Bootstrap cb;
private Channel serverChannel;
@@ -107,22 +121,22 @@ public void teardown() throws InterruptedException {
@Test
public void justHeadersNoData() throws Exception {
- bootstrapEnv(2, 1);
- final Http2Headers headers =
- new DefaultHttp2Headers().method(GET).path(PATH)
- .set(HttpHeaderNames.CONTENT_ENCODING, HttpHeaderValues.GZIP);
+ bootstrapEnv(1, 1);
+ final Http2Headers headers = new DefaultHttp2Headers().method(GET).path(PATH)
+ .set(HttpHeaderNames.CONTENT_ENCODING, HttpHeaderValues.GZIP);
// Required because the decompressor intercepts the onXXXRead events before
// our {@link Http2TestUtil$FrameAdapter} does.
FrameAdapter.getOrCreateStream(serverConnection, 3, false);
runInChannel(clientChannel, new Http2Runnable() {
@Override
public void run() {
- frameWriter.writeHeaders(ctxClient(), 3, headers, 0, true, newPromiseClient());
+ clientEncoder.writeHeaders(ctxClient(), 3, headers, 0, true, newPromiseClient());
ctxClient().flush();
}
});
awaitServer();
- verify(serverListener).onHeadersRead(any(ChannelHandlerContext.class), eq(3), eq(headers), eq(0), eq(true));
+ verify(serverListener).onHeadersRead(any(ChannelHandlerContext.class), eq(3), eq(headers), eq(0),
+ eq(DEFAULT_PRIORITY_WEIGHT), eq(false), eq(0), eq(true));
}
@Test
@@ -133,17 +147,16 @@ public void gzipEncodingSingleEmptyMessage() throws Exception {
final EmbeddedChannel encoder = new EmbeddedChannel(ZlibCodecFactory.newZlibEncoder(ZlibWrapper.GZIP));
try {
final ByteBuf encodedData = encodeData(data, encoder);
- final Http2Headers headers =
- new DefaultHttp2Headers().method(POST).path(PATH)
- .set(HttpHeaderNames.CONTENT_ENCODING, HttpHeaderValues.GZIP);
+ final Http2Headers headers = new DefaultHttp2Headers().method(POST).path(PATH)
+ .set(HttpHeaderNames.CONTENT_ENCODING, HttpHeaderValues.GZIP);
// Required because the decompressor intercepts the onXXXRead events before
// our {@link Http2TestUtil$FrameAdapter} does.
FrameAdapter.getOrCreateStream(serverConnection, 3, false);
runInChannel(clientChannel, new Http2Runnable() {
@Override
public void run() {
- frameWriter.writeHeaders(ctxClient(), 3, headers, 0, false, newPromiseClient());
- frameWriter.writeData(ctxClient(), 3, encodedData, 0, true, newPromiseClient());
+ clientEncoder.writeHeaders(ctxClient(), 3, headers, 0, false, newPromiseClient());
+ clientEncoder.writeData(ctxClient(), 3, encodedData, 0, true, newPromiseClient());
ctxClient().flush();
}
});
@@ -151,7 +164,7 @@ public void run() {
data.resetReaderIndex();
ArgumentCaptor<ByteBuf> dataCaptor = ArgumentCaptor.forClass(ByteBuf.class);
verify(serverListener).onDataRead(any(ChannelHandlerContext.class), eq(3), dataCaptor.capture(), eq(0),
- eq(true));
+ eq(true));
dataCapture = dataCaptor.getAllValues();
assertEquals(data, dataCapture.get(0));
} finally {
@@ -168,17 +181,16 @@ public void gzipEncodingSingleMessage() throws Exception {
final EmbeddedChannel encoder = new EmbeddedChannel(ZlibCodecFactory.newZlibEncoder(ZlibWrapper.GZIP));
try {
final ByteBuf encodedData = encodeData(data, encoder);
- final Http2Headers headers =
- new DefaultHttp2Headers().method(POST).path(PATH)
- .set(HttpHeaderNames.CONTENT_ENCODING, HttpHeaderValues.GZIP);
+ final Http2Headers headers = new DefaultHttp2Headers().method(POST).path(PATH)
+ .set(HttpHeaderNames.CONTENT_ENCODING, HttpHeaderValues.GZIP);
// Required because the decompressor intercepts the onXXXRead events before
// our {@link Http2TestUtil$FrameAdapter} does.
FrameAdapter.getOrCreateStream(serverConnection, 3, false);
runInChannel(clientChannel, new Http2Runnable() {
@Override
public void run() {
- frameWriter.writeHeaders(ctxClient(), 3, headers, 0, false, newPromiseClient());
- frameWriter.writeData(ctxClient(), 3, encodedData, 0, true, newPromiseClient());
+ clientEncoder.writeHeaders(ctxClient(), 3, headers, 0, false, newPromiseClient());
+ clientEncoder.writeData(ctxClient(), 3, encodedData, 0, true, newPromiseClient());
ctxClient().flush();
}
});
@@ -186,7 +198,7 @@ public void run() {
data.resetReaderIndex();
ArgumentCaptor<ByteBuf> dataCaptor = ArgumentCaptor.forClass(ByteBuf.class);
verify(serverListener).onDataRead(any(ChannelHandlerContext.class), eq(3), dataCaptor.capture(), eq(0),
- eq(true));
+ eq(true));
dataCapture = dataCaptor.getAllValues();
assertEquals(data, dataCapture.get(0));
} finally {
@@ -206,18 +218,17 @@ public void gzipEncodingMultipleMessages() throws Exception {
try {
final ByteBuf encodedData1 = encodeData(data1, encoder);
final ByteBuf encodedData2 = encodeData(data2, encoder);
- final Http2Headers headers =
- new DefaultHttp2Headers().method(POST).path(PATH)
- .set(HttpHeaderNames.CONTENT_ENCODING, HttpHeaderValues.GZIP);
+ final Http2Headers headers = new DefaultHttp2Headers().method(POST).path(PATH)
+ .set(HttpHeaderNames.CONTENT_ENCODING, HttpHeaderValues.GZIP);
// Required because the decompressor intercepts the onXXXRead events before
// our {@link Http2TestUtil$FrameAdapter} does.
FrameAdapter.getOrCreateStream(serverConnection, 3, false);
runInChannel(clientChannel, new Http2Runnable() {
@Override
public void run() {
- frameWriter.writeHeaders(ctxClient(), 3, headers, 0, false, newPromiseClient());
- frameWriter.writeData(ctxClient(), 3, encodedData1, 0, false, newPromiseClient());
- frameWriter.writeData(ctxClient(), 3, encodedData2, 0, true, newPromiseClient());
+ clientEncoder.writeHeaders(ctxClient(), 3, headers, 0, false, newPromiseClient());
+ clientEncoder.writeData(ctxClient(), 3, encodedData1, 0, false, newPromiseClient());
+ clientEncoder.writeData(ctxClient(), 3, encodedData2, 0, true, newPromiseClient());
ctxClient().flush();
}
});
@@ -227,7 +238,7 @@ public void run() {
ArgumentCaptor<ByteBuf> dataCaptor = ArgumentCaptor.forClass(ByteBuf.class);
ArgumentCaptor<Boolean> endStreamCaptor = ArgumentCaptor.forClass(Boolean.class);
verify(serverListener, times(2)).onDataRead(any(ChannelHandlerContext.class), eq(3), dataCaptor.capture(),
- eq(0), endStreamCaptor.capture());
+ eq(0), endStreamCaptor.capture());
dataCapture = dataCaptor.getAllValues();
assertEquals(data1, dataCapture.get(0));
assertEquals(data2, dataCapture.get(1));
@@ -252,16 +263,15 @@ public void deflateEncodingSingleLargeMessageReducedWindow() throws Exception {
data.writeByte((byte) 'a');
}
final ByteBuf encodedData = encodeData(data, encoder);
- final Http2Headers headers =
- new DefaultHttp2Headers().method(POST).path(PATH)
- .set(HttpHeaderNames.CONTENT_ENCODING, HttpHeaderValues.DEFLATE);
+ final Http2Headers headers = new DefaultHttp2Headers().method(POST).path(PATH)
+ .set(HttpHeaderNames.CONTENT_ENCODING, HttpHeaderValues.DEFLATE);
final Http2Settings settings = new Http2Settings();
// Assume the compression operation will reduce the size by at least 10 bytes
settings.initialWindowSize(BUFFER_SIZE - 10);
runInChannel(serverConnectedChannel, new Http2Runnable() {
@Override
public void run() {
- frameWriter.writeSettings(ctxServer(), settings, newPromiseServer());
+ serverEncoder.writeSettings(ctxServer(), settings, newPromiseServer());
ctxServer().flush();
}
});
@@ -273,9 +283,9 @@ public void run() {
runInChannel(clientChannel, new Http2Runnable() {
@Override
public void run() {
- frameWriter.writeSettings(ctxClient(), settings, newPromiseClient());
- frameWriter.writeHeaders(ctxClient(), 3, headers, 0, false, newPromiseClient());
- frameWriter.writeData(ctxClient(), 3, encodedData, 0, true, newPromiseClient());
+ clientEncoder.writeSettings(ctxClient(), settings, newPromiseClient());
+ clientEncoder.writeHeaders(ctxClient(), 3, headers, 0, false, newPromiseClient());
+ clientEncoder.writeData(ctxClient(), 3, encodedData, 0, true, newPromiseClient());
ctxClient().flush();
}
});
@@ -283,7 +293,7 @@ public void run() {
data.resetReaderIndex();
ArgumentCaptor<ByteBuf> dataCaptor = ArgumentCaptor.forClass(ByteBuf.class);
verify(serverListener).onDataRead(any(ChannelHandlerContext.class), eq(3), dataCaptor.capture(), eq(0),
- eq(true));
+ eq(true));
dataCapture = dataCaptor.getAllValues();
assertEquals(data, dataCapture.get(0));
} finally {
@@ -329,7 +339,6 @@ private void bootstrapEnv(int serverCountDown, int clientCountDown) throws Excep
serverLatch = new CountDownLatch(serverCountDown);
clientLatch = new CountDownLatch(clientCountDown);
- frameWriter = new DefaultHttp2FrameWriter();
serverConnection = new DefaultHttp2Connection(true);
final CountDownLatch latch = new CountDownLatch(1);
@@ -339,9 +348,13 @@ private void bootstrapEnv(int serverCountDown, int clientCountDown) throws Excep
@Override
protected void initChannel(Channel ch) throws Exception {
ChannelPipeline p = ch.pipeline();
- serverAdapter = new FrameAdapter(serverConnection,
- new DelegatingDecompressorFrameListener(serverConnection, serverListener),
- serverLatch);
+ CompressorHttp2ConnectionEncoder.Builder builder = new CompressorHttp2ConnectionEncoder.Builder();
+ Http2FrameWriter writer = new DefaultHttp2FrameWriter();
+ serverEncoder = builder.connection(serverConnection).frameWriter(writer)
+ .outboundFlow(new DefaultHttp2OutboundFlowController(serverConnection, writer))
+ .lifecycleManager(serverLifeCycleManager).build();
+ serverAdapter = new FrameAdapter(serverConnection, new DelegatingDecompressorFrameListener(
+ serverConnection, serverListener), serverLatch);
p.addLast("reader", serverAdapter);
p.addLast(Http2CodecUtil.ignoreSettingsHandler());
serverConnectedChannel = ch;
@@ -355,7 +368,13 @@ protected void initChannel(Channel ch) throws Exception {
@Override
protected void initChannel(Channel ch) throws Exception {
ChannelPipeline p = ch.pipeline();
- clientAdapter = new FrameAdapter(clientListener, clientLatch);
+ Http2Connection connection = new DefaultHttp2Connection(false);
+ Http2FrameWriter writer = new DefaultHttp2FrameWriter();
+ CompressorHttp2ConnectionEncoder.Builder builder = new CompressorHttp2ConnectionEncoder.Builder();
+ clientEncoder = builder.connection(connection).frameWriter(writer)
+ .outboundFlow(new DefaultHttp2OutboundFlowController(connection, writer))
+ .lifecycleManager(clientLifeCycleManager).build();
+ clientAdapter = new FrameAdapter(connection, clientListener, clientLatch);
p.addLast("reader", clientAdapter);
p.addLast(Http2CodecUtil.ignoreSettingsHandler());
}
| test | train | 2014-11-06T05:32:26 | 2014-09-09T02:46:12Z | Scottmitch | val |
netty/netty/3157_3158 | netty/netty | netty/netty/3157 | netty/netty/3158 | [
"timestamp(timedelta=104.0, similarity=0.8532341873259105)"
] | c29e703275beffc3b9b127d4cbf5eaae8c5acd08 | 71b643b29e845e1c7bc8a60d9a58ac5f086d1983 | [
"@trustin Not trying to be picky or anything, but wouldn't it be better to call the enum `HttpStatus` or `HttpStatusType` or `HttpStatusCategory` or `HttpStatusGroup` instead of `HttpStatusClass`? Ending it with `Class` just feels weird to me. [Some other synonyms of classification/type](http://www.thesaurus.com/b... | [] | 2014-11-20T10:51:38Z | [
"feature"
] | Add the method that tells the class of HTTP status code to HttpResponseStatus | It would be convenient to have an easy way to classify an HttpResponseStatus based on the first digit of the HTTP status code, as defined in RFC 2616:
- Informational 1xx
- Success 2xx
- Redirection 3xx
- Client Error 4xx
- Server Error 5xx
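The enum introduced in the gold patch below turns this classification into a simple range check per class. The following is a minimal, standalone sketch of that idea — the name `MiniHttpStatusClass` is hypothetical, and this is not the actual Netty code, though the `valueOf(int)` / `contains(int)` shape mirrors the patch:

```java
// Standalone sketch of first-digit status-code classification.
// MiniHttpStatusClass is a hypothetical name; the real enum in the
// gold patch is io.netty.handler.codec.http.HttpStatusClass.
enum MiniHttpStatusClass {
    INFORMATIONAL(100, 200),
    SUCCESS(200, 300),
    REDIRECTION(300, 400),
    CLIENT_ERROR(400, 500),
    SERVER_ERROR(500, 600),
    // UNKNOWN covers everything outside 100..599, so it overrides contains().
    UNKNOWN(0, 0) {
        @Override
        public boolean contains(int code) {
            return code < 100 || code >= 600;
        }
    };

    private final int min;
    private final int max;

    MiniHttpStatusClass(int min, int max) {
        this.min = min;
        this.max = max;
    }

    /** Returns true if and only if the status code falls into this class. */
    public boolean contains(int code) {
        return code >= min && code < max;
    }

    /** Returns the class of the specified HTTP status code. */
    public static MiniHttpStatusClass valueOf(int code) {
        for (MiniHttpStatusClass c : values()) {
            if (c != UNKNOWN && c.contains(code)) {
                return c;
            }
        }
        return UNKNOWN;
    }
}
```

With this in place, `MiniHttpStatusClass.valueOf(404)` yields `CLIENT_ERROR`, and an `HttpResponseStatus` object can compute the classification once and cache it per status, as the patch does with `codeClass()`.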
| [
"codec-http/src/main/java/io/netty/handler/codec/http/HttpResponseStatus.java"
] | [
"codec-http/src/main/java/io/netty/handler/codec/http/HttpResponseStatus.java",
"codec-http/src/main/java/io/netty/handler/codec/http/HttpStatusClass.java"
] | [] | diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/HttpResponseStatus.java b/codec-http/src/main/java/io/netty/handler/codec/http/HttpResponseStatus.java
index 7e4fba3fc98..b5fffee57ae 100644
--- a/codec-http/src/main/java/io/netty/handler/codec/http/HttpResponseStatus.java
+++ b/codec-http/src/main/java/io/netty/handler/codec/http/HttpResponseStatus.java
@@ -435,25 +435,7 @@ public static HttpResponseStatus valueOf(int code) {
return NETWORK_AUTHENTICATION_REQUIRED;
}
- final String reasonPhrase;
-
- if (code < 100) {
- reasonPhrase = "Unknown Status";
- } else if (code < 200) {
- reasonPhrase = "Informational";
- } else if (code < 300) {
- reasonPhrase = "Successful";
- } else if (code < 400) {
- reasonPhrase = "Redirection";
- } else if (code < 500) {
- reasonPhrase = "Client Error";
- } else if (code < 600) {
- reasonPhrase = "Server Error";
- } else {
- reasonPhrase = "Unknown Status";
- }
-
- return new HttpResponseStatus(code, reasonPhrase + " (" + code + ')');
+ return new HttpResponseStatus(code);
}
/**
@@ -488,13 +470,20 @@ public static HttpResponseStatus parseLine(CharSequence line) {
private final int code;
private final AsciiString codeAsText;
+ private HttpStatusClass codeClass;
private final String reasonPhrase;
private final byte[] bytes;
/**
- * Creates a new instance with the specified {@code code} and its
- * {@code reasonPhrase}.
+ * Creates a new instance with the specified {@code code} and the auto-generated default reason phrase.
+ */
+ private HttpResponseStatus(int code) {
+ this(code, HttpStatusClass.valueOf(code).defaultReasonPhrase() + " (" + code + ')', false);
+ }
+
+ /**
+ * Creates a new instance with the specified {@code code} and its {@code reasonPhrase}.
*/
public HttpResponseStatus(int code, String reasonPhrase) {
this(code, reasonPhrase, false);
@@ -552,6 +541,17 @@ public String reasonPhrase() {
return reasonPhrase;
}
+ /**
+ * Returns the class of this {@link HttpResponseStatus}
+ */
+ public HttpStatusClass codeClass() {
+ HttpStatusClass type = this.codeClass;
+ if (type == null) {
+ this.codeClass = type = HttpStatusClass.valueOf(code);
+ }
+ return type;
+ }
+
@Override
public int hashCode() {
return code();
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/HttpStatusClass.java b/codec-http/src/main/java/io/netty/handler/codec/http/HttpStatusClass.java
new file mode 100644
index 00000000000..62d9092fb4a
--- /dev/null
+++ b/codec-http/src/main/java/io/netty/handler/codec/http/HttpStatusClass.java
@@ -0,0 +1,100 @@
+/*
+ * Copyright 2014 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License,
+ * version 2.0 (the "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.netty.handler.codec.http;
+
+import io.netty.handler.codec.AsciiString;
+
+/**
+ * The class of HTTP status.
+ */
+public enum HttpStatusClass {
+ /**
+ * The informational class (1xx)
+ */
+ INFORMATIONAL(100, 200, "Informational"),
+ /**
+ * The success class (2xx)
+ */
+ SUCCESS(200, 300, "Success"),
+ /**
+ * The redirection class (3xx)
+ */
+ REDIRECTION(300, 400, "Redirection"),
+ /**
+ * The client error class (4xx)
+ */
+ CLIENT_ERROR(400, 500, "Client Error"),
+ /**
+ * The server error class (5xx)
+ */
+ SERVER_ERROR(500, 600, "Server Error"),
+ /**
+ * The unknown class
+ */
+ UNKNOWN(0, 0, "Unknown Status") {
+ @Override
+ public boolean contains(int code) {
+ return code < 100 || code >= 600;
+ }
+ };
+
+ /**
+ * Returns the class of the specified HTTP status code.
+ */
+ public static HttpStatusClass valueOf(int code) {
+ if (INFORMATIONAL.contains(code)) {
+ return INFORMATIONAL;
+ }
+ if (SUCCESS.contains(code)) {
+ return SUCCESS;
+ }
+ if (REDIRECTION.contains(code)) {
+ return REDIRECTION;
+ }
+ if (CLIENT_ERROR.contains(code)) {
+ return CLIENT_ERROR;
+ }
+ if (SERVER_ERROR.contains(code)) {
+ return SERVER_ERROR;
+ }
+ return UNKNOWN;
+ }
+
+ private final int min;
+ private final int max;
+ private final AsciiString defaultReasonPhrase;
+
+ HttpStatusClass(int min, int max, String defaultReasonPhrase) {
+ this.min = min;
+ this.max = max;
+ this.defaultReasonPhrase = new AsciiString(defaultReasonPhrase);
+ }
+
+ /**
+ * Returns {@code true} if and only if the specified HTTP status code falls into this class.
+ */
+ public boolean contains(int code) {
+ return code >= min && code < max;
+ }
+
+ /**
+ * Returns the default reason phrase of this HTTP status class.
+ */
+ AsciiString defaultReasonPhrase() {
+ return defaultReasonPhrase;
+ }
+}
| null | test | train | 2014-11-20T12:41:09 | 2014-11-20T10:23:21Z | trustin | val |
netty/netty/3114_3165 | netty/netty | netty/netty/3114 | netty/netty/3165 | [
"timestamp(timedelta=128033.0, similarity=0.8549278665067631)"
] | 908b68da03eb9b5f20ac5b49ae721f8a8d845f14 | 1f706d0c372196631f6daf1f0ea6f450479587b2 | [
"@Scottmitch I created this based on your suggestion in review of #3098. I figured it made sense to assign to you since you've done the work on the HTTP layer.\n",
"@nmittler - Thanks. I don't think this is tied to the HTTP layer but it is related to other code I have done. I'll put something together after yo... | [] | 2014-11-21T00:31:07Z | [
"improvement"
] | Investigate use of HTTP/2 inbound flow control on decompressor listener | Once #3098 is cherry-picked, the DelegatingDecompressorFrameListener will have the ability to participate in application-level flow control. It currently just opts out by returning all of the bytes as "processed". We should investigate whether it is worthwhile to make use of inbound flow control in this class.
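The gold patch for this instance also fixes a reference-counting detail: `EmbeddedChannel.writeOutbound(...)` takes ownership of the buffer and releases it after writing, so the encoder must not `retain()` it first — callers that still need the buffer retain it themselves, as the updated tests do. Below is a minimal standalone sketch of that ownership-transfer convention; `RefCountedBuf` and `writeAndConsume` are hypothetical names used for illustration, not Netty's `ByteBuf` API:

```java
// Hypothetical ref-counted buffer illustrating the ownership-transfer rule the
// gold patch relies on: a consumer that takes a buffer releases it, so the
// producer must not add an extra retain() on hand-off.
final class RefCountedBuf {
    private int refCnt = 1; // newly allocated buffers start with one reference

    RefCountedBuf retain() {
        if (refCnt <= 0) {
            throw new IllegalStateException("already released");
        }
        refCnt++;
        return this;
    }

    boolean release() {
        if (refCnt <= 0) {
            throw new IllegalStateException("already released");
        }
        return --refCnt == 0; // true once the buffer is actually freed
    }

    int refCnt() {
        return refCnt;
    }

    // Consumer in the style of EmbeddedChannel.writeOutbound(): it assumes
    // ownership of the buffer it is handed and releases it when done.
    static void writeAndConsume(RefCountedBuf buf) {
        buf.release();
    }
}
```

Passing `buf.retain()` into `writeAndConsume` leaves the count at 1 afterwards — a leak unless something else releases it — while passing `buf` directly drops it to 0, which is the behavior the patch restores in `CompressorHttp2ConnectionEncoder`.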
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/CompressorHttp2ConnectionEncoder.java"
] | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/CompressorHttp2ConnectionEncoder.java"
] | [
"codec-http2/src/test/java/io/netty/handler/codec/http2/DataCompressionHttp2Test.java"
] | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/CompressorHttp2ConnectionEncoder.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/CompressorHttp2ConnectionEncoder.java
index 3548609eacb..5adc5e0af51 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/CompressorHttp2ConnectionEncoder.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/CompressorHttp2ConnectionEncoder.java
@@ -111,8 +111,8 @@ public ChannelFuture writeData(final ChannelHandlerContext ctx, final int stream
}
try {
- // call retain here as it will call release after its written to the channel
- channel.writeOutbound(data.retain());
+ // The channel will release the buffer after being written
+ channel.writeOutbound(data);
ByteBuf buf = nextReadableBuf(channel);
if (buf == null) {
if (endOfStream) {
| diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DataCompressionHttp2Test.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DataCompressionHttp2Test.java
index 756c2559215..41921516289 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DataCompressionHttp2Test.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DataCompressionHttp2Test.java
@@ -150,7 +150,7 @@ public void gzipEncodingSingleEmptyMessage() throws Exception {
@Override
public void run() {
clientEncoder.writeHeaders(ctxClient(), 3, headers, 0, false, newPromiseClient());
- clientEncoder.writeData(ctxClient(), 3, data, 0, true, newPromiseClient());
+ clientEncoder.writeData(ctxClient(), 3, data.retain(), 0, true, newPromiseClient());
ctxClient().flush();
}
});
@@ -179,7 +179,7 @@ public void gzipEncodingSingleMessage() throws Exception {
@Override
public void run() {
clientEncoder.writeHeaders(ctxClient(), 3, headers, 0, false, newPromiseClient());
- clientEncoder.writeData(ctxClient(), 3, data, 0, true, newPromiseClient());
+ clientEncoder.writeData(ctxClient(), 3, data.retain(), 0, true, newPromiseClient());
ctxClient().flush();
}
});
@@ -210,8 +210,8 @@ public void gzipEncodingMultipleMessages() throws Exception {
@Override
public void run() {
clientEncoder.writeHeaders(ctxClient(), 3, headers, 0, false, newPromiseClient());
- clientEncoder.writeData(ctxClient(), 3, data1, 0, false, newPromiseClient());
- clientEncoder.writeData(ctxClient(), 3, data2, 0, true, newPromiseClient());
+ clientEncoder.writeData(ctxClient(), 3, data1.retain(), 0, false, newPromiseClient());
+ clientEncoder.writeData(ctxClient(), 3, data2.retain(), 0, true, newPromiseClient());
ctxClient().flush();
}
});
@@ -258,7 +258,7 @@ public void run() {
public void run() {
clientEncoder.writeSettings(ctxClient(), settings, newPromiseClient());
clientEncoder.writeHeaders(ctxClient(), 3, headers, 0, false, newPromiseClient());
- clientEncoder.writeData(ctxClient(), 3, data, 0, true, newPromiseClient());
+ clientEncoder.writeData(ctxClient(), 3, data.retain(), 0, true, newPromiseClient());
ctxClient().flush();
}
});
@@ -302,7 +302,7 @@ public void run() {
public void run() {
clientEncoder.writeSettings(ctxClient(), settings, newPromiseClient());
clientEncoder.writeHeaders(ctxClient(), 3, headers, 0, false, newPromiseClient());
- clientEncoder.writeData(ctxClient(), 3, data, 0, true, newPromiseClient());
+ clientEncoder.writeData(ctxClient(), 3, data.retain(), 0, true, newPromiseClient());
ctxClient().flush();
}
});
| test | train | 2014-11-21T00:42:18 | 2014-11-08T16:34:48Z | nmittler | val |
netty/netty/3079_3177 | netty/netty | netty/netty/3079 | netty/netty/3177 | [
"timestamp(timedelta=108306.0, similarity=0.8638679492524849)"
] | 198f8fa95e3b8e975e389f1b0c4fea90329b0f85 | 0c3b24bc7547c06d4e3580568f7af831a6d9ff2f | [
"@nmittler - FYI\n",
"@nmittler - I'm going to take a first pass at this...there are no framing changes.\n",
"See PR https://github.com/netty/netty/pull/3167\n",
"Cherry-picked updates.\n"
] | [] | 2014-11-24T23:39:28Z | [
"feature"
] | HTTP/2 draft 15 | Draft 15 is out http://tools.ietf.org/html/draft-ietf-httpbis-http2-15. The impacts need to be investigated and the http/2 codec needs to be updated.
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2SecurityUtil.java"
] | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2SecurityUtil.java"
] | [] | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2SecurityUtil.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2SecurityUtil.java
index d5e68626aa9..792ebddf38c 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2SecurityUtil.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2SecurityUtil.java
@@ -49,9 +49,6 @@ public final class Http2SecurityUtil {
private static final List<String> CIPHERS_JAVA_NO_MOZILLA_INCREASED_SECURITY = Collections.unmodifiableList(Arrays
.asList(
- /* Java 6,7,8 */
- "TLS_ECDHE_ECDSA_WITH_RC4_128_SHA", /* openssl = ECDHE-ECDSA-RC4-SHA */
- "TLS_ECDH_ECDSA_WITH_RC4_128_SHA", /* openssl = ECDH-ECDSA-RC4-SHA */
/* Java 8 */
"TLS_ECDH_ECDSA_WITH_AES_256_GCM_SHA384", /* openssl = ECDH-ECDSA-AES256-GCM-SHA384 */
"TLS_ECDH_RSA_WITH_AES_256_GCM_SHA384", /* openssl = ECDH-RSA-AES256-GCM-SHA384 */
@@ -64,9 +61,7 @@ public final class Http2SecurityUtil {
private static final List<String> CIPHERS_JAVA_DISABLED_DEFAULT = Collections.unmodifiableList(Arrays.asList(
/* Java 8 */
"TLS_DH_anon_WITH_AES_256_GCM_SHA384", /* openssl = ADH-AES256-GCM-SHA384 */
- "TLS_DH_anon_WITH_AES_128_GCM_SHA256", /* openssl = ADH-AES128-GCM-SHA256 */
- /* Java 6,7,8 */
- "TLS_ECDH_anon_WITH_RC4_128_SHA" /* openssl = AECDH-RC4-SHA */));
+ "TLS_DH_anon_WITH_AES_128_GCM_SHA256" /* openssl = ADH-AES128-GCM-SHA256 */));
static {
List<String> ciphers = new ArrayList<String>(CIPHERS_JAVA_MOZILLA_INCREASED_SECURITY.size()
| null | val | train | 2014-11-24T02:45:47 | 2014-10-30T15:27:51Z | Scottmitch | val |
netty/netty/3129_3224 | netty/netty | netty/netty/3129 | netty/netty/3224 | [
"timestamp(timedelta=14.0, similarity=0.8448786730231798)"
] | 3c2b5c96fd68e4907a2c0f9cdd6dd539fc7972bd | 9ca26256f6a6ee4f053b40f59ce10c3f9f2e086b | [
"please review #3224\n",
"Fixed by #3224\n"
] | [
"`file = null;` in the `finally` block is evaluated even when `file` is `null`. I would move the null-check out of the try block and return early when `file` is `null`.\n"
] | 2014-12-09T20:41:00Z | [
"defect",
"improvement"
] | Support FileRegion without pre-initialized FileDescriptor | DefaultFileRegion requires a FileDescriptor in its constructor, which means we need to have an opened file handle. In super large workloads where we queue up a lot of file regions, this could lead to too many open files due to the way these file descriptors are cleaned.
In Spark we created a LazyFileRegion implementation (see https://github.com/apache/spark/pull/3172), but that wouldn't work with the native epoll transport. Would be great to push that into Netty and enable support in native transport.
cc @normanmaurer
| [
"transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c",
"transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.h",
"transport-native-epoll/src/main/java/io/netty/channel/epoll/Native.java",
"transport/src/main/java/io/netty/channel/DefaultFileRegion.java"
] | [
"transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c",
"transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.h",
"transport-native-epoll/src/main/java/io/netty/channel/epoll/Native.java",
"transport/src/main/java/io/netty/channel/DefaultFileRegion.java"
] | [] | diff --git a/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c b/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c
index 2a06c2267cb..24092d03c9e 100644
--- a/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c
+++ b/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c
@@ -1101,7 +1101,7 @@ JNIEXPORT jint JNICALL Java_io_netty_channel_epoll_Native_accept(JNIEnv * env, j
return socketFd;
}
-JNIEXPORT jlong JNICALL Java_io_netty_channel_epoll_Native_sendfile(JNIEnv *env, jclass clazz, jint fd, jobject fileRegion, jlong base_off, jlong off, jlong len) {
+JNIEXPORT jlong JNICALL Java_io_netty_channel_epoll_Native_sendfile0(JNIEnv *env, jclass clazz, jint fd, jobject fileRegion, jlong base_off, jlong off, jlong len) {
jobject fileChannel = (*env)->GetObjectField(env, fileRegion, fileChannelFieldId);
if (fileChannel == NULL) {
throwRuntimeException(env, "Unable to obtain FileChannel from FileRegion");
diff --git a/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.h b/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.h
index a6e9fb04aae..92f6ee627bb 100644
--- a/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.h
+++ b/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.h
@@ -69,7 +69,7 @@ void Java_io_netty_channel_epoll_Native_listen(JNIEnv * env, jclass clazz, jint
jboolean Java_io_netty_channel_epoll_Native_connect(JNIEnv * env, jclass clazz, jint fd, jbyteArray address, jint scopeId, jint port);
jboolean Java_io_netty_channel_epoll_Native_finishConnect(JNIEnv * env, jclass clazz, jint fd);
jint Java_io_netty_channel_epoll_Native_accept(JNIEnv * env, jclass clazz, jint fd);
-jlong Java_io_netty_channel_epoll_Native_sendfile(JNIEnv *env, jclass clazz, jint fd, jobject fileRegion, jlong base_off, jlong off, jlong len);
+jlong Java_io_netty_channel_epoll_Native_sendfile0(JNIEnv *env, jclass clazz, jint fd, jobject fileRegion, jlong base_off, jlong off, jlong len);
jobject Java_io_netty_channel_epoll_Native_remoteAddress(JNIEnv * env, jclass clazz, jint fd);
jobject Java_io_netty_channel_epoll_Native_localAddress(JNIEnv * env, jclass clazz, jint fd);
void Java_io_netty_channel_epoll_Native_setReuseAddress(JNIEnv * env, jclass clazz, jint fd, jint optval);
diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/Native.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/Native.java
index 5bbd890692b..9b6894ee4fc 100644
--- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/Native.java
+++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/Native.java
@@ -77,7 +77,16 @@ public static native long writevAddresses(int fd, long memoryAddress, int length
public static native int read(int fd, ByteBuffer buf, int pos, int limit) throws IOException;
public static native int readAddress(int fd, long address, int pos, int limit) throws IOException;
- public static native long sendfile(
+ public static long sendfile(
+ int dest, DefaultFileRegion src, long baseOffset, long offset, long length) throws IOException {
+ // Open the file-region as it may be created via the lazy constructor. This is needed as we directly access
+ // the FileChannel field directly via JNI
+ src.open();
+
+ return sendfile0(dest, src, baseOffset, offset, length);
+ }
+
+ private static native long sendfile0(
int dest, DefaultFileRegion src, long baseOffset, long offset, long length) throws IOException;
public static int sendTo(
diff --git a/transport/src/main/java/io/netty/channel/DefaultFileRegion.java b/transport/src/main/java/io/netty/channel/DefaultFileRegion.java
index c2ac264cc61..dae94e45c28 100644
--- a/transport/src/main/java/io/netty/channel/DefaultFileRegion.java
+++ b/transport/src/main/java/io/netty/channel/DefaultFileRegion.java
@@ -16,15 +16,18 @@
package io.netty.channel;
import io.netty.util.AbstractReferenceCounted;
+import io.netty.util.IllegalReferenceCountException;
import io.netty.util.internal.logging.InternalLogger;
import io.netty.util.internal.logging.InternalLoggerFactory;
+import java.io.File;
import java.io.IOException;
+import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;
/**
- * Default {@link FileRegion} implementation which transfer data from a {@link FileChannel}.
+ * Default {@link FileRegion} implementation which transfer data from a {@link FileChannel} or {@link File}.
*
* Be aware that the {@link FileChannel} will be automatically closed once {@link #refCnt()} returns
* {@code 0}.
@@ -32,11 +35,11 @@
public class DefaultFileRegion extends AbstractReferenceCounted implements FileRegion {
private static final InternalLogger logger = InternalLoggerFactory.getInstance(DefaultFileRegion.class);
-
- private final FileChannel file;
+ private final File f;
private final long position;
private final long count;
private long transfered;
+ private FileChannel file;
/**
* Create a new instance
@@ -58,6 +61,47 @@ public DefaultFileRegion(FileChannel file, long position, long count) {
this.file = file;
this.position = position;
this.count = count;
+ f = null;
+ }
+
+ /**
+ * Create a new instance using the given {@link File}. The {@link File} will be opened lazily or
+ * explicitly via {@link #open()}.
+ *
+ * @param f the {@link File} which should be transfered
+ * @param position the position from which the transfer should start
+ * @param count the number of bytes to transfer
+ */
+ public DefaultFileRegion(File f, long position, long count) {
+ if (f == null) {
+ throw new NullPointerException("f");
+ }
+ if (position < 0) {
+ throw new IllegalArgumentException("position must be >= 0 but was " + position);
+ }
+ if (count < 0) {
+ throw new IllegalArgumentException("count must be >= 0 but was " + count);
+ }
+ this.position = position;
+ this.count = count;
+ this.f = f;
+ }
+
+ /**
+ * Returns {@code true} if the {@link FileRegion} has a open file-descriptor
+ */
+ public boolean isOpen() {
+ return file != null;
+ }
+
+ /**
+ * Explicitly open the underlying file-descriptor if not done yet.
+ */
+ public void open() throws IOException {
+ if (!isOpen() && refCnt() > 0) {
+ // Only open if this DefaultFileRegion was not released yet.
+ file = new RandomAccessFile(f, "r").getChannel();
+ }
}
@Override
@@ -86,6 +130,11 @@ public long transferTo(WritableByteChannel target, long position) throws IOExcep
if (count == 0) {
return 0L;
}
+ if (refCnt() == 0) {
+ throw new IllegalReferenceCountException(0);
+ }
+ // Call open to make sure fc is initialized. This is a no-oop if we called it before.
+ open();
long written = file.transferTo(this.position + position, count, target);
if (written > 0) {
@@ -97,11 +146,15 @@ public long transferTo(WritableByteChannel target, long position) throws IOExcep
@Override
protected void deallocate() {
try {
- file.close();
+ if (file != null) {
+ file.close();
+ }
} catch (IOException e) {
if (logger.isWarnEnabled()) {
logger.warn("Failed to close a file.", e);
}
+ } finally {
+ file = null;
}
}
}
| null | test | train | 2014-12-09T19:29:37 | 2014-11-11T23:39:33Z | rxin | val |
netty/netty/3188_3238 | netty/netty | netty/netty/3188 | netty/netty/3238 | [
"timestamp(timedelta=61056.0, similarity=0.9268064321116396)"
] | 9a0be053c46c679e5536f19c44c5e5dd4525ec59 | 60cecbd4513b0358a8ab84c2a303e73603073720 | [
"I have looked at the diffs....here are some notes from my first pass.\n\nWe need some changes to be able to send/receive priority frames in any state. I have a question on the spec to clarify the state a bit https://github.com/http2/http2-spec/issues/667. I am looking into adding another `EndPoint.createStream(... | [] | 2014-12-12T13:43:13Z | [
"feature"
] | HTTP/2 Draft 16 | The new draft is out https://tools.ietf.org/html/draft-ietf-httpbis-http2-16.
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java"
] | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java"
] | [] | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java
index 3b0be13d598..b38aee2d6dd 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java
@@ -34,8 +34,9 @@ public final class Http2CodecUtil {
public static final int CONNECTION_STREAM_ID = 0;
public static final int HTTP_UPGRADE_STREAM_ID = 1;
public static final String HTTP_UPGRADE_SETTINGS_HEADER = "HTTP2-Settings";
- public static final String HTTP_UPGRADE_PROTOCOL_NAME = "h2c-16";
- public static final String TLS_UPGRADE_PROTOCOL_NAME = "h2-16";
+ // Draft 14 is the latest "implementation draft" as defined by https://http2.github.io/
+ public static final String HTTP_UPGRADE_PROTOCOL_NAME = "h2c-14";
+ public static final String TLS_UPGRADE_PROTOCOL_NAME = "h2-14";
public static final int PING_FRAME_PAYLOAD_LENGTH = 8;
public static final short MAX_UNSIGNED_BYTE = 0xFF;
| null | test | train | 2014-12-12T10:01:36 | 2014-11-30T15:42:10Z | Scottmitch | val |
netty/netty/3235_3238 | netty/netty | netty/netty/3235 | netty/netty/3238 | [
"timestamp(timedelta=61083.0, similarity=0.855996269826361)"
] | 9a0be053c46c679e5536f19c44c5e5dd4525ec59 | 60cecbd4513b0358a8ab84c2a303e73603073720 | [
"Addressed as part of https://github.com/netty/netty/pull/3217\n"
] | [] | 2014-12-12T13:43:13Z | [
"feature"
] | HTTP/2 HPACK 10 | [HPACK 10](https://tools.ietf.org/html/draft-ietf-httpbis-header-compression-10) has been released. We should update our [twitter hpack](https://github.com/twitter/hpack) dependencies.
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java"
] | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java"
] | [] | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java
index 3b0be13d598..b38aee2d6dd 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java
@@ -34,8 +34,9 @@ public final class Http2CodecUtil {
public static final int CONNECTION_STREAM_ID = 0;
public static final int HTTP_UPGRADE_STREAM_ID = 1;
public static final String HTTP_UPGRADE_SETTINGS_HEADER = "HTTP2-Settings";
- public static final String HTTP_UPGRADE_PROTOCOL_NAME = "h2c-16";
- public static final String TLS_UPGRADE_PROTOCOL_NAME = "h2-16";
+ // Draft 14 is the latest "implementation draft" as defined by https://http2.github.io/
+ public static final String HTTP_UPGRADE_PROTOCOL_NAME = "h2c-14";
+ public static final String TLS_UPGRADE_PROTOCOL_NAME = "h2-14";
public static final int PING_FRAME_PAYLOAD_LENGTH = 8;
public static final short MAX_UNSIGNED_BYTE = 0xFF;
| null | train | train | 2014-12-12T10:01:36 | 2014-12-11T21:56:03Z | Scottmitch | val |
netty/netty/3249_3253 | netty/netty | netty/netty/3249 | netty/netty/3253 | [
"timestamp(timedelta=17.0, similarity=0.9700502923967033)"
] | ff9a6e049905d9e4458dad314d19999d8fd9c641 | a261cc3794e4cd4d9a02f5de8b82bc2dffff9af6 | [
"@MichaelScofield sounds right... could you come up with PR ?\n",
"I've created a PR to branch 3.9: \n\nhttps://github.com/netty/netty/pull/3253\n",
"Fixed by #3253\n"
] | [] | 2014-12-16T07:44:38Z | [
"defect"
] | AbstractNioBossPool may initialize more than once? | The following code snippets are in 3.9.4.final.
In org.jboss.netty.channel.socket.nio.AbstractNioBossPool, method
```
protected void init() {
if (initialized) {
throw new IllegalStateException("initialized already");
}
initialized = true;
...
}
```
where the variable "initialized" is declared as
```
private volatile boolean initialized;
```
Is there a concurrency hazard that may cause the init() method to be called more than once?
Suppose two threads call the init() method simultaneously: Thread1 checks if(initialized), sees false, and is then suspended before setting the flag; Thread2 also checks if(initialized), sees false, and performs the initialization. When Thread1 resumes, it performs the initialization again.
I think it's much better to make the variable "initialized" an AtomicBoolean, because then an atomic CAS can be used to perform the initialization exactly once:
```
private final AtomicBoolean initialized = new AtomicBoolean(false);
protected void init() {
if (!initialized.compareAndSet(false, true)) {
throw new IllegalStateException("initialized already");
}
....
}
```
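The exactly-once guarantee of compareAndSet can be shown with a small standalone demo (not Netty code, class name is illustrative): many threads race into init(), but only the one whose CAS succeeds performs the initialization.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

// Standalone demo: several threads race to "initialize", but the CAS
// lets exactly one of them through, regardless of scheduling.
public class CasInitDemo {
    private static final AtomicBoolean initialized = new AtomicBoolean(false);
    private static final AtomicInteger initCount = new AtomicInteger();

    static void init() {
        if (!initialized.compareAndSet(false, true)) {
            return; // another thread already won the race
        }
        initCount.incrementAndGet(); // runs at most once
    }

    public static void main(String[] args) throws InterruptedException {
        int threads = 8;
        CountDownLatch start = new CountDownLatch(1);
        CountDownLatch done = new CountDownLatch(threads);
        for (int i = 0; i < threads; i++) {
            new Thread(() -> {
                try {
                    start.await(); // line everyone up for maximum contention
                    init();
                } catch (InterruptedException ignored) {
                    // not expected in this demo
                } finally {
                    done.countDown();
                }
            }).start();
        }
        start.countDown();
        done.await();
        System.out.println("init ran " + initCount.get() + " time(s)");
    }
}
```

With the original check-then-set on a volatile boolean, the same race could let several threads past the guard; compareAndSet collapses the check and the write into one atomic step.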
| [
"src/main/java/org/jboss/netty/channel/socket/nio/AbstractNioBossPool.java",
"src/main/java/org/jboss/netty/channel/socket/nio/AbstractNioWorkerPool.java"
] | [
"src/main/java/org/jboss/netty/channel/socket/nio/AbstractNioBossPool.java",
"src/main/java/org/jboss/netty/channel/socket/nio/AbstractNioWorkerPool.java"
] | [] | diff --git a/src/main/java/org/jboss/netty/channel/socket/nio/AbstractNioBossPool.java b/src/main/java/org/jboss/netty/channel/socket/nio/AbstractNioBossPool.java
index 90bdc2c591e..08179061be2 100644
--- a/src/main/java/org/jboss/netty/channel/socket/nio/AbstractNioBossPool.java
+++ b/src/main/java/org/jboss/netty/channel/socket/nio/AbstractNioBossPool.java
@@ -22,6 +22,7 @@
import java.util.concurrent.Executor;
import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;
public abstract class AbstractNioBossPool<E extends Boss>
@@ -37,7 +38,7 @@ public abstract class AbstractNioBossPool<E extends Boss>
private final Boss[] bosses;
private final AtomicInteger bossIndex = new AtomicInteger();
private final Executor bossExecutor;
- private volatile boolean initialized;
+ private final AtomicBoolean initialized = new AtomicBoolean(false);
/**
* Create a new instance
@@ -66,10 +67,9 @@ public abstract class AbstractNioBossPool<E extends Boss>
}
protected void init() {
- if (initialized) {
+ if (!initialized.compareAndSet(false, true)) {
throw new IllegalStateException("initialized already");
}
- initialized = true;
for (int i = 0; i < bosses.length; i++) {
bosses[i] = newBoss(bossExecutor);
diff --git a/src/main/java/org/jboss/netty/channel/socket/nio/AbstractNioWorkerPool.java b/src/main/java/org/jboss/netty/channel/socket/nio/AbstractNioWorkerPool.java
index 6b6f0e1fa5b..ad8787c95ce 100644
--- a/src/main/java/org/jboss/netty/channel/socket/nio/AbstractNioWorkerPool.java
+++ b/src/main/java/org/jboss/netty/channel/socket/nio/AbstractNioWorkerPool.java
@@ -24,6 +24,7 @@
import java.util.concurrent.Executor;
import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;
/**
@@ -43,7 +44,7 @@ public abstract class AbstractNioWorkerPool<E extends AbstractNioWorker>
private final AbstractNioWorker[] workers;
private final AtomicInteger workerIndex = new AtomicInteger();
private final Executor workerExecutor;
- private volatile boolean initialized;
+ private final AtomicBoolean initialized = new AtomicBoolean(false);
/**
* Create a new instance
@@ -71,12 +72,10 @@ public abstract class AbstractNioWorkerPool<E extends AbstractNioWorker>
}
protected void init() {
- if (initialized) {
+ if (!initialized.compareAndSet(false, true)) {
throw new IllegalStateException("initialized already");
}
- initialized = true;
-
for (int i = 0; i < workers.length; i++) {
workers[i] = newWorker(workerExecutor);
}
| null | train | train | 2014-12-08T08:06:37 | 2014-12-15T10:07:03Z | MichaelScofield | val |
netty/netty/3194_3254 | netty/netty | netty/netty/3194 | netty/netty/3254 | [
"timestamp(timedelta=372.0, similarity=0.8748363603579907)"
] | 770126f7073603ef3ae96d9923029b5a4a91ebc9 | 001097507cd103a24dcfafcda8dd31dfb574ff5b | [
"@Scottmitch thanks for the heads-up buddy.\n",
"@normanmaurer - Sure I can take this, but I have some outstanding questions for the alpn-boot folks. They are not planning on back porting the alpn-boot bug fix because there was some updates in OpenJDK SSL related code (not related to this bug). Seems a bit lim... | [] | 2014-12-16T14:25:43Z | [
"feature"
] | Jetty alpn-boot new version | Due to https://github.com/jetty-project/jetty-alpn/issues/5 there will be a new alpn-boot release. The pom files should be updated to reflect this.
| [
"pom.xml"
] | [
"pom.xml"
] | [] | diff --git a/pom.xml b/pom.xml
index 0f64bb1e754..594f7699d61 100644
--- a/pom.xml
+++ b/pom.xml
@@ -108,9 +108,54 @@
<properties>
<!-- Our Javadoc has poor enough quality to fail the build thanks to JDK8 javadoc which got more strict. -->
<maven.javadoc.failOnError>false</maven.javadoc.failOnError>
- <!-- npn-boot does not work with JDK 8 -->
+ </properties>
+ </profile>
+ <profile>
+ <id>alpn-8</id>
+ <activation>
+ <property>
+ <name>java.version</name>
+ <value>1.8</value>
+ </property>
+ </activation>
+ <properties>
+ <jetty.alpn.version>8.1.0.v20141016</jetty.alpn.version>
+ </properties>
+ </profile>
+ <profile>
+ <id>alpn-8u05</id>
+ <activation>
+ <property>
+ <name>java.version</name>
+ <value>1.8.0_05</value>
+ </property>
+ </activation>
+ <properties>
+ <jetty.alpn.version>8.1.0.v20141016</jetty.alpn.version>
+ </properties>
+ </profile>
+ <profile>
+ <id>alpn-8u11</id>
+ <activation>
+ <property>
+ <name>java.version</name>
+ <value>1.8.0_11</value>
+ </property>
+ </activation>
+ <properties>
+ <jetty.alpn.version>8.1.0.v20141016</jetty.alpn.version>
+ </properties>
+ </profile>
+ <profile>
+ <id>alpn-8u20</id>
+ <activation>
+ <property>
+ <name>java.version</name>
+ <value>1.8.0_20</value>
+ </property>
+ </activation>
+ <properties>
<jetty.alpn.version>8.1.0.v20141016</jetty.alpn.version>
- <argLine.bootcp>-Xbootclasspath/p:${jetty.alpn.path}</argLine.bootcp>
</properties>
</profile>
<profile>
@@ -122,7 +167,7 @@
</property>
</activation>
<properties>
- <jetty.alpn.version>8.1.1.v20141016</jetty.alpn.version>
+ <jetty.alpn.version>8.1.2.v20141202</jetty.alpn.version>
</properties>
</profile>
<profile>
@@ -296,7 +341,6 @@
<properties>
<jetty.npn.version>1.1.6.v20130911</jetty.npn.version>
<jetty.alpn.version>7.1.0.v20141016</jetty.alpn.version>
- <!-- Defer definition of Xbootclasspath to default or forcenpn profile -->
</properties>
</profile>
<profile>
@@ -310,7 +354,6 @@
<properties>
<jetty.npn.version>1.1.6.v20130911</jetty.npn.version>
<jetty.alpn.version>7.1.0.v20141016</jetty.alpn.version>
- <!-- Defer definition of Xbootclasspath to default or forcenpn profile -->
</properties>
</profile>
<profile>
@@ -324,7 +367,6 @@
<properties>
<jetty.npn.version>1.1.6.v20130911</jetty.npn.version>
<jetty.alpn.version>7.1.0.v20141016</jetty.alpn.version>
- <!-- Defer definition of Xbootclasspath to default or forcenpn profile -->
</properties>
</profile>
<profile>
@@ -338,7 +380,6 @@
<properties>
<jetty.npn.version>1.1.8.v20141013</jetty.npn.version>
<jetty.alpn.version>7.1.0.v20141016</jetty.alpn.version>
- <!-- Defer definition of Xbootclasspath to default or forcenpn profile -->
</properties>
</profile>
<profile>
@@ -352,7 +393,6 @@
<properties>
<jetty.npn.version>1.1.8.v20141013</jetty.npn.version>
<jetty.alpn.version>7.1.0.v20141016</jetty.alpn.version>
- <!-- Defer definition of Xbootclasspath to default or forcenpn profile -->
</properties>
</profile>
<profile>
@@ -366,7 +406,6 @@
<properties>
<jetty.npn.version>1.1.8.v20141013</jetty.npn.version>
<jetty.alpn.version>7.1.0.v20141016</jetty.alpn.version>
- <!-- Defer definition of Xbootclasspath to default or forcenpn profile -->
</properties>
</profile>
<profile>
@@ -380,7 +419,6 @@
<properties>
<jetty.npn.version>1.1.8.v20141013</jetty.npn.version>
<jetty.alpn.version>7.1.0.v20141016</jetty.alpn.version>
- <!-- Defer definition of Xbootclasspath to default or forcenpn profile -->
</properties>
</profile>
<profile>
@@ -393,8 +431,7 @@
</activation>
<properties>
<jetty.npn.version>1.1.9.v20141016</jetty.npn.version>
- <jetty.alpn.version>7.1.0.v20141016</jetty.alpn.version>
- <!-- Defer definition of Xbootclasspath to default or forcenpn profile -->
+ <jetty.alpn.version>7.1.2.v20141202</jetty.alpn.version>
</properties>
</profile>
<profile>
@@ -407,8 +444,7 @@
</activation>
<properties>
<jetty.npn.version>1.1.9.v20141016</jetty.npn.version>
- <jetty.alpn.version>7.1.0.v20141016</jetty.alpn.version>
- <!-- Defer definition of Xbootclasspath to default or forcenpn profile -->
+ <jetty.alpn.version>7.1.2.v20141202</jetty.alpn.version>
</properties>
</profile>
<profile>
@@ -436,7 +472,7 @@
<jboss.marshalling.version>1.3.18.GA</jboss.marshalling.version>
<jetty.npn.version>1.1.9.v20141016</jetty.npn.version>
<jetty.npn.path>${settings.localRepository}/org/mortbay/jetty/npn/npn-boot/${jetty.npn.version}/npn-boot-${jetty.npn.version}.jar</jetty.npn.path>
- <jetty.alpn.version>8.1.0.v20141016</jetty.alpn.version>
+ <jetty.alpn.version>8.1.2.v20141202</jetty.alpn.version>
<jetty.alpn.path>${settings.localRepository}/org/mortbay/jetty/alpn/alpn-boot/${jetty.alpn.version}/alpn-boot-${jetty.alpn.version}.jar</jetty.alpn.path>
<argLine.common>
-server
| null | train | train | 2014-12-16T12:34:01 | 2014-12-02T12:07:25Z | Scottmitch | val |
netty/netty/3112_3262 | netty/netty | netty/netty/3112 | netty/netty/3262 | [
"timestamp(timedelta=10.0, similarity=0.8847084238518407)"
] | ded37d889338875c1c025deebb1a8a9d44a798f7 | 706aaf08ed09a275594a7ec8ffe4b33a6fd2de3e | [
"See http://linux.die.net/man/7/tcp\n",
"Sounds awesome.\n",
"@normanmaurer would you log them or so?\n",
"@trustin working on it...\n\n@daschl nope just expose it on the EpollSocketChannel. You can do whatever you want to do with it\n"
] | [
"Because all the fields of TCP_INFO are unsigned, we should use wider types - int for u8, int for u16, long for u32.\n",
"good idea.\n",
"@trustin do we want to use byte for all u8 methods ? or better stick with int ?\n",
"Should be short or int.\n",
"Consistency: `JNIEnv* env`\n"
] | 2014-12-19T08:50:30Z | [
"feature"
] | Add support for TCP_INFO in native transport | It would be nice to support TCP_INFO in the native transport to get infos about the socket.
| [
"transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c",
"transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.h",
"transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollSocketChannel.java",
"transport-native-epoll/src/main/java/io/netty/channel/epoll/Native.java"
] | [
"transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c",
"transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.h",
"transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollSocketChannel.java",
"transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollTcpInfo.java",
"t... | [
"transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollSocketChannelTest.java"
] | diff --git a/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c b/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c
index 137187977de..3aa984559e6 100644
--- a/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c
+++ b/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.c
@@ -1297,6 +1297,48 @@ JNIEXPORT jint JNICALL Java_io_netty_channel_epoll_Native_getTcpKeepCnt(JNIEnv*
return optval;
}
+JNIEXPORT void JNICALL Java_io_netty_channel_epoll_Native_tcpInfo0(JNIEnv* env, jclass clazz, jint fd, jintArray array) {
+ struct tcp_info tcp_info;
+ if (getOption(env, fd, SOL_TCP, TCP_INFO, &tcp_info, sizeof(tcp_info)) == -1) {
+ return;
+ }
+ unsigned int cArray[32];
+ cArray[0] = tcp_info.tcpi_state;
+ cArray[1] = tcp_info.tcpi_ca_state;
+ cArray[2] = tcp_info.tcpi_retransmits;
+ cArray[3] = tcp_info.tcpi_probes;
+ cArray[4] = tcp_info.tcpi_backoff;
+ cArray[5] = tcp_info.tcpi_options;
+ cArray[6] = tcp_info.tcpi_snd_wscale;
+ cArray[7] = tcp_info.tcpi_rcv_wscale;
+ cArray[8] = tcp_info.tcpi_rto;
+ cArray[9] = tcp_info.tcpi_ato;
+ cArray[10] = tcp_info.tcpi_snd_mss;
+ cArray[11] = tcp_info.tcpi_rcv_mss;
+ cArray[12] = tcp_info.tcpi_unacked;
+ cArray[13] = tcp_info.tcpi_sacked;
+ cArray[14] = tcp_info.tcpi_lost;
+ cArray[15] = tcp_info.tcpi_retrans;
+ cArray[16] = tcp_info.tcpi_fackets;
+ cArray[17] = tcp_info.tcpi_last_data_sent;
+ cArray[18] = tcp_info.tcpi_last_ack_sent;
+ cArray[19] = tcp_info.tcpi_last_data_recv;
+ cArray[20] = tcp_info.tcpi_last_ack_recv;
+ cArray[21] = tcp_info.tcpi_pmtu;
+ cArray[22] = tcp_info.tcpi_rcv_ssthresh;
+ cArray[23] = tcp_info.tcpi_rtt;
+ cArray[24] = tcp_info.tcpi_rttvar;
+ cArray[25] = tcp_info.tcpi_snd_ssthresh;
+ cArray[26] = tcp_info.tcpi_snd_cwnd;
+ cArray[27] = tcp_info.tcpi_advmss;
+ cArray[28] = tcp_info.tcpi_reordering;
+ cArray[29] = tcp_info.tcpi_rcv_rtt;
+ cArray[30] = tcp_info.tcpi_rcv_space;
+ cArray[31] = tcp_info.tcpi_total_retrans;
+
+ (*env)->SetIntArrayRegion(env, array, 0, 32, cArray);
+}
+
JNIEXPORT jstring JNICALL Java_io_netty_channel_epoll_Native_kernelVersion(JNIEnv* env, jclass clazz) {
struct utsname name;
diff --git a/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.h b/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.h
index 58e82bfb8f1..7c0eaafdca7 100644
--- a/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.h
+++ b/transport-native-epoll/src/main/c/io_netty_channel_epoll_Native.h
@@ -38,75 +38,81 @@
#define UIO_MAXIOV 1024
#endif /* UIO_MAXIOV */
-jint Java_io_netty_channel_epoll_Native_eventFd(JNIEnv * env, jclass clazz);
-void Java_io_netty_channel_epoll_Native_eventFdWrite(JNIEnv * env, jclass clazz, jint fd, jlong value);
-void Java_io_netty_channel_epoll_Native_eventFdRead(JNIEnv * env, jclass clazz, jint fd);
-jint Java_io_netty_channel_epoll_Native_epollCreate(JNIEnv * env, jclass clazz);
-jint Java_io_netty_channel_epoll_Native_epollWait(JNIEnv * env, jclass clazz, jint efd, jlongArray events, jint timeout);
-void Java_io_netty_channel_epoll_Native_epollCtlAdd(JNIEnv * env, jclass clazz, jint efd, jint fd, jint flags, jint id);
-void Java_io_netty_channel_epoll_Native_epollCtlMod(JNIEnv * env, jclass clazz, jint efd, jint fd, jint flags, jint id);
-void Java_io_netty_channel_epoll_Native_epollCtlDel(JNIEnv * env, jclass clazz, jint efd, jint fd);
-jint Java_io_netty_channel_epoll_Native_write0(JNIEnv * env, jclass clazz, jint fd, jobject jbuffer, jint pos, jint limit);
-jint Java_io_netty_channel_epoll_Native_writeAddress0(JNIEnv * env, jclass clazz, jint fd, jlong address, jint pos, jint limit);
-jlong Java_io_netty_channel_epoll_Native_writev0(JNIEnv * env, jclass clazz, jint fd, jobjectArray buffers, jint offset, jint length);
-jlong Java_io_netty_channel_epoll_Native_writevAddresses0(JNIEnv * env, jclass clazz, jint fd, jlong memoryAddress, jint length);
-jint Java_io_netty_channel_epoll_Native_sendTo(JNIEnv * env, jclass clazz, jint fd, jobject jbuffer, jint pos, jint limit, jbyteArray address, jint scopeId, jint port);
-jint Java_io_netty_channel_epoll_Native_sendToAddress(JNIEnv * env, jclass clazz, jint fd, jlong memoryAddress, jint pos, jint limit, jbyteArray address, jint scopeId, jint port);
-jint Java_io_netty_channel_epoll_Native_sendToAddresses(JNIEnv * env, jclass clazz, jint fd, jlong memoryAddress, jint length, jbyteArray address, jint scopeId, jint port);
-jint Java_io_netty_channel_epoll_Native_sendmmsg(JNIEnv * env, jclass clazz, jint fd, jobjectArray packets, jint offset, jint len);
+jint Java_io_netty_channel_epoll_Native_eventFd(JNIEnv* env, jclass clazz);
+void Java_io_netty_channel_epoll_Native_eventFdWrite(JNIEnv* env, jclass clazz, jint fd, jlong value);
+void Java_io_netty_channel_epoll_Native_eventFdRead(JNIEnv* env, jclass clazz, jint fd);
+jint Java_io_netty_channel_epoll_Native_epollCreate(JNIEnv* env, jclass clazz);
+jint Java_io_netty_channel_epoll_Native_epollWait(JNIEnv* env, jclass clazz, jint efd, jlongArray events, jint timeout);
+void Java_io_netty_channel_epoll_Native_epollCtlAdd(JNIEnv* env, jclass clazz, jint efd, jint fd, jint flags, jint id);
+void Java_io_netty_channel_epoll_Native_epollCtlMod(JNIEnv* env, jclass clazz, jint efd, jint fd, jint flags, jint id);
+void Java_io_netty_channel_epoll_Native_epollCtlDel(JNIEnv* env, jclass clazz, jint efd, jint fd);
+jint Java_io_netty_channel_epoll_Native_write0(JNIEnv* env, jclass clazz, jint fd, jobject jbuffer, jint pos, jint limit);
+jint Java_io_netty_channel_epoll_Native_writeAddress0(JNIEnv* env, jclass clazz, jint fd, jlong address, jint pos, jint limit);
+jlong Java_io_netty_channel_epoll_Native_writev0(JNIEnv* env, jclass clazz, jint fd, jobjectArray buffers, jint offset, jint length);
+jlong Java_io_netty_channel_epoll_Native_writevAddresses0(JNIEnv* env, jclass clazz, jint fd, jlong memoryAddress, jint length);
+jint Java_io_netty_channel_epoll_Native_sendTo(JNIEnv* env, jclass clazz, jint fd, jobject jbuffer, jint pos, jint limit, jbyteArray address, jint scopeId, jint port);
+jint Java_io_netty_channel_epoll_Native_sendToAddress(JNIEnv* env, jclass clazz, jint fd, jlong memoryAddress, jint pos, jint limit, jbyteArray address, jint scopeId, jint port);
+jint Java_io_netty_channel_epoll_Native_sendToAddresses(JNIEnv* env, jclass clazz, jint fd, jlong memoryAddress, jint length, jbyteArray address, jint scopeId, jint port);
+jint Java_io_netty_channel_epoll_Native_sendmmsg(JNIEnv* env, jclass clazz, jint fd, jobjectArray packets, jint offset, jint len);
-jint Java_io_netty_channel_epoll_Native_read0(JNIEnv * env, jclass clazz, jint fd, jobject jbuffer, jint pos, jint limit);
-jint Java_io_netty_channel_epoll_Native_readAddress0(JNIEnv * env, jclass clazz, jint fd, jlong address, jint pos, jint limit);
-jobject Java_io_netty_channel_epoll_Native_recvFrom(JNIEnv * env, jclass clazz, jint fd, jobject jbuffer, jint pos, jint limit);
-jobject Java_io_netty_channel_epoll_Native_recvFromAddress(JNIEnv * env, jclass clazz, jint fd, jlong address, jint pos, jint limit);
-jint Java_io_netty_channel_epoll_Native_close0(JNIEnv * env, jclass clazz, jint fd);
-jint Java_io_netty_channel_epoll_Native_shutdown0(JNIEnv * env, jclass clazz, jint fd, jboolean read, jboolean write);
-jint Java_io_netty_channel_epoll_Native_socketStream(JNIEnv * env, jclass clazz);
-jint Java_io_netty_channel_epoll_Native_socketDgram(JNIEnv * env, jclass clazz);
+jint Java_io_netty_channel_epoll_Native_read0(JNIEnv* env, jclass clazz, jint fd, jobject jbuffer, jint pos, jint limit);
+jint Java_io_netty_channel_epoll_Native_readAddress0(JNIEnv* env, jclass clazz, jint fd, jlong address, jint pos, jint limit);
+jobject Java_io_netty_channel_epoll_Native_recvFrom(JNIEnv* env, jclass clazz, jint fd, jobject jbuffer, jint pos, jint limit);
+jobject Java_io_netty_channel_epoll_Native_recvFromAddress(JNIEnv* env, jclass clazz, jint fd, jlong address, jint pos, jint limit);
+jint Java_io_netty_channel_epoll_Native_close0(JNIEnv* env, jclass clazz, jint fd);
+jint Java_io_netty_channel_epoll_Native_shutdown0(JNIEnv* env, jclass clazz, jint fd, jboolean read, jboolean write);
+jint Java_io_netty_channel_epoll_Native_socketStream(JNIEnv* env, jclass clazz);
+jint Java_io_netty_channel_epoll_Native_socketDgram(JNIEnv* env, jclass clazz);
+jint Java_io_netty_channel_epoll_Native_socketDomain(JNIEnv* env, jclass clazz);
-jint Java_io_netty_channel_epoll_Native_bind(JNIEnv * env, jclass clazz, jint fd, jbyteArray address, jint scopeId, jint port);
-jint Java_io_netty_channel_epoll_Native_listen0(JNIEnv * env, jclass clazz, jint fd, jint backlog);
-jint Java_io_netty_channel_epoll_Native_connect(JNIEnv * env, jclass clazz, jint fd, jbyteArray address, jint scopeId, jint port);
-jint Java_io_netty_channel_epoll_Native_finishConnect0(JNIEnv * env, jclass clazz, jint fd);
-jint Java_io_netty_channel_epoll_Native_accept0(JNIEnv * env, jclass clazz, jint fd);
-jlong Java_io_netty_channel_epoll_Native_sendfile0(JNIEnv *env, jclass clazz, jint fd, jobject fileRegion, jlong base_off, jlong off, jlong len);
-jbyteArray Java_io_netty_channel_epoll_Native_remoteAddress0(JNIEnv * env, jclass clazz, jint fd);
-jbyteArray Java_io_netty_channel_epoll_Native_localAddress0(JNIEnv * env, jclass clazz, jint fd);
-void Java_io_netty_channel_epoll_Native_setReuseAddress(JNIEnv * env, jclass clazz, jint fd, jint optval);
-void Java_io_netty_channel_epoll_Native_setReusePort(JNIEnv * env, jclass clazz, jint fd, jint optval);
-void Java_io_netty_channel_epoll_Native_setTcpNoDelay(JNIEnv *env, jclass clazz, jint fd, jint optval);
-void Java_io_netty_channel_epoll_Native_setReceiveBufferSize(JNIEnv *env, jclass clazz, jint fd, jint optval);
-void Java_io_netty_channel_epoll_Native_setSendBufferSize(JNIEnv *env, jclass clazz, jint fd, jint optval);
-void Java_io_netty_channel_epoll_Native_setKeepAlive(JNIEnv *env, jclass clazz, jint fd, jint optval);
-void Java_io_netty_channel_epoll_Native_setTcpCork(JNIEnv *env, jclass clazz, jint fd, jint optval);
-void Java_io_netty_channel_epoll_Native_setSoLinger(JNIEnv *env, jclass clazz, jint fd, jint optval);
-void Java_io_netty_channel_epoll_Native_setTrafficClass(JNIEnv *env, jclass clazz, jint fd, jint optval);
-void Java_io_netty_channel_epoll_Native_setBroadcast(JNIEnv *env, jclass clazz, jint fd, jint optval);
-void Java_io_netty_channel_epoll_Native_setTcpKeepIdle(JNIEnv *env, jclass clazz, jint fd, jint optval);
-void Java_io_netty_channel_epoll_Native_setTcpKeepIntvl(JNIEnv *env, jclass clazz, jint fd, jint optval);
-void Java_io_netty_channel_epoll_Native_setTcpKeepCnt(JNIEnv *env, jclass clazz, jint fd, jint optval);
+jint Java_io_netty_channel_epoll_Native_bind(JNIEnv* env, jclass clazz, jint fd, jbyteArray address, jint scopeId, jint port);
+jint Java_io_netty_channel_epoll_Native_bindDomainSocket(JNIEnv* env, jclass clazz, jint fd, jstring address);
+jint Java_io_netty_channel_epoll_Native_recvFd0(JNIEnv* env, jclass clazz, jint fd);
+jint Java_io_netty_channel_epoll_Native_sendFd0(JNIEnv* env, jclass clazz, jint socketFd, jint fd);
-jint Java_io_netty_channel_epoll_Native_isReuseAddresss(JNIEnv *env, jclass clazz, jint fd);
-jint Java_io_netty_channel_epoll_Native_isReusePort(JNIEnv *env, jclass clazz, jint fd);
-jint Java_io_netty_channel_epoll_Native_isTcpNoDelay(JNIEnv *env, jclass clazz, jint fd);
-jint Java_io_netty_channel_epoll_Native_getReceiveBufferSize(JNIEnv * env, jclass clazz, jint fd);
-jint Java_io_netty_channel_epoll_Native_getSendBufferSize(JNIEnv *env, jclass clazz, jint fd);
-jint Java_io_netty_channel_epoll_Native_isTcpCork(JNIEnv *env, jclass clazz, jint fd);
-jint Java_io_netty_channel_epoll_Native_getSoLinger(JNIEnv *env, jclass clazz, jint fd);
-jint Java_io_netty_channel_epoll_Native_getTrafficClass(JNIEnv *env, jclass clazz, jint fd);
-jint Java_io_netty_channel_epoll_Native_isBroadcast(JNIEnv *env, jclass clazz, jint fd);
-jint Java_io_netty_channel_epoll_Native_getTcpKeepIdle(JNIEnv *env, jclass clazz, jint fd);
-jint Java_io_netty_channel_epoll_Native_getTcpKeepIntvl(JNIEnv *env, jclass clazz, jint fd);
-jint Java_io_netty_channel_epoll_Native_getTcpKeepCnt(JNIEnv *env, jclass clazz, jint fd);
+jint Java_io_netty_channel_epoll_Native_listen0(JNIEnv* env, jclass clazz, jint fd, jint backlog);
+jint Java_io_netty_channel_epoll_Native_connect(JNIEnv* env, jclass clazz, jint fd, jbyteArray address, jint scopeId, jint port);
+jint Java_io_netty_channel_epoll_Native_connectDomainSocket(JNIEnv* env, jclass clazz, jint fd, jstring address);
+jint Java_io_netty_channel_epoll_Native_finishConnect0(JNIEnv* env, jclass clazz, jint fd);
+jint Java_io_netty_channel_epoll_Native_accept0(JNIEnv* env, jclass clazz, jint fd);
+jlong Java_io_netty_channel_epoll_Native_sendfile0(JNIEnv* env, jclass clazz, jint fd, jobject fileRegion, jlong base_off, jlong off, jlong len);
+jbyteArray Java_io_netty_channel_epoll_Native_remoteAddress0(JNIEnv* env, jclass clazz, jint fd);
+jbyteArray Java_io_netty_channel_epoll_Native_localAddress0(JNIEnv* env, jclass clazz, jint fd);
+void Java_io_netty_channel_epoll_Native_setReuseAddress(JNIEnv* env, jclass clazz, jint fd, jint optval);
+void Java_io_netty_channel_epoll_Native_setReusePort(JNIEnv* env, jclass clazz, jint fd, jint optval);
+void Java_io_netty_channel_epoll_Native_setTcpNoDelay(JNIEnv* env, jclass clazz, jint fd, jint optval);
+void Java_io_netty_channel_epoll_Native_setReceiveBufferSize(JNIEnv* env, jclass clazz, jint fd, jint optval);
+void Java_io_netty_channel_epoll_Native_setSendBufferSize(JNIEnv* env, jclass clazz, jint fd, jint optval);
+void Java_io_netty_channel_epoll_Native_setKeepAlive(JNIEnv* env, jclass clazz, jint fd, jint optval);
+void Java_io_netty_channel_epoll_Native_setTcpCork(JNIEnv* env, jclass clazz, jint fd, jint optval);
+void Java_io_netty_channel_epoll_Native_setSoLinger(JNIEnv* env, jclass clazz, jint fd, jint optval);
+void Java_io_netty_channel_epoll_Native_setTrafficClass(JNIEnv* env, jclass clazz, jint fd, jint optval);
+void Java_io_netty_channel_epoll_Native_setBroadcast(JNIEnv* env, jclass clazz, jint fd, jint optval);
+void Java_io_netty_channel_epoll_Native_setTcpKeepIdle(JNIEnv* env, jclass clazz, jint fd, jint optval);
+void Java_io_netty_channel_epoll_Native_setTcpKeepIntvl(JNIEnv* env, jclass clazz, jint fd, jint optval);
+void Java_io_netty_channel_epoll_Native_setTcpKeepCnt(JNIEnv* env, jclass clazz, jint fd, jint optval);
-jstring Java_io_netty_channel_epoll_Native_kernelVersion(JNIEnv *env, jclass clazz);
-jint Java_io_netty_channel_epoll_Native_iovMax(JNIEnv *env, jclass clazz);
-jint Java_io_netty_channel_epoll_Native_uioMaxIov(JNIEnv *env, jclass clazz);
-jboolean Java_io_netty_channel_epoll_Native_isSupportingSendmmsg(JNIEnv *env, jclass clazz);
+jint Java_io_netty_channel_epoll_Native_isReuseAddresss(JNIEnv* env, jclass clazz, jint fd);
+jint Java_io_netty_channel_epoll_Native_isReusePort(JNIEnv* env, jclass clazz, jint fd);
+jint Java_io_netty_channel_epoll_Native_isTcpNoDelay(JNIEnv* env, jclass clazz, jint fd);
+jint Java_io_netty_channel_epoll_Native_getReceiveBufferSize(JNIEnv* env, jclass clazz, jint fd);
+jint Java_io_netty_channel_epoll_Native_getSendBufferSize(JNIEnv* env, jclass clazz, jint fd);
+jint Java_io_netty_channel_epoll_Native_isTcpCork(JNIEnv* env, jclass clazz, jint fd);
+jint Java_io_netty_channel_epoll_Native_getSoLinger(JNIEnv* env, jclass clazz, jint fd);
+jint Java_io_netty_channel_epoll_Native_getTrafficClass(JNIEnv* env, jclass clazz, jint fd);
+jint Java_io_netty_channel_epoll_Native_isBroadcast(JNIEnv* env, jclass clazz, jint fd);
+jint Java_io_netty_channel_epoll_Native_getTcpKeepIdle(JNIEnv* env, jclass clazz, jint fd);
+jint Java_io_netty_channel_epoll_Native_getTcpKeepIntvl(JNIEnv* env, jclass clazz, jint fd);
+jint Java_io_netty_channel_epoll_Native_getTcpKeepCnt(JNIEnv* env, jclass clazz, jint fd);
-jint Java_io_netty_channel_epoll_Native_errnoEBADF(JNIEnv *env, jclass clazz);
-jint Java_io_netty_channel_epoll_Native_errnoEPIPE(JNIEnv *env, jclass clazz);
-jint Java_io_netty_channel_epoll_Native_errnoEAGAIN(JNIEnv *env, jclass clazz);
-jint Java_io_netty_channel_epoll_Native_errnoEWOULDBLOCK(JNIEnv *env, jclass clazz);
-jint Java_io_netty_channel_epoll_Native_errnoEINPROGRESS(JNIEnv *env, jclass clazz);
-jstring Java_io_netty_channel_epoll_Native_strError(JNIEnv *env, jclass clazz, jint err);
+jstring Java_io_netty_channel_epoll_Native_kernelVersion(JNIEnv* env, jclass clazz);
+jint Java_io_netty_channel_epoll_Native_iovMax(JNIEnv* env, jclass clazz);
+jint Java_io_netty_channel_epoll_Native_uioMaxIov(JNIEnv* env, jclass clazz);
+jboolean Java_io_netty_channel_epoll_Native_isSupportingSendmmsg(JNIEnv* env, jclass clazz);
+
+jint Java_io_netty_channel_epoll_Native_errnoEBADF(JNIEnv* env, jclass clazz);
+jint Java_io_netty_channel_epoll_Native_errnoEPIPE(JNIEnv* env, jclass clazz);
+jint Java_io_netty_channel_epoll_Native_errnoEAGAIN(JNIEnv* env, jclass clazz);
+jint Java_io_netty_channel_epoll_Native_errnoEWOULDBLOCK(JNIEnv* env, jclass clazz);
+jint Java_io_netty_channel_epoll_Native_errnoEINPROGRESS(JNIEnv* env, jclass clazz);
+jstring Java_io_netty_channel_epoll_Native_strError(JNIEnv* env, jclass clazz, jint err);
diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollSocketChannel.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollSocketChannel.java
index bfe1f528698..0d81dbff7d4 100644
--- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollSocketChannel.java
+++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollSocketChannel.java
@@ -82,6 +82,22 @@ public EpollSocketChannel() {
config = new EpollSocketChannelConfig(this);
}
+ /**
+ * Returns the {@code TCP_INFO} for the current socket. See <a href="http://linux.die.net/man/7/tcp">man 7 tcp</a>.
+ */
+ public EpollTcpInfo tcpInfo() {
+ return tcpInfo(new EpollTcpInfo());
+ }
+
+ /**
+ * Updates and returns the {@code TCP_INFO} for the current socket.
+ * See <a href="http://linux.die.net/man/7/tcp">man 7 tcp</a>.
+ */
+ public EpollTcpInfo tcpInfo(EpollTcpInfo info) {
+ Native.tcpInfo(fd, info);
+ return info;
+ }
+
@Override
protected AbstractEpollUnsafe newUnsafe() {
return new EpollSocketUnsafe();
diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollTcpInfo.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollTcpInfo.java
new file mode 100644
index 00000000000..9b47a19c52b
--- /dev/null
+++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollTcpInfo.java
@@ -0,0 +1,193 @@
+/*
+ * Copyright 2014 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License,
+ * version 2.0 (the "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+package io.netty.channel.epoll;
+
+/**
+ * <p>
+ * struct tcp_info
+ * {
+ * __u8 tcpi_state;
+ * __u8 tcpi_ca_state;
+ * __u8 tcpi_retransmits;
+ * __u8 tcpi_probes;
+ * __u8 tcpi_backoff;
+ * __u8 tcpi_options;
+ * __u8 tcpi_snd_wscale : 4, tcpi_rcv_wscale : 4;
+ *
+ * __u32 tcpi_rto;
+ * __u32 tcpi_ato;
+ * __u32 tcpi_snd_mss;
+ * __u32 tcpi_rcv_mss;
+ *
+ * __u32 tcpi_unacked;
+ * __u32 tcpi_sacked;
+ * __u32 tcpi_lost;
+ * __u32 tcpi_retrans;
+ * __u32 tcpi_fackets;
+ *
+ * __u32 tcpi_last_data_sent;
+ * __u32 tcpi_last_ack_sent;
+ * __u32 tcpi_last_data_recv;
+ * __u32 tcpi_last_ack_recv;
+ *
+ * __u32 tcpi_pmtu;
+ * __u32 tcpi_rcv_ssthresh;
+ * __u32 tcpi_rtt;
+ * __u32 tcpi_rttvar;
+ * __u32 tcpi_snd_ssthresh;
+ * __u32 tcpi_snd_cwnd;
+ * __u32 tcpi_advmss;
+ * __u32 tcpi_reordering;
+ *
+ * __u32 tcpi_rcv_rtt;
+ * __u32 tcpi_rcv_space;
+ *
+ * __u32 tcpi_total_retrans;
+ * };
+ * </p>
+ */
+public final class EpollTcpInfo {
+
+ final int[] info = new int[32];
+
+ public int state() {
+ return info[0] & 0xFF;
+ }
+
+ public int caState() {
+ return info[1] & 0xFF;
+ }
+
+ public int retransmits() {
+ return info[2] & 0xFF;
+ }
+
+ public int probes() {
+ return info[3] & 0xFF;
+ }
+
+ public int backoff() {
+ return info[4] & 0xFF;
+ }
+
+ public int options() {
+ return info[5] & 0xFF;
+ }
+
+ public int sndWscale() {
+ return info[6] & 0xFF;
+ }
+
+ public int rcvWscale() {
+ return info[7] & 0xFF;
+ }
+
+ public long rto() {
+ return info[8] & 0xFFFFFFFFL;
+ }
+
+ public long ato() {
+ return info[9] & 0xFFFFFFFFL;
+ }
+
+ public long sndMss() {
+ return info[10] & 0xFFFFFFFFL;
+ }
+
+ public long rcvMss() {
+ return info[11] & 0xFFFFFFFFL;
+ }
+
+ public long unacked() {
+ return info[12] & 0xFFFFFFFFL;
+ }
+
+ public long sacked() {
+ return info[13] & 0xFFFFFFFFL;
+ }
+
+ public long lost() {
+ return info[14] & 0xFFFFFFFFL;
+ }
+
+ public long retrans() {
+ return info[15] & 0xFFFFFFFFL;
+ }
+
+ public long fackets() {
+ return info[16] & 0xFFFFFFFFL;
+ }
+
+ public long lastDataSent() {
+ return info[17] & 0xFFFFFFFFL;
+ }
+
+ public long lastAckSent() {
+ return info[18] & 0xFFFFFFFFL;
+ }
+
+ public long lastDataRecv() {
+ return info[19] & 0xFFFFFFFFL;
+ }
+
+ public long lastAckRecv() {
+ return info[20] & 0xFFFFFFFFL;
+ }
+
+ public long pmtu() {
+ return info[21] & 0xFFFFFFFFL;
+ }
+
+ public long rcvSsthresh() {
+ return info[22] & 0xFFFFFFFFL;
+ }
+
+ public long rtt() {
+ return info[23] & 0xFFFFFFFFL;
+ }
+
+ public long rttvar() {
+ return info[24] & 0xFFFFFFFFL;
+ }
+
+ public long sndSsthresh() {
+ return info[25] & 0xFFFFFFFFL;
+ }
+
+ public long sndCwnd() {
+ return info[26] & 0xFFFFFFFFL;
+ }
+
+ public long advmss() {
+ return info[27] & 0xFFFFFFFFL;
+ }
+
+ public long reordering() {
+ return info[28] & 0xFFFFFFFFL;
+ }
+
+ public long rcvRtt() {
+ return info[29] & 0xFFFFFFFFL;
+ }
+
+ public long rcvSpace() {
+ return info[30] & 0xFFFFFFFFL;
+ }
+
+ public long totalRetrans() {
+ return info[31] & 0xFFFFFFFFL;
+ }
+}
diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/Native.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/Native.java
index 4986e3f20b0..46c1903da05 100644
--- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/Native.java
+++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/Native.java
@@ -505,6 +505,12 @@ public static void shutdown(int fd, boolean read, boolean write) throws IOExcept
public static native void setTcpKeepIntvl(int fd, int seconds);
public static native void setTcpKeepCnt(int fd, int probes);
+ public static void tcpInfo(int fd, EpollTcpInfo info) {
+ tcpInfo0(fd, info.info);
+ }
+
+ private static native void tcpInfo0(int fd, int[] array);
+
private static NativeInetAddress toNativeInetAddress(InetAddress addr) {
byte[] bytes = addr.getAddress();
if (addr instanceof Inet6Address) {
| diff --git a/transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollSocketChannelTest.java b/transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollSocketChannelTest.java
new file mode 100644
index 00000000000..9637006217e
--- /dev/null
+++ b/transport-native-epoll/src/test/java/io/netty/channel/epoll/EpollSocketChannelTest.java
@@ -0,0 +1,101 @@
+/*
+ * Copyright 2014 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License,
+ * version 2.0 (the "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+package io.netty.channel.epoll;
+
+import io.netty.bootstrap.Bootstrap;
+import io.netty.channel.ChannelInboundHandlerAdapter;
+import io.netty.channel.EventLoopGroup;
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.net.InetSocketAddress;
+
+public class EpollSocketChannelTest {
+
+ @Test
+ public void testTcpInfo() throws Exception {
+ EventLoopGroup group = new EpollEventLoopGroup(1);
+
+ try {
+ Bootstrap bootstrap = new Bootstrap();
+ EpollSocketChannel ch = (EpollSocketChannel) bootstrap.group(group)
+ .channel(EpollSocketChannel.class)
+ .handler(new ChannelInboundHandlerAdapter())
+ .bind(new InetSocketAddress(0)).syncUninterruptibly().channel();
+ EpollTcpInfo info = ch.tcpInfo();
+ assertTcpInfo0(info);
+ ch.close().syncUninterruptibly();
+ } finally {
+ group.shutdownGracefully();
+ }
+ }
+
+ @Test
+ public void testTcpInfoReuse() throws Exception {
+ EventLoopGroup group = new EpollEventLoopGroup(1);
+
+ try {
+ Bootstrap bootstrap = new Bootstrap();
+ EpollSocketChannel ch = (EpollSocketChannel) bootstrap.group(group)
+ .channel(EpollSocketChannel.class)
+ .handler(new ChannelInboundHandlerAdapter())
+ .bind(new InetSocketAddress(0)).syncUninterruptibly().channel();
+ EpollTcpInfo info = new EpollTcpInfo();
+ ch.tcpInfo(info);
+ assertTcpInfo0(info);
+ ch.close().syncUninterruptibly();
+ } finally {
+ group.shutdownGracefully();
+ }
+ }
+
+ private static void assertTcpInfo0(EpollTcpInfo info) throws Exception {
+ Assert.assertNotNull(info);
+
+ Assert.assertTrue(info.state() >= 0);
+ Assert.assertTrue(info.caState() >= 0);
+ Assert.assertTrue(info.retransmits() >= 0);
+ Assert.assertTrue(info.probes() >= 0);
+ Assert.assertTrue(info.backoff() >= 0);
+ Assert.assertTrue(info.options() >= 0);
+ Assert.assertTrue(info.sndWscale() >= 0);
+ Assert.assertTrue(info.rcvWscale() >= 0);
+ Assert.assertTrue(info.rto() >= 0);
+ Assert.assertTrue(info.ato() >= 0);
+ Assert.assertTrue(info.sndMss() >= 0);
+ Assert.assertTrue(info.rcvMss() >= 0);
+ Assert.assertTrue(info.unacked() >= 0);
+ Assert.assertTrue(info.sacked() >= 0);
+ Assert.assertTrue(info.lost() >= 0);
+ Assert.assertTrue(info.retrans() >= 0);
+ Assert.assertTrue(info.fackets() >= 0);
+ Assert.assertTrue(info.lastDataSent() >= 0);
+ Assert.assertTrue(info.lastAckSent() >= 0);
+ Assert.assertTrue(info.lastDataRecv() >= 0);
+ Assert.assertTrue(info.lastAckRecv() >= 0);
+ Assert.assertTrue(info.pmtu() >= 0);
+ Assert.assertTrue(info.rcvSsthresh() >= 0);
+ Assert.assertTrue(info.rtt() >= 0);
+ Assert.assertTrue(info.rttvar() >= 0);
+ Assert.assertTrue(info.sndSsthresh() >= 0);
+ Assert.assertTrue(info.sndCwnd() >= 0);
+ Assert.assertTrue(info.advmss() >= 0);
+ Assert.assertTrue(info.reordering() >= 0);
+ Assert.assertTrue(info.rcvRtt() >= 0);
+ Assert.assertTrue(info.rcvSpace() >= 0);
+ Assert.assertTrue(info.totalRetrans() >= 0);
+ }
+}
| val | train | 2015-01-26T21:16:18 | 2014-11-06T19:58:41Z | normanmaurer | val |
netty/netty/3326_3330 | netty/netty | netty/netty/3326 | netty/netty/3330 | [
"timestamp(timedelta=75.0, similarity=0.8422273553057141)"
] | fb0c78885f1c9a1299357ced5e6d73941588c826 | 3935bc9c9055c5af526a5f2cc7d5044a9025581f | [
"@dittos I confirm.\nThe issue is in splitMultipartHeader where, once Content-Disposition is found, we split the data according to ';'. In this case, this is wrong.\nNothing prevents a filename to include a \";\" (even if ';' is really not compatible with most of OS like Linux).\nI do not have time right now to mak... | [] | 2015-01-15T08:39:12Z | [
"defect"
] | HTTP multipart request with filename containing ";" causes exception | Netty version: 3.10.0.Final
When I send an HTTP request like this:
```
POST / HTTP/1.1
Content-Type: multipart/form-data; boundary=dLV9Wyq26L_-JQxk6ferf-RT153LhOO
--dLV9Wyq26L_-JQxk6ferf-RT153LhOO
Content-Disposition: form-data; name="file"; filename="tmp;0.txt"
Content-Type: image/gif
asdf
--dLV9Wyq26L_-JQxk6ferf-RT153LhOO--
```
It causes the following stack trace.
```
java.lang.ArrayIndexOutOfBoundsException: 1
at org.jboss.netty.handler.codec.http.multipart.HttpPostMultipartRequestDecoder.findMultipartDisposition(HttpPostMultipartRequestDecoder.java:540)
at org.jboss.netty.handler.codec.http.multipart.HttpPostMultipartRequestDecoder.decodeMultipart(HttpPostMultipartRequestDecoder.java:345)
at org.jboss.netty.handler.codec.http.multipart.HttpPostMultipartRequestDecoder.findMultipartDelimiter(HttpPostMultipartRequestDecoder.java:482)
at org.jboss.netty.handler.codec.http.multipart.HttpPostMultipartRequestDecoder.decodeMultipart(HttpPostMultipartRequestDecoder.java:332)
at org.jboss.netty.handler.codec.http.multipart.HttpPostMultipartRequestDecoder.parseBodyMultipart(HttpPostMultipartRequestDecoder.java:294)
at org.jboss.netty.handler.codec.http.multipart.HttpPostMultipartRequestDecoder.parseBody(HttpPostMultipartRequestDecoder.java:265)
at org.jboss.netty.handler.codec.http.multipart.HttpPostMultipartRequestDecoder.<init>(HttpPostMultipartRequestDecoder.java:167)
at org.jboss.netty.handler.codec.http.multipart.HttpPostRequestDecoder.<init>(HttpPostRequestDecoder.java:80)
at org.jboss.netty.handler.codec.http.multipart.HttpPostRequestDecoder.<init>(HttpPostRequestDecoder.java:56)
...
```
Test case (add to `HttpPostRequestDecoderTest`):
``` java
final String boundary = "dLV9Wyq26L_-JQxk6ferf-RT153LhOO";
final DefaultHttpRequest req = new DefaultHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.POST,
"http://localhost");
req.headers().add(HttpHeaders.Names.CONTENT_TYPE, "multipart/form-data; boundary=" + boundary);
// Force to use memory-based data.
final DefaultHttpDataFactory inMemoryFactory = new DefaultHttpDataFactory(false);
final String data = "asdf";
final String filename = "tmp;0.txt";
final String body =
"--" + boundary + "\r\n" +
"Content-Disposition: form-data; name=\"file\"; filename=\"" + filename + "\"\r\n" +
"Content-Type: image/gif\r\n" +
"\r\n" +
data + "\r\n" +
"--" + boundary + "--\r\n";
req.setContent(ChannelBuffers.wrappedBuffer(body.getBytes(CharsetUtil.UTF_8.name())));
// Create decoder instance to test.
final HttpPostRequestDecoder decoder = new HttpPostRequestDecoder(inMemoryFactory, req);
assertFalse(decoder.getBodyHttpDatas().isEmpty());
assertEquals(filename, ((MemoryFileUpload) decoder.getBodyHttpDatas().get(0)).getFilename());
```
Although I only tested the 3.x release, it seems that 4.x/5.0 also have the same bug.
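
For illustration, here is a minimal, self-contained sketch of the difference between a naive `split(';')` and a quote-aware split. This is hypothetical demo code, not Netty's actual `splitMultipartHeader` implementation, and it omits the backslash-escape handling that a complete fix also needs:

```java
import java.util.ArrayList;
import java.util.List;

public final class HeaderSplitDemo {

    /** Splits on ';' only when the character is outside double quotes. */
    public static List<String> splitOutsideQuotes(String value) {
        List<String> parts = new ArrayList<String>();
        boolean inQuote = false;
        int start = 0;
        for (int i = 0; i < value.length(); i++) {
            char c = value.charAt(i);
            if (c == '"') {
                inQuote = !inQuote;
            } else if (c == ';' && !inQuote) {
                parts.add(value.substring(start, i).trim());
                start = i + 1;
            }
        }
        parts.add(value.substring(start).trim());
        return parts;
    }

    public static void main(String[] args) {
        String header = "form-data; name=\"file\"; filename=\"tmp;0.txt\"";
        // Naive split produces 4 tokens and cuts the quoted filename in half.
        System.out.println(header.split(";").length); // 4
        // Quote-aware split keeps the quoted filename intact.
        System.out.println(splitOutsideQuotes(header)); // [form-data, name="file", filename="tmp;0.txt"]
    }
}
```

The quote-aware variant is what makes a `filename` containing `;` survive parsing instead of producing a short token array and the `ArrayIndexOutOfBoundsException` above.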
| [
"codec-http/src/main/java/io/netty/handler/codec/http/multipart/HttpPostMultipartRequestDecoder.java"
] | [
"codec-http/src/main/java/io/netty/handler/codec/http/multipart/HttpPostMultipartRequestDecoder.java"
] | [
"codec-http/src/test/java/io/netty/handler/codec/http/multipart/HttpPostRequestDecoderTest.java"
] | diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/multipart/HttpPostMultipartRequestDecoder.java b/codec-http/src/main/java/io/netty/handler/codec/http/multipart/HttpPostMultipartRequestDecoder.java
index 012cdb339cf..e242937ad0c 100644
--- a/codec-http/src/main/java/io/netty/handler/codec/http/multipart/HttpPostMultipartRequestDecoder.java
+++ b/codec-http/src/main/java/io/netty/handler/codec/http/multipart/HttpPostMultipartRequestDecoder.java
@@ -1784,7 +1784,7 @@ private static String[] splitMultipartHeader(String sb) {
String svalue = sb.substring(valueStart, valueEnd);
String[] values;
if (svalue.indexOf(';') >= 0) {
- values = StringUtil.split(svalue, ';');
+ values = splitMultipartHeaderValues(svalue);
} else {
values = StringUtil.split(svalue, ',');
}
@@ -1797,4 +1797,38 @@ private static String[] splitMultipartHeader(String sb) {
}
return array;
}
+
+ /**
+ * Split one header value in Multipart
+ * @return an array of String where values that were separated by ';' or ','
+ */
+ private static String[] splitMultipartHeaderValues(String svalue) {
+ ArrayList<String> values = new ArrayList<String>(1);
+ boolean inQuote = false;
+ boolean escapeNext = false;
+ int start = 0;
+ for (int i = 0; i < svalue.length(); i++) {
+ char c = svalue.charAt(i);
+ if (inQuote) {
+ if (escapeNext) {
+ escapeNext = false;
+ } else {
+ if (c == '\\') {
+ escapeNext = true;
+ } else if (c == '"') {
+ inQuote = false;
+ }
+ }
+ } else {
+ if (c == '"') {
+ inQuote = true;
+ } else if (c == ';') {
+ values.add(svalue.substring(start, i));
+ start = i + 1;
+ }
+ }
+ }
+ values.add(svalue.substring(start));
+ return values.toArray(new String[values.size()]);
+ }
}
| diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/multipart/HttpPostRequestDecoderTest.java b/codec-http/src/test/java/io/netty/handler/codec/http/multipart/HttpPostRequestDecoderTest.java
index a6514df2932..9aa46337b11 100644
--- a/codec-http/src/test/java/io/netty/handler/codec/http/multipart/HttpPostRequestDecoderTest.java
+++ b/codec-http/src/test/java/io/netty/handler/codec/http/multipart/HttpPostRequestDecoderTest.java
@@ -321,4 +321,30 @@ public void testChunkCorrect() throws Exception {
decoder.offer(part3);
decoder.offer(part4);
}
+
+ // See https://github.com/netty/netty/issues/3326
+ @Test
+ public void testFilenameContainingSemicolon() throws Exception {
+ final String boundary = "dLV9Wyq26L_-JQxk6ferf-RT153LhOO";
+ final DefaultFullHttpRequest req = new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.POST,
+ "http://localhost");
+ req.headers().add(HttpHeaders.Names.CONTENT_TYPE, "multipart/form-data; boundary=" + boundary);
+ // Force to use memory-based data.
+ final DefaultHttpDataFactory inMemoryFactory = new DefaultHttpDataFactory(false);
+ final String data = "asdf";
+ final String filename = "tmp;0.txt";
+ final String body =
+ "--" + boundary + "\r\n" +
+ "Content-Disposition: form-data; name=\"file\"; filename=\"" + filename + "\"\r\n" +
+ "Content-Type: image/gif\r\n" +
+ "\r\n" +
+ data + "\r\n" +
+ "--" + boundary + "--\r\n";
+
+ req.content().writeBytes(body.getBytes(CharsetUtil.UTF_8.name()));
+ // Create decoder instance to test.
+ final HttpPostRequestDecoder decoder = new HttpPostRequestDecoder(inMemoryFactory, req);
+ assertFalse(decoder.getBodyHttpDatas().isEmpty());
+ decoder.destroy();
+ }
}
| train | train | 2015-01-13T10:14:47 | 2015-01-13T09:28:39Z | dittos | val |
netty/netty/3369_3371 | netty/netty | netty/netty/3369 | netty/netty/3371 | [
"timestamp(timedelta=44275.0, similarity=0.9544150316866737)"
] | 201d9ed9badaa70c941943268d974e61c4e84e9d | c0fdfc57689f1344eef7d503adb62f9d4dc5f8cd | [
"@sammychen105 sounds legit... would you mind open a PR with a fix ?\n",
"@normanmaurer ok, a PR(#3371) has been submit.\n",
"Fixed by #3371\n"
] | [] | 2015-01-29T08:30:50Z | [
"defect"
] | invalid reference decrement in AbstractNioByteChannel | My test server stopped working today, and when I searched the logs, I found the following exception:
```
[WARN ]2015-01-29 13:49:50,654,[NioEventLoop], Unexpected exception in the selector loop.
io.netty.util.IllegalReferenceCountException: refCnt: 0, decrement: 1
    at io.netty.buffer.AbstractReferenceCountedByteBuf.release(AbstractReferenceCountedByteBuf.java:102) ~[netty-all-4.0.19.Final.jar:4.0.19.Final]
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.handleReadException(AbstractNioByteChannel.java:78) ~[netty-all-4.0.19.Final.jar:4.0.19.Final]
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:157) ~[netty-all-4.0.19.Final.jar:4.0.19.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:507) ~[netty-all-4.0.19.Final.jar:4.0.19.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:464) ~[netty-all-4.0.19.Final.jar:4.0.19.Final]
```
It seems to be a bug in Netty, so I dug into the code.
In `AbstractNioByteChannel$NioByteUnsafe.read`:
``` java
do {
byteBuf = allocator.ioBuffer(byteBufCapacity);
int writable = byteBuf.writableBytes();
int localReadAmount = doReadBytes(byteBuf);
if (localReadAmount <= 0) {
// not was read release the buffer
** byteBuf.release(); **
close = localReadAmount < 0;
break;
}
```
And in `handleReadException()`:
``` java
private void handleReadException(ChannelPipeline pipeline,
ByteBuf byteBuf, Throwable cause, boolean close) {
if (byteBuf != null) {
if (byteBuf.isReadable()) {
setReadPending(false);
pipeline.fireChannelRead(byteBuf);
} else {
** byteBuf.release(); **
}
}
pipeline.fireChannelReadComplete();
pipeline.fireExceptionCaught(cause);
if (close || cause instanceof IOException) {
closeOnRead(pipeline);
}
}
```
We should set `byteBuf` to `null` immediately after it is released; otherwise it **may** be double-freed. One possible fix is:
``` java
if (localReadAmount <= 0) {
// not was read release the buffer
byteBuf.release();
byteBuf = null;
close = localReadAmount < 0;
break;
}
```
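
The effect of the fix can be illustrated with a small stand-alone sketch. The `RefCounted` class below is a simplified stand-in for netty's reference-counted buffers (not the real API): clearing the local reference after `release()` turns the later exception-handling path into a no-op instead of a second decrement.

```java
public class ReleaseGuardExample {
    /** Simplified stand-in for a reference-counted buffer. */
    static final class RefCounted {
        private int refCnt = 1;

        void release() {
            if (refCnt == 0) {
                // Mirrors netty's IllegalReferenceCountException: refCnt: 0, decrement: 1
                throw new IllegalStateException("refCnt: 0, decrement: 1");
            }
            refCnt--;
        }
    }

    public static void main(String[] args) {
        RefCounted byteBuf = new RefCounted();

        // Read loop: nothing was read, so release the buffer ...
        byteBuf.release();
        // ... and clear the local reference so later paths cannot touch it again.
        byteBuf = null;

        // Exception-handling path (as in handleReadException): release only if still held.
        if (byteBuf != null) {
            byteBuf.release(); // skipped: no double free
        }
        System.out.println("no double release");
    }
}
```

Without the `byteBuf = null` line, the second `release()` would run and throw, which is exactly the stack trace above.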
| [
"transport/src/main/java/io/netty/channel/nio/AbstractNioByteChannel.java"
] | [
"transport/src/main/java/io/netty/channel/nio/AbstractNioByteChannel.java"
] | [] | diff --git a/transport/src/main/java/io/netty/channel/nio/AbstractNioByteChannel.java b/transport/src/main/java/io/netty/channel/nio/AbstractNioByteChannel.java
index ccf8f54a821..b4e5b190498 100644
--- a/transport/src/main/java/io/netty/channel/nio/AbstractNioByteChannel.java
+++ b/transport/src/main/java/io/netty/channel/nio/AbstractNioByteChannel.java
@@ -116,6 +116,7 @@ public void read() {
if (localReadAmount <= 0) {
// not was read release the buffer
byteBuf.release();
+ byteBuf = null;
close = localReadAmount < 0;
break;
}
| null | train | train | 2015-01-27T07:07:18 | 2015-01-29T06:19:40Z | sammychen105 | val |
netty/netty/3362_3379 | netty/netty | netty/netty/3362 | netty/netty/3379 | [
"timestamp(timedelta=67.0, similarity=0.9396625537231835)"
] | 6b8ec6b781b3413c3aa98c3e79a562598be83d2d | 5de6e1c517bb8d4f96949e9ff55f07b492f59d46 | [
"That does sound a bit fishy...I can't give history for why this was done but I'm guessing the intention was to send something to the other end (as opposed to relying on timeouts or other failure detection mechanisms)? No matter what the intention was I agree based upon your description it seems like we are not me... | [] | 2015-01-30T07:11:34Z | [] | Possible wrong behavior in `HttpResponseDecoder`/`HttpRequestDecoder` for large header/initline/content | ##### Netty Version
4.0, 4.1, 5.0
##### Description
[`HttpResponseDecoder`](https://github.com/netty/netty/blob/master/codec-http/src/main/java/io/netty/handler/codec/http/HttpResponseDecoder.java#L118) and [`HttpRequestDecoder`](https://github.com/netty/netty/blob/master/codec-http/src/main/java/io/netty/handler/codec/http/HttpRequestDecoder.java#L87) in the event when the max configured sizes for HTTP initial line, headers or content is breached, sends a `DefaultHttpResponse` and `DefaultHttpRequest` respectively. After this [`HttpObjectDecoder`](https://github.com/netty/netty/blob/master/codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectDecoder.java#L369) gets into `BAD_MESSAGE` state and ignores any other data received on this connection.
The combination of the above two behaviors, means that the decoded response/request are not complete (absence of sending `LastHTTPContent`). So, any code, waiting for a complete message will have to additionally check for decoder result to follow the correct semantics of HTTP.
##### Question
Is there a reason why we do not want to raise an exception in the pipeline instead of propagating an invalid HTTP message?
If yes, should we send a `DefaultFullHttpResponse`/`DefaultFullHttpRequest` instead, so that at least the semantics of a complete HTTP request/response are honored?
I can send a PR for this if I understand what should be the correct behavior here.
_PS: If the issue description is not clear, I can create a quick test for it._
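
As a rough illustration of the consumer-side problem, here is a stdlib-only model (the types below are stand-ins, not netty's real `DecoderResult`/`LastHttpContent` classes): a consumer that waits only for the last-content marker never completes on an invalid message, while one that also checks `isFailure()` does.

```java
import java.util.List;

public class DecoderResultExample {
    /** Simplified stand-in for netty's DecoderResult. */
    static final class DecoderResult {
        static final DecoderResult SUCCESS = new DecoderResult(false);
        static final DecoderResult FAILURE = new DecoderResult(true);
        private final boolean failure;
        private DecoderResult(boolean failure) { this.failure = failure; }
        boolean isFailure() { return failure; }
    }

    /** Simplified stand-in for a decoded HTTP object. */
    static class HttpObject {
        final DecoderResult result;
        HttpObject(DecoderResult result) { this.result = result; }
    }

    /** Marker for the object that normally terminates a request/response. */
    static final class LastContent extends HttpObject {
        LastContent() { super(DecoderResult.SUCCESS); }
    }

    /**
     * Returns true once a complete message has been seen. A decode failure is
     * treated as the end of the (broken) message; without that extra branch,
     * an invalid message would never be considered complete.
     */
    static boolean messageComplete(List<HttpObject> received) {
        for (HttpObject o : received) {
            if (o.result.isFailure()) {
                return true;
            }
            if (o instanceof LastContent) {
                return true;
            }
        }
        return false;
    }
}
```

Sending a full message on failure would let consumers drop the `isFailure()` branch and rely on the usual end-of-message signal alone.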
| [
"codec-http/src/main/java/io/netty/handler/codec/http/HttpRequestDecoder.java",
"codec-http/src/main/java/io/netty/handler/codec/http/HttpResponseDecoder.java",
"codec-http/src/main/java/io/netty/handler/codec/rtsp/RtspRequestDecoder.java",
"codec-http/src/main/java/io/netty/handler/codec/rtsp/RtspResponseDec... | [
"codec-http/src/main/java/io/netty/handler/codec/http/HttpRequestDecoder.java",
"codec-http/src/main/java/io/netty/handler/codec/http/HttpResponseDecoder.java",
"codec-http/src/main/java/io/netty/handler/codec/rtsp/RtspRequestDecoder.java",
"codec-http/src/main/java/io/netty/handler/codec/rtsp/RtspResponseDec... | [] | diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/HttpRequestDecoder.java b/codec-http/src/main/java/io/netty/handler/codec/http/HttpRequestDecoder.java
index bae740864f5..0fb0339e8ff 100644
--- a/codec-http/src/main/java/io/netty/handler/codec/http/HttpRequestDecoder.java
+++ b/codec-http/src/main/java/io/netty/handler/codec/http/HttpRequestDecoder.java
@@ -84,7 +84,7 @@ protected HttpMessage createMessage(String[] initialLine) throws Exception {
@Override
protected HttpMessage createInvalidMessage() {
- return new DefaultHttpRequest(HttpVersion.HTTP_1_0, HttpMethod.GET, "/bad-request", validateHeaders);
+ return new DefaultFullHttpRequest(HttpVersion.HTTP_1_0, HttpMethod.GET, "/bad-request", validateHeaders);
}
@Override
diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/HttpResponseDecoder.java b/codec-http/src/main/java/io/netty/handler/codec/http/HttpResponseDecoder.java
index 982f98286b1..bbb747057b3 100644
--- a/codec-http/src/main/java/io/netty/handler/codec/http/HttpResponseDecoder.java
+++ b/codec-http/src/main/java/io/netty/handler/codec/http/HttpResponseDecoder.java
@@ -115,7 +115,7 @@ protected HttpMessage createMessage(String[] initialLine) {
@Override
protected HttpMessage createInvalidMessage() {
- return new DefaultHttpResponse(HttpVersion.HTTP_1_0, UNKNOWN_STATUS, validateHeaders);
+ return new DefaultFullHttpResponse(HttpVersion.HTTP_1_0, UNKNOWN_STATUS, validateHeaders);
}
@Override
diff --git a/codec-http/src/main/java/io/netty/handler/codec/rtsp/RtspRequestDecoder.java b/codec-http/src/main/java/io/netty/handler/codec/rtsp/RtspRequestDecoder.java
index 2222ee91c5c..6be4d403a68 100644
--- a/codec-http/src/main/java/io/netty/handler/codec/rtsp/RtspRequestDecoder.java
+++ b/codec-http/src/main/java/io/netty/handler/codec/rtsp/RtspRequestDecoder.java
@@ -17,6 +17,7 @@
import io.netty.buffer.ByteBuf;
import io.netty.handler.codec.TooLongFrameException;
+import io.netty.handler.codec.http.DefaultFullHttpRequest;
import io.netty.handler.codec.http.DefaultHttpRequest;
import io.netty.handler.codec.http.HttpMessage;
import io.netty.handler.codec.http.HttpRequest;
@@ -78,7 +79,7 @@ protected HttpMessage createMessage(String[] initialLine) throws Exception {
@Override
protected HttpMessage createInvalidMessage() {
- return new DefaultHttpRequest(RtspVersions.RTSP_1_0, RtspMethods.OPTIONS, "/bad-request", validateHeaders);
+ return new DefaultFullHttpRequest(RtspVersions.RTSP_1_0, RtspMethods.OPTIONS, "/bad-request", validateHeaders);
}
@Override
diff --git a/codec-http/src/main/java/io/netty/handler/codec/rtsp/RtspResponseDecoder.java b/codec-http/src/main/java/io/netty/handler/codec/rtsp/RtspResponseDecoder.java
index cdf49902a43..0234c6773b7 100644
--- a/codec-http/src/main/java/io/netty/handler/codec/rtsp/RtspResponseDecoder.java
+++ b/codec-http/src/main/java/io/netty/handler/codec/rtsp/RtspResponseDecoder.java
@@ -17,6 +17,7 @@
import io.netty.buffer.ByteBuf;
import io.netty.handler.codec.TooLongFrameException;
+import io.netty.handler.codec.http.DefaultFullHttpResponse;
import io.netty.handler.codec.http.DefaultHttpResponse;
import io.netty.handler.codec.http.HttpMessage;
import io.netty.handler.codec.http.HttpResponse;
@@ -83,7 +84,7 @@ protected HttpMessage createMessage(String[] initialLine) throws Exception {
@Override
protected HttpMessage createInvalidMessage() {
- return new DefaultHttpResponse(RtspVersions.RTSP_1_0, UNKNOWN_STATUS, validateHeaders);
+ return new DefaultFullHttpResponse(RtspVersions.RTSP_1_0, UNKNOWN_STATUS, validateHeaders);
}
@Override
| null | train | train | 2015-01-30T06:51:34 | 2015-01-27T02:37:03Z | NiteshKant | val |
netty/netty/3382_3386 | netty/netty | netty/netty/3382 | netty/netty/3386 | [
"timestamp(timedelta=103.0, similarity=0.9495401364451412)"
] | cabecee127264e53acaa093ca45d3900b138b881 | f630793ec27d13591adfb0185c687311a6280af5 | [
"@nmittler - FYI.\n",
"Great initiative.\n",
"@Scottmitch good idea! I'll start looking at this today.\n",
"@nmittler - Thanks for taking care of this!\n",
"@Scottmitch After playing with this a bit locally, I believe we can get rid of all of these with the exception of `isResetSent`. We need to know if w... | [
"nit: Auto format?\n",
"I wanted to leave a comment on [line 233](https://github.com/netty/netty/pull/3386/files#diff-179f52a9de82166ed577701a1412b5a0R233)...but too far from modifications. If the stream state is HALF_CLOSED_REMOTE does it matter if we sent reset or not? The remote side shouldn't be sending dat... | 2015-02-02T18:25:33Z | [
"improvement"
] | Http2Stream state consolidation | Http2Stream has a few methods which represent state. There is the [state()](https://github.com/netty/netty/blob/master/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Stream.java#L46) method which should be checked according to the rules defined in the HTTP/2 spec. There are also some other methods which represent state:
1. [isEndOfStreamReceived](https://github.com/netty/netty/blob/master/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Stream.java#L83)
2. [isEndOfStreamSent](https://github.com/netty/netty/blob/master/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Stream.java#L83)
3. [isResetReceived](https://github.com/netty/netty/blob/master/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Stream.java#L106)
4. [isResetSent](https://github.com/netty/netty/blob/master/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Stream.java#L117)
5. [isReset](https://github.com/netty/netty/blob/master/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Stream.java#L129)
We should investigate if these are called in the correct places, and if they are all required. For example, is `isEndOfStreamSent` necessary if we change the stream's state synchronously in the [writeData](https://github.com/netty/netty/blob/master/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java#L147) or [writeHeaders](https://github.com/netty/netty/blob/master/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java#L210) method?
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java",
"codec-http2/src/main/java/io/netty/ha... | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java",
"codec-http2/src/main/java/io/netty/ha... | [
"codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoderTest.java",
"codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoderTest.java",
"codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2LocalFlowControllerTest.java"
] | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
index 3f72d5743e9..bc3f6d8129a 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
@@ -139,6 +139,11 @@ public Set<Http2Stream> activeStreams() {
return Collections.unmodifiableSet(activeStreams);
}
+ @Override
+ public void deactivate(Http2Stream stream) {
+ deactivateInternal((DefaultStream) stream);
+ }
+
@Override
public Endpoint<Http2LocalFlowController> local() {
return localEndpoint;
@@ -195,7 +200,7 @@ private void removeStream(DefaultStream stream) {
stream.parent().removeChild(stream);
}
- private void activate(DefaultStream stream) {
+ private void activateInternal(DefaultStream stream) {
if (activeStreams.add(stream)) {
// Update the number of active streams initiated by the endpoint.
stream.createdBy().numActiveStreams++;
@@ -207,6 +212,21 @@ private void activate(DefaultStream stream) {
}
}
+ private void deactivateInternal(DefaultStream stream) {
+ if (activeStreams.remove(stream)) {
+ // Update the number of active streams initiated by the endpoint.
+ stream.createdBy().numActiveStreams--;
+
+ // Notify the listeners.
+ for (Listener listener : listeners) {
+ listener.streamInactive(stream);
+ }
+
+ // Mark this stream for removal.
+ removalPolicy.markForRemoval(stream);
+ }
+ }
+
/**
* Simple stream implementation. Streams can be compared to each other by priority.
*/
@@ -218,9 +238,6 @@ private class DefaultStream implements Http2Stream {
private IntObjectMap<DefaultStream> children = newChildMap();
private int totalChildWeights;
private boolean resetSent;
- private boolean resetReceived;
- private boolean endOfStreamSent;
- private boolean endOfStreamReceived;
private PropertyMap data;
DefaultStream(int id) {
@@ -238,39 +255,6 @@ public final State state() {
return state;
}
- @Override
- public boolean isEndOfStreamReceived() {
- return endOfStreamReceived;
- }
-
- @Override
- public Http2Stream endOfStreamReceived() {
- endOfStreamReceived = true;
- return this;
- }
-
- @Override
- public boolean isEndOfStreamSent() {
- return endOfStreamSent;
- }
-
- @Override
- public Http2Stream endOfStreamSent() {
- endOfStreamSent = true;
- return this;
- }
-
- @Override
- public boolean isResetReceived() {
- return resetReceived;
- }
-
- @Override
- public Http2Stream resetReceived() {
- resetReceived = true;
- return this;
- }
-
@Override
public boolean isResetSent() {
return resetSent;
@@ -282,11 +266,6 @@ public Http2Stream resetSent() {
return this;
}
- @Override
- public boolean isReset() {
- return resetSent || resetReceived;
- }
-
@Override
public Object setProperty(Object key, Object value) {
return data.put(key, value);
@@ -409,7 +388,7 @@ public Http2Stream open(boolean halfClosed) throws Http2Exception {
throw streamError(id, PROTOCOL_ERROR, "Attempting to open a stream in an invalid state: " + state);
}
- activate(this);
+ activateInternal(this);
return this;
}
@@ -420,25 +399,10 @@ public Http2Stream close() {
}
state = CLOSED;
- deactivate(this);
-
- // Mark this stream for removal.
- removalPolicy.markForRemoval(this);
+ deactivateInternal(this);
return this;
}
- private void deactivate(DefaultStream stream) {
- if (activeStreams.remove(stream)) {
- // Update the number of active streams initiated by the endpoint.
- stream.createdBy().numActiveStreams--;
-
- // Notify the listeners.
- for (Listener listener : listeners) {
- listener.streamInactive(stream);
- }
- }
- }
-
@Override
public Http2Stream closeLocalSide() {
switch (state) {
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java
index 521087eae11..f2406b2ec15 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java
@@ -27,12 +27,13 @@
import java.util.List;
/**
- * Provides the default implementation for processing inbound frame events
- * and delegates to a {@link Http2FrameListener}
+ * Provides the default implementation for processing inbound frame events and delegates to a
+ * {@link Http2FrameListener}
* <p>
* This class will read HTTP/2 frames and delegate the events to a {@link Http2FrameListener}
* <p>
- * This interface enforces inbound flow control functionality through {@link Http2InboundFlowController}
+ * This interface enforces inbound flow control functionality through
+ * {@link Http2LocalFlowController}
*/
public class DefaultHttp2ConnectionDecoder implements Http2ConnectionDecoder {
private final Http2FrameListener internalFrameListener = new FrameReadListener();
@@ -215,25 +216,22 @@ public int onDataRead(final ChannelHandlerContext ctx, int streamId, ByteBuf dat
// Check if we received a data frame for a stream which is half-closed
Http2Stream stream = connection.requireStream(streamId);
- verifyEndOfStreamNotReceived(stream);
verifyGoAwayNotReceived();
- verifyRstStreamNotReceived(stream);
// We should ignore this frame if RST_STREAM was sent or if GO_AWAY was sent with a
// lower stream ID.
- boolean shouldApplyFlowControl = false;
boolean shouldIgnore = shouldIgnoreFrame(stream, false);
Http2Exception error = null;
switch (stream.state()) {
case OPEN:
case HALF_CLOSED_LOCAL:
- shouldApplyFlowControl = true;
break;
case HALF_CLOSED_REMOTE:
+ // Always fail the stream if we've more data after the remote endpoint half-closed.
+ error = streamError(stream.id(), STREAM_CLOSED, "Stream %d in unexpected state: %s",
+ stream.id(), stream.state());
+ break;
case CLOSED:
- if (stream.isResetSent()) {
- shouldApplyFlowControl = true;
- }
if (!shouldIgnore) {
error = streamError(stream.id(), STREAM_CLOSED, "Stream %d in unexpected state: %s",
stream.id(), stream.state());
@@ -252,11 +250,9 @@ public int onDataRead(final ChannelHandlerContext ctx, int streamId, ByteBuf dat
Http2LocalFlowController flowController = flowController();
try {
// If we should apply flow control, do so now.
- if (shouldApplyFlowControl) {
- flowController.receiveFlowControlledFrame(ctx, stream, data, padding, endOfStream);
- // Update the unconsumed bytes after flow control is applied.
- unconsumedBytes = unconsumedBytes(stream);
- }
+ flowController.receiveFlowControlledFrame(ctx, stream, data, padding, endOfStream);
+ // Update the unconsumed bytes after flow control is applied.
+ unconsumedBytes = unconsumedBytes(stream);
// If we should ignore this frame, do so now.
if (shouldIgnore) {
@@ -288,12 +284,11 @@ public int onDataRead(final ChannelHandlerContext ctx, int streamId, ByteBuf dat
throw e;
} finally {
// If appropriate, returned the processed bytes to the flow controller.
- if (shouldApplyFlowControl && bytesToReturn > 0) {
+ if (bytesToReturn > 0) {
flowController.consumeBytes(ctx, stream, bytesToReturn);
}
if (endOfStream) {
- stream.endOfStreamReceived();
lifecycleManager.closeRemoteSide(stream, ctx.newSucceededFuture());
}
}
@@ -321,7 +316,6 @@ public void onHeadersRead(ChannelHandlerContext ctx, int streamId, Http2Headers
Http2Stream stream = connection.stream(streamId);
verifyGoAwayNotReceived();
- verifyRstStreamNotReceived(stream);
if (shouldIgnoreFrame(stream, false)) {
// Ignore this frame.
return;
@@ -330,8 +324,6 @@ public void onHeadersRead(ChannelHandlerContext ctx, int streamId, Http2Headers
if (stream == null) {
stream = connection.createRemoteStream(streamId).open(endOfStream);
} else {
- verifyEndOfStreamNotReceived(stream);
-
switch (stream.state()) {
case RESERVED_REMOTE:
case IDLE:
@@ -360,7 +352,6 @@ public void onHeadersRead(ChannelHandlerContext ctx, int streamId, Http2Headers
// If the headers completes this stream, close it.
if (endOfStream) {
- stream.endOfStreamReceived();
lifecycleManager.closeRemoteSide(stream, ctx.newSucceededFuture());
}
}
@@ -395,14 +386,11 @@ public void onRstStreamRead(ChannelHandlerContext ctx, int streamId, long errorC
verifyPrefaceReceived();
Http2Stream stream = connection.requireStream(streamId);
- verifyRstStreamNotReceived(stream);
if (stream.state() == CLOSED) {
// RstStream frames must be ignored for closed streams.
return;
}
- stream.resetReceived();
-
listener.onRstStreamRead(ctx, streamId, errorCode);
lifecycleManager.closeStream(stream, ctx.newSucceededFuture());
@@ -503,12 +491,23 @@ public void onPushPromiseRead(ChannelHandlerContext ctx, int streamId, int promi
Http2Stream parentStream = connection.requireStream(streamId);
verifyGoAwayNotReceived();
- verifyRstStreamNotReceived(parentStream);
if (shouldIgnoreFrame(parentStream, false)) {
// Ignore frames for any stream created after we sent a go-away.
return;
}
+ switch (parentStream.state()) {
+ case OPEN:
+ case HALF_CLOSED_LOCAL:
+ // Allowed to receive push promise in these states.
+ break;
+ default:
+ // Connection error.
+ throw connectionError(PROTOCOL_ERROR,
+ "Stream %d in unexpected state for receiving push promise: %s",
+ parentStream.id(), parentStream.state());
+ }
+
// Reserve the push stream based with a priority based on the current stream's priority.
connection.remote().reservePushStream(promisedStreamId, parentStream);
@@ -531,7 +530,6 @@ public void onWindowUpdateRead(ChannelHandlerContext ctx, int streamId, int wind
Http2Stream stream = connection.requireStream(streamId);
verifyGoAwayNotReceived();
- verifyRstStreamNotReceived(stream);
if (stream.state() == CLOSED || shouldIgnoreFrame(stream, false)) {
// Ignore frames for any stream created after we sent a go-away.
return;
@@ -565,18 +563,6 @@ private boolean shouldIgnoreFrame(Http2Stream stream, boolean allowResetSent) {
return stream != null && !allowResetSent && stream.isResetSent();
}
- /**
- * Verifies that a frame has not been received from remote endpoint with the
- * {@code END_STREAM} flag set. If it was, throws a connection error.
- */
- private void verifyEndOfStreamNotReceived(Http2Stream stream) throws Http2Exception {
- if (stream.isEndOfStreamReceived()) {
- // Connection error.
- throw new Http2Exception(STREAM_CLOSED, String.format(
- "Received frame for stream %d after receiving END_STREAM", stream.id()));
- }
- }
-
/**
* Verifies that a GO_AWAY frame was not previously received from the remote endpoint. If it was, throws a
* connection error.
@@ -587,17 +573,5 @@ private void verifyGoAwayNotReceived() throws Http2Exception {
throw connectionError(PROTOCOL_ERROR, "Received frames after receiving GO_AWAY");
}
}
-
- /**
- * Verifies that a RST_STREAM frame was not previously received for the given stream. If it was, throws a
- * stream error.
- */
- private void verifyRstStreamNotReceived(Http2Stream stream) throws Http2Exception {
- if (stream != null && stream.isResetReceived()) {
- // Stream error.
- throw streamError(stream.id(), STREAM_CLOSED,
- "Frame received after receiving RST_STREAM for stream: " + stream.id());
- }
- }
}
}
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java
index 4281c4d41c0..abff8b48cde 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java
@@ -155,12 +155,6 @@ public ChannelFuture writeData(final ChannelHandlerContext ctx, final int stream
}
stream = connection.requireStream(streamId);
- if (stream.isResetSent()) {
- throw new IllegalStateException("Sending data after sending RST_STREAM.");
- }
- if (stream.isEndOfStreamSent()) {
- throw new IllegalStateException("Sending data after sending END_STREAM.");
- }
// Verify that the stream is in the appropriate state for sending DATA frames.
switch (stream.state()) {
@@ -174,8 +168,7 @@ public ChannelFuture writeData(final ChannelHandlerContext ctx, final int stream
}
if (endOfStream) {
- // Indicate that we have sent END_STREAM.
- stream.endOfStreamSent();
+ lifecycleManager.closeLocalSide(stream, promise);
}
} catch (Throwable e) {
data.release();
@@ -206,10 +199,6 @@ public ChannelFuture writeHeaders(final ChannelHandlerContext ctx, final int str
Http2Stream stream = connection.stream(streamId);
if (stream == null) {
stream = connection.createLocalStream(streamId);
- } else if (stream.isResetSent()) {
- throw new IllegalStateException("Sending headers after sending RST_STREAM.");
- } else if (stream.isEndOfStreamSent()) {
- throw new IllegalStateException("Sending headers after sending END_STREAM.");
}
switch (stream.state()) {
@@ -231,9 +220,7 @@ public ChannelFuture writeHeaders(final ChannelHandlerContext ctx, final int str
new FlowControlledHeaders(ctx, stream, headers, streamDependency, weight,
exclusive, padding, endOfStream, promise));
if (endOfStream) {
- // Flag delivery of EOS synchronously to prevent subsequent frames being enqueued in the flow
- // controller.
- stream.endOfStreamSent();
+ lifecycleManager.closeLocalSide(stream, promise);
}
return promise;
} catch (Http2NoMoreStreamIdsException e) {
@@ -556,10 +543,6 @@ public FlowControlledBase(final ChannelHandlerContext ctx, final Http2Stream str
@Override
public void operationComplete(ChannelFuture future) throws Exception {
- if (future == promise && endOfStream) {
- // Special case where we're listening to the original promise and need to close the stream.
- lifecycleManager.closeLocalSide(stream, promise);
- }
if (!future.isSuccess()) {
error(future.cause());
}
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java
index ebb686741ee..810d0d36423 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java
@@ -71,15 +71,6 @@ public void streamActive(Http2Stream stream) {
state(stream).window(initialWindowSize);
}
- @Override
- public void streamHalfClosed(Http2Stream stream) {
- if (!stream.localSideOpen()) {
- // Any pending frames can never be written, clear and
- // write errors for any pending frames.
- state(stream).clear();
- }
- }
-
@Override
public void streamInactive(Http2Stream stream) {
// Any pending frames can never be written, clear and
@@ -212,7 +203,7 @@ private void flush() {
/**
* Writes as many pending bytes as possible, according to stream priority.
*/
- private void writePendingBytes() throws Http2Exception {
+ private void writePendingBytes() {
Http2Stream connectionStream = connection.connectionStream();
int connectionWindow = state(connectionStream).window();
@@ -390,10 +381,11 @@ int writableWindow() {
}
/**
- * Returns the number of pending bytes for this node that will fit within the {@link #window}. This is used for
- * the priority algorithm to determine the aggregate total for {@link #priorityBytes} at each node. Each node
- * only takes into account it's stream window so that when a change occurs to the connection window, these
- * values need not change (i.e. no tree traversal is required).
+ * Returns the number of pending bytes for this node that will fit within the
+ * {@link #window}. This is used for the priority algorithm to determine the aggregate
+ * number of bytes that can be written at each node. Each node only takes into account its
+ * stream window so that when a change occurs to the connection window, these values need
+ * not change (i.e. no tree traversal is required).
*/
int streamableBytes() {
return max(0, min(pendingBytes, window));
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Connection.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Connection.java
index 0cd80c4e46b..a4207077368 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Connection.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Connection.java
@@ -121,9 +121,8 @@ interface Endpoint<F extends Http2FlowController> {
* <li>The connection is marked as going away.</li>
* </ul>
* <p>
- * The caller is expected to {@link Http2Stream#open()} the stream.
+ * The caller is expected to {@link Http2Stream#open(boolean)} the stream.
* @param streamId The ID of the stream
- * @see Http2Stream#open()
* @see Http2Stream#open(boolean)
*/
Http2Stream createStream(int streamId) throws Http2Exception;
@@ -232,16 +231,26 @@ interface Endpoint<F extends Http2FlowController> {
Http2Stream connectionStream();
/**
- * Gets the number of streams that are currently either open or half-closed.
+ * Gets the number of streams that actively in use. It is possible for a stream to be closed
+ * but still be considered active (e.g. there is still pending data to be written).
*/
int numActiveStreams();
/**
- * Gets all streams that are currently either open or half-closed. The returned collection is
+ * Gets all streams that are actively in use. The returned collection is
* sorted by priority.
*/
Collection<Http2Stream> activeStreams();
+ /**
+ * Indicates that the given stream is no longer actively in use. If this stream was active,
+ * after calling this method it will no longer appear in the list returned by
+ * {@link #activeStreams()} and {@link #numActiveStreams()} will be decremented. In addition,
+ * all listeners will be notified of this event via
+ * {@link Listener#streamInactive(Http2Stream)}.
+ */
+ void deactivate(Http2Stream stream);
+
/**
* Indicates whether or not the local endpoint for this connection is the server.
*/
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java
index 63e539df8a4..f5fc5dc6fd6 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java
@@ -23,6 +23,7 @@
import static io.netty.handler.codec.http2.Http2Exception.connectionError;
import static io.netty.handler.codec.http2.Http2Exception.isStreamError;
import static io.netty.util.internal.ObjectUtil.checkNotNull;
+
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelFutureListener;
@@ -36,12 +37,13 @@
import java.util.List;
/**
- * Provides the default implementation for processing inbound frame events
- * and delegates to a {@link Http2FrameListener}
+ * Provides the default implementation for processing inbound frame events and delegates to a
+ * {@link Http2FrameListener}
* <p>
* This class will read HTTP/2 frames and delegate the events to a {@link Http2FrameListener}
* <p>
- * This interface enforces inbound flow control functionality through {@link Http2InboundFlowController}
+ * This interface enforces inbound flow control functionality through
+ * {@link Http2LocalFlowController}
*/
public class Http2ConnectionHandler extends ByteToMessageDecoder implements Http2LifecycleManager {
private final Http2ConnectionDecoder decoder;
@@ -254,14 +256,22 @@ public void closeRemoteSide(Http2Stream stream, ChannelFuture future) {
* @param future the future after which to close the channel.
*/
@Override
- public void closeStream(Http2Stream stream, ChannelFuture future) {
+ public void closeStream(final Http2Stream stream, ChannelFuture future) {
stream.close();
- // If this connection is closing and there are no longer any
- // active streams, close after the current operation completes.
- if (closeListener != null && connection().numActiveStreams() == 0) {
- future.addListener(closeListener);
- }
+ future.addListener(new ChannelFutureListener() {
+ @Override
+ public void operationComplete(ChannelFuture future) throws Exception {
+ // Deactivate this stream.
+ connection().deactivate(stream);
+
+ // If this connection is closing and there are no longer any
+ // active streams, close after the current operation completes.
+ if (closeListener != null && connection().numActiveStreams() == 0) {
+ closeListener.operationComplete(future);
+ }
+ }
+ });
}
/**
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2LifecycleManager.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2LifecycleManager.java
index 0639c4ca309..082d734ef4d 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2LifecycleManager.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2LifecycleManager.java
@@ -27,7 +27,7 @@ public interface Http2LifecycleManager {
/**
* Closes the local side of the given stream. If this causes the stream to be closed, adds a
- * hook to close the channel after the given future completes.
+ * hook to deactivate the stream and close the channel after the given future completes.
*
* @param stream the stream to be half closed.
* @param future If closing, the future after which to close the channel.
@@ -36,7 +36,7 @@ public interface Http2LifecycleManager {
/**
* Closes the remote side of the given stream. If this causes the stream to be closed, adds a
- * hook to close the channel after the given future completes.
+ * hook to deactivate the stream and close the channel after the given future completes.
*
* @param stream the stream to be half closed.
* @param future If closing, the future after which to close the channel.
@@ -44,8 +44,8 @@ public interface Http2LifecycleManager {
void closeRemoteSide(Http2Stream stream, ChannelFuture future);
/**
- * Closes the given stream and adds a hook to close the channel after the given future
- * completes.
+ * Closes the given stream and adds a hook to deactivate the stream and close the channel after
+ * the given future completes.
*
* @param stream the stream to be closed.
* @param future the future after which to close the channel.
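The deferred deactivation these javadoc updates describe can be sketched with a future listener (illustrative stand-ins, not Netty's actual classes): the stream is closed immediately, while deactivation runs only once the given write future completes, mirroring the listener added in `closeStream()` above.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of "close now, deactivate after the future completes".
class LifecycleSketch {
    static class WriteFuture {
        private final List<Runnable> listeners = new ArrayList<>();
        private boolean done;

        void addListener(Runnable r) {
            if (done) {
                r.run();          // future already completed: run immediately
            } else {
                listeners.add(r); // otherwise defer until complete()
            }
        }

        void complete() {
            done = true;
            for (Runnable r : listeners) {
                r.run();
            }
        }
    }

    final List<String> log = new ArrayList<>();

    void closeStream(final String streamId, WriteFuture future) {
        log.add("closed:" + streamId);  // the stream itself closes right away
        // Deactivation (resource release, active-stream bookkeeping) waits
        // for the in-flight write to finish.
        future.addListener(() -> log.add("deactivated:" + streamId));
    }
}
```

Until `complete()` is called, only the "closed" entry is logged; deactivation is strictly ordered after the write.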
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Stream.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Stream.java
index 6d2481facd8..676213d341c 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Stream.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Stream.java
@@ -76,41 +76,6 @@ enum State {
*/
Http2Stream closeRemoteSide();
- /**
- * Indicates whether a frame with {@code END_STREAM} set was received from the remote endpoint
- * for this stream.
- */
- boolean isEndOfStreamReceived();
-
- /**
- * Sets the flag indicating that a frame with {@code END_STREAM} set was received from the
- * remote endpoint for this stream.
- */
- Http2Stream endOfStreamReceived();
-
- /**
- * Indicates whether a frame with {@code END_STREAM} set was sent to the remote endpoint for
- * this stream.
- */
- boolean isEndOfStreamSent();
-
- /**
- * Sets the flag indicating that a frame with {@code END_STREAM} set was sent to the remote
- * endpoint for this stream.
- */
- Http2Stream endOfStreamSent();
-
- /**
- * Indicates whether a {@code RST_STREAM} frame has been received from the remote endpoint for this stream.
- */
- boolean isResetReceived();
-
- /**
- * Sets the flag indicating that a {@code RST_STREAM} frame has been received from the remote endpoint
- * for this stream. This does not affect the stream state.
- */
- Http2Stream resetReceived();
-
/**
* Indicates whether a {@code RST_STREAM} frame has been sent from the local endpoint for this stream.
*/
@@ -122,12 +87,6 @@ enum State {
*/
Http2Stream resetSent();
- /**
- * Indicates whether or not this stream has been reset. This is a short form for
- * {@link #isResetSent()} || {@link #isResetReceived()}.
- */
- boolean isReset();
-
/**
* Indicates whether the remote side of this stream is open (i.e. the state is either
* {@link State#OPEN} or {@link State#HALF_CLOSED_LOCAL}).
| diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoderTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoderTest.java
index 1eb3926cb4e..0737811c378 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoderTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoderTest.java
@@ -225,7 +225,7 @@ public void dataReadAfterGoAwayForStreamInInvalidStateShouldIgnore() throws Exce
final ByteBuf data = dummyData();
try {
decode().onDataRead(ctx, STREAM_ID, data, 10, true);
- verify(localFlow, never()).receiveFlowControlledFrame(eq(ctx), eq(stream), eq(data), eq(10), eq(true));
+ verify(localFlow).receiveFlowControlledFrame(eq(ctx), eq(stream), eq(data), eq(10), eq(true));
verify(listener, never()).onDataRead(eq(ctx), anyInt(), any(ByteBuf.class), anyInt(), anyBoolean());
} finally {
data.release();
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoderTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoderTest.java
index 38348977422..49bd07375e7 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoderTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoderTest.java
@@ -39,6 +39,7 @@
import static org.mockito.Mockito.reset;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;
+
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.buffer.UnpooledByteBufAllocator;
@@ -50,11 +51,6 @@
import io.netty.channel.DefaultChannelPromise;
import io.netty.util.concurrent.ImmediateEventExecutor;
-import java.util.ArrayList;
-import java.util.Collections;
-import java.util.List;
-import java.util.concurrent.atomic.AtomicBoolean;
-
import org.junit.Before;
import org.junit.Test;
import org.mockito.ArgumentCaptor;
@@ -63,6 +59,10 @@
import org.mockito.invocation.InvocationOnMock;
import org.mockito.stubbing.Answer;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+
/**
* Tests for {@link DefaultHttp2ConnectionEncoder}
*/
@@ -239,7 +239,7 @@ public void dataWriteShouldHalfCloseStream() throws Exception {
}
@Test
- public void dataLargerThanMaxFrameSizeShouldBeSplit() throws Http2Exception {
+ public void dataLargerThanMaxFrameSizeShouldBeSplit() {
when(frameSizePolicy.maxFrameSize()).thenReturn(3);
final ByteBuf data = dummyData();
encoder.writeData(ctx, STREAM_ID, data, 0, true, promise);
@@ -254,7 +254,7 @@ public void dataLargerThanMaxFrameSizeShouldBeSplit() throws Http2Exception {
}
@Test
- public void paddingSplitOverFrame() throws Http2Exception {
+ public void paddingSplitOverFrame() {
when(frameSizePolicy.maxFrameSize()).thenReturn(5);
final ByteBuf data = dummyData();
encoder.writeData(ctx, STREAM_ID, data, 5, true, promise);
@@ -272,7 +272,7 @@ public void paddingSplitOverFrame() throws Http2Exception {
}
@Test
- public void frameShouldSplitPadding() throws Http2Exception {
+ public void frameShouldSplitPadding() {
when(frameSizePolicy.maxFrameSize()).thenReturn(5);
ByteBuf data = dummyData();
encoder.writeData(ctx, STREAM_ID, data, 10, true, promise);
@@ -292,18 +292,18 @@ public void frameShouldSplitPadding() throws Http2Exception {
}
@Test
- public void emptyFrameShouldSplitPadding() throws Http2Exception {
+ public void emptyFrameShouldSplitPadding() {
ByteBuf data = Unpooled.buffer(0);
assertSplitPaddingOnEmptyBuffer(data);
assertEquals(0, data.refCnt());
}
@Test
- public void singletonEmptyBufferShouldSplitPadding() throws Http2Exception {
+ public void singletonEmptyBufferShouldSplitPadding() {
assertSplitPaddingOnEmptyBuffer(Unpooled.EMPTY_BUFFER);
}
- private void assertSplitPaddingOnEmptyBuffer(ByteBuf data) throws Http2Exception {
+ private void assertSplitPaddingOnEmptyBuffer(ByteBuf data) {
when(frameSizePolicy.maxFrameSize()).thenReturn(5);
encoder.writeData(ctx, STREAM_ID, data, 10, true, promise);
assertEquals(payloadCaptor.getValue().size(), 10);
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2LocalFlowControllerTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2LocalFlowControllerTest.java
index 691364470e3..1e324426d37 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2LocalFlowControllerTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2LocalFlowControllerTest.java
@@ -273,7 +273,7 @@ private void consumeBytes(int streamId, int numBytes) throws Http2Exception {
controller.consumeBytes(ctx, stream(streamId), numBytes);
}
- private void verifyWindowUpdateSent(int streamId, int windowSizeIncrement) throws Http2Exception {
+ private void verifyWindowUpdateSent(int streamId, int windowSizeIncrement) {
verify(frameWriter).writeWindowUpdate(eq(ctx), eq(streamId), eq(windowSizeIncrement), eq(promise));
}
| train | train | 2015-02-04T20:24:09 | 2015-01-30T19:45:50Z | Scottmitch | val |
netty/netty/3331_3399 | netty/netty | netty/netty/3331 | netty/netty/3399 | [
"timestamp(timedelta=26.0, similarity=0.9581606155234135)"
] | ca8b937c14c42ee0a2be2cffaf48d3e49caad316 | 02af1735d0aa559e432b02548350a0383f1d481a | [
"@ngocdaothanh we love PR's ;)\n"
] | [] | 2015-02-06T19:54:36Z | [] | Update Javassist from 3.18.0-GA to 3.19.0-GA | Javassist 3.19.0-GA released:
https://github.com/jboss-javassist/javassist/releases
| [
"pom.xml"
] | [
"pom.xml"
] | [] | diff --git a/pom.xml b/pom.xml
index 822c5463819..59122ad6349 100644
--- a/pom.xml
+++ b/pom.xml
@@ -31,7 +31,7 @@
<name>Netty</name>
<url>http://netty.io/</url>
<description>
- Netty is an asynchronous event-driven network application framework for
+ Netty is an asynchronous event-driven network application framework for
rapid development of maintainable high performance protocol servers and
clients.
</description>
@@ -512,7 +512,7 @@
<dependency>
<groupId>org.javassist</groupId>
<artifactId>javassist</artifactId>
- <version>3.18.0-GA</version>
+ <version>3.19.0-GA</version>
<scope>compile</scope>
<optional>true</optional>
</dependency>
@@ -636,14 +636,14 @@
</exclusions>
<optional>true</optional>
</dependency>
-
+
<!-- Metrics providers -->
<dependency>
<groupId>com.yammer.metrics</groupId>
<artifactId>metrics-core</artifactId>
<version>2.2.0</version>
</dependency>
-
+
<!-- Common test dependencies -->
<dependency>
<groupId>junit</groupId>
@@ -979,7 +979,7 @@
</configuration>
</execution>
</executions>
- </plugin>
+ </plugin>
<plugin>
<artifactId>maven-source-plugin</artifactId>
<version>2.2.1</version>
@@ -1238,7 +1238,7 @@
<groupId>org.codehaus.mojo</groupId>
<artifactId>build-helper-maven-plugin</artifactId>
<version>1.8</version>
- </plugin>
+ </plugin>
<plugin>
<groupId>org.fusesource.hawtjni</groupId>
<artifactId>maven-hawtjni-plugin</artifactId>
| null | test | train | 2015-02-06T19:31:37 | 2015-01-16T03:59:34Z | ngocdaothanh | val |
netty/netty/3066_3454 | netty/netty | netty/netty/3066 | netty/netty/3454 | [
"timestamp(timedelta=17.0, similarity=0.9305151939322108)"
] | b54857b9a0ec0456e5624b93e244b1d7a2c53ea9 | a07f4e1b09e8556ed9a3e5c55a99a76962ced3a4 | [
"I think it may also not be unregistering closed datagram sockets, but I haven't traced that.\n",
"Sounds like a bug... will investigate\n",
"Fixed... sorry for the delay.\n"
] | [] | 2015-02-27T20:23:17Z | [
"defect"
] | EpollDatagramChannel never calls fireChannelActive() after connect() | NioDatagramChannel extends AbstractNioChannel, and AbstractNioUnsafe calls fulfillConnectPromise(), which in turn calls fireChannelActive().
AbstractEpollUnsafe and EpollDatagramChannel have no equivalent call, so channelActive() is never called on the pipeline.
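The missing transition can be sketched with a minimal stand-in (plain Java; the class below is illustrative only, not Netty's actual API): capture whether the channel was active before connecting, then fire the event only on the inactive-to-active transition, which is exactly the pattern the eventual fix applies.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative stand-in for a datagram channel; not Netty's real classes.
class DatagramChannelSketch {
    private boolean connected;
    final List<String> firedEvents = new ArrayList<>();

    boolean isActive() {
        // The real check also requires the file descriptor to be open.
        return connected;
    }

    void connect(String remoteAddress) {
        boolean wasActive = isActive();   // capture state before connecting
        connected = true;                 // stands in for the native connect call
        if (!wasActive && isActive()) {
            // Equivalent of pipeline().fireChannelActive(): fired exactly once,
            // on the transition from inactive to active.
            firedEvents.add("channelActive");
        }
    }
}
```

Connecting an already-connected sketch leaves the event list unchanged, which is the double-fire protection the real fix also provides.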
| [
"transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollDatagramChannel.java"
] | [
"transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollDatagramChannel.java"
] | [] | diff --git a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollDatagramChannel.java b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollDatagramChannel.java
index f3809865c08..53465608bf0 100644
--- a/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollDatagramChannel.java
+++ b/transport-native-epoll/src/main/java/io/netty/channel/epoll/EpollDatagramChannel.java
@@ -483,6 +483,7 @@ public void connect(SocketAddress remote, SocketAddress local, ChannelPromise ch
boolean success = false;
try {
try {
+ boolean wasActive = isActive();
InetSocketAddress remoteAddress = (InetSocketAddress) remote;
if (local != null) {
InetSocketAddress localAddress = (InetSocketAddress) local;
@@ -493,6 +494,12 @@ public void connect(SocketAddress remote, SocketAddress local, ChannelPromise ch
EpollDatagramChannel.this.remote = remoteAddress;
EpollDatagramChannel.this.local = Native.localAddress(fd().intValue());
success = true;
+
+ // Regardless if the connection attempt was cancelled, channelActive() event should be triggered,
+ // because what happened is what happened.
+ if (!wasActive && isActive()) {
+ pipeline().fireChannelActive();
+ }
} finally {
if (!success) {
doClose();
| null | val | train | 2015-02-27T20:58:03 | 2014-10-28T17:17:10Z | shevek | val |
netty/netty/3488_3489 | netty/netty | netty/netty/3488 | netty/netty/3489 | [
"timestamp(timedelta=66.0, similarity=0.9479850702428534)"
] | 08b1438e7b2b7e129eb0bd2245af6a13dc7798cd | 62be2bca838c6d1e59bfab08b583ca3a70f03842 | [
"+1. Thanks @buchgr for taking this one on!\n"
] | [
"add `final`?\n",
"I removed it because, what's the point of a private final class? Nobody can extend it anyway.\n",
"I think the package private access _maybe_ for 2 reasons:\n1. Used in tests?\n2. Avoid synthetic method generation?\n\nSo I'm not sure we need the package private access but if we could make mor... | 2015-03-11T22:48:37Z | [
"improvement"
] | Remove Frame class from DefaultHttp2RemoteFlowController. | The Frame now just contains the payload and doesn't maintain any other state. To reduce GC pressure, we should probably discard this class and move its methods to the FlowState.
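The refactor's core idea can be sketched in a few lines (simplified names; this is a sketch of the pattern, not the flow controller itself): the byte accounting that the Frame wrapper's constructor performed moves into the flow state's enqueue method, so no wrapper object is allocated per queued write.

```java
import java.util.ArrayDeque;
import java.util.Deque;

class FlowStateSketch {
    // Simplified stand-in for the flow controller's FlowControlled payload.
    interface FlowControlled {
        int size();
    }

    private final Deque<FlowControlled> pendingWriteQueue = new ArrayDeque<>();
    private int pendingBytes;

    // Replaces "pendingWriteQueue.offer(new Frame(payload))": the accounting
    // the Frame constructor used to do now happens inline, eliminating one
    // allocation per write.
    void enqueueFrame(FlowControlled frame) {
        pendingBytes += frame.size();
        pendingWriteQueue.offer(frame);
    }

    int pendingBytes() {
        return pendingBytes;
    }

    FlowControlled peek() {
        return pendingWriteQueue.peek();
    }
}
```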
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java"
] | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java"
] | [] | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java
index 3cd72143f4d..62420691f51 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java
@@ -166,10 +166,9 @@ public void incrementWindowSize(ChannelHandlerContext ctx, Http2Stream stream, i
}
@Override
- public void sendFlowControlled(ChannelHandlerContext ctx, Http2Stream stream,
- FlowControlled payload) {
+ public void sendFlowControlled(ChannelHandlerContext ctx, Http2Stream stream, FlowControlled frame) {
checkNotNull(ctx, "ctx");
- checkNotNull(payload, "payload");
+ checkNotNull(frame, "frame");
if (this.ctx != null && this.ctx != ctx) {
throw new IllegalArgumentException("Writing data from multiple ChannelHandlerContexts is not supported");
}
@@ -178,16 +177,16 @@ public void sendFlowControlled(ChannelHandlerContext ctx, Http2Stream stream,
FlowState state;
try {
state = state(stream);
- state.newFrame(payload);
+ state.enqueueFrame(frame);
} catch (Throwable t) {
- payload.error(t);
+ frame.error(t);
return;
}
state.writeBytes(state.writableWindow());
try {
flush();
} catch (Throwable t) {
- payload.error(t);
+ frame.error(t);
}
}
@@ -335,8 +334,8 @@ private static void writeChildNode(FlowState state) {
/**
* The outbound flow control state for a single stream.
*/
- final class FlowState {
- private final Deque<Frame> pendingWriteQueue;
+ private final class FlowState {
+ private final Deque<FlowControlled> pendingWriteQueue;
private final Http2Stream stream;
private int window;
private int pendingBytes;
@@ -350,7 +349,7 @@ final class FlowState {
FlowState(Http2Stream stream, int initialWindowSize) {
this.stream = stream;
window(initialWindowSize);
- pendingWriteQueue = new ArrayDeque<Frame>(2);
+ pendingWriteQueue = new ArrayDeque<FlowControlled>(2);
}
int window() {
@@ -424,13 +423,12 @@ int streamableBytesForTree() {
}
/**
- * Creates a new payload with the given values and immediately enqueues it.
+ * Adds the {@code frame} to the pending queue and increments the pending
+ * byte count.
*/
- Frame newFrame(FlowControlled payload) {
- // Store this as the future for the most recent write attempt.
- Frame frame = new Frame(payload);
+ void enqueueFrame(FlowControlled frame) {
+ incrementPendingBytes(frame.size());
pendingWriteQueue.offer(frame);
- return frame;
}
/**
@@ -443,7 +441,7 @@ boolean hasFrame() {
/**
* Returns the the head of the pending queue, or {@code null} if empty.
*/
- Frame peek() {
+ FlowControlled peek() {
return pendingWriteQueue.peek();
}
@@ -458,12 +456,12 @@ void cancel() {
return;
}
for (;;) {
- Frame frame = pendingWriteQueue.poll();
+ FlowControlled frame = pendingWriteQueue.poll();
if (frame == null) {
break;
}
- frame.writeError(streamError(stream.id(), INTERNAL_ERROR,
- "Stream closed before write could take place"));
+ writeError(frame, streamError(stream.id(), INTERNAL_ERROR,
+ "Stream closed before write could take place"));
}
}
@@ -476,7 +474,7 @@ int writeBytes(int bytes) {
int bytesAttempted = 0;
while (hasFrame()) {
int maxBytes = min(bytes - bytesAttempted, writableWindow());
- bytesAttempted += peek().write(maxBytes);
+ bytesAttempted += write(peek(), maxBytes);
if (bytes - bytesAttempted <= 0) {
break;
}
@@ -484,6 +482,50 @@ int writeBytes(int bytes) {
return bytesAttempted;
}
+ /**
+ * Writes the frame and decrements the stream and connection window sizes. If the frame is in the pending
+ * queue, the written bytes are removed from this branch of the priority tree.
+ * <p>
+ * Note: this does not flush the {@link ChannelHandlerContext}.
+ * </p>
+ */
+ int write(FlowControlled frame, int allowedBytes) {
+ int before = frame.size();
+ int writtenBytes = 0;
+ try {
+ assert !writing;
+
+ // Write the portion of the frame.
+ writing = true;
+ needFlush |= frame.write(Math.max(0, allowedBytes));
+ if (!cancelled && frame.size() == 0) {
+ // This frame has been fully written, remove this frame
+ // and notify it. Since we remove this frame
+ // first, we're guaranteed that its error method will not
+ // be called when we call cancel.
+ pendingWriteQueue.remove();
+ frame.writeComplete();
+ }
+ } catch (Throwable e) {
+ // Mark the state as cancelled, we'll clear the pending queue
+ // via cancel() below.
+ cancelled = true;
+ } finally {
+ writing = false;
+ // Make sure we always decrement the flow control windows
+ // by the bytes written.
+ writtenBytes = before - frame.size();
+ decrementFlowControlWindow(writtenBytes);
+ decrementPendingBytes(writtenBytes);
+ // If a cancellation occurred while writing, call cancel again to
+ // clear and error all of the pending writes.
+ if (cancelled) {
+ cancel();
+ }
+ }
+ return writtenBytes;
+ }
+
/**
* Recursively increments the streamable bytes for this branch in the priority tree starting at the current
* node.
@@ -496,103 +538,48 @@ void incrementStreamableBytesForTree(int numBytes) {
}
/**
- * A wrapper class around the content of a data frame.
+ * Increments the number of pending bytes for this node. If there was any change to the number of bytes that
+ * fit into the stream window, then {@link #incrementStreamableBytesForTree} is called to recursively update
+ * this branch of the priority tree.
*/
- private final class Frame {
- final FlowControlled payload;
-
- Frame(FlowControlled payload) {
- this.payload = payload;
- // Increment the number of pending bytes for this stream.
- incrementPendingBytes(payload.size());
- }
+ void incrementPendingBytes(int numBytes) {
+ int previouslyStreamable = streamableBytes();
+ pendingBytes += numBytes;
- /**
- * Increments the number of pending bytes for this node. If there was any change to the number of bytes that
- * fit into the stream window, then {@link #incrementStreamableBytesForTree} to recursively update this
- * branch of the priority tree.
- */
- private void incrementPendingBytes(int numBytes) {
- int previouslyStreamable = streamableBytes();
- pendingBytes += numBytes;
-
- int delta = streamableBytes() - previouslyStreamable;
- if (delta != 0) {
- incrementStreamableBytesForTree(delta);
- }
- }
-
- /**
- * Writes the frame and decrements the stream and connection window sizes. If the frame is in the pending
- * queue, the written bytes are removed from this branch of the priority tree.
- * <p>
- * Note: this does not flush the {@link ChannelHandlerContext}.
- */
- int write(int allowedBytes) {
- int before = payload.size();
- int writtenBytes = 0;
- try {
- if (writing) {
- throw new IllegalStateException("write is not re-entrant");
- }
- // Write the portion of the frame.
- writing = true;
- needFlush |= payload.write(Math.max(0, allowedBytes));
- if (!cancelled && payload.size() == 0) {
- // This frame has been fully written, remove this frame
- // and notify the payload. Since we remove this frame
- // first, we're guaranteed that its error method will not
- // be called when we call cancel.
- pendingWriteQueue.remove();
- payload.writeComplete();
- }
- } catch (Throwable e) {
- // Mark the state as cancelled, we'll clear the pending queue
- // via cancel() below.
- cancelled = true;
- } finally {
- writing = false;
- // Make sure we always decrement the flow control windows
- // by the bytes written.
- writtenBytes = before - payload.size();
- decrementFlowControlWindow(writtenBytes);
- decrementPendingBytes(writtenBytes);
- // If a cancellation occurred while writing, call cancel again to
- // clear and error all of the pending writes.
- if (cancelled) {
- cancel();
- }
- }
- return writtenBytes;
+ int delta = streamableBytes() - previouslyStreamable;
+ if (delta != 0) {
+ incrementStreamableBytesForTree(delta);
}
+ }
- /**
- * Decrement the per stream and connection flow control window by {@code bytes}.
- */
- void decrementFlowControlWindow(int bytes) {
- try {
- connectionState().incrementStreamWindow(-bytes);
- incrementStreamWindow(-bytes);
- } catch (Http2Exception e) { // Should never get here since we're decrementing.
- throw new RuntimeException("Invalid window state when writing frame: " + e.getMessage(), e);
- }
- }
+ /**
+ * If this frame is in the pending queue, decrements the number of pending bytes for the stream.
+ */
+ void decrementPendingBytes(int bytes) {
+ incrementPendingBytes(-bytes);
+ }
- /**
- * Discards this frame, writing an error. If this frame is in the pending queue, the unwritten bytes are
- * removed from this branch of the priority tree.
- */
- void writeError(Http2Exception cause) {
- decrementPendingBytes(payload.size());
- payload.error(cause);
+ /**
+ * Decrement the per stream and connection flow control window by {@code bytes}.
+ */
+ void decrementFlowControlWindow(int bytes) {
+ try {
+ int negativeBytes = -bytes;
+ connectionState().incrementStreamWindow(negativeBytes);
+ incrementStreamWindow(negativeBytes);
+ } catch (Http2Exception e) {
+ // Should never get here since we're decrementing.
+ throw new IllegalStateException("Invalid window state when writing frame: " + e.getMessage(), e);
}
+ }
- /**
- * If this frame is in the pending queue, decrements the number of pending bytes for the stream.
- */
- void decrementPendingBytes(int bytes) {
- incrementPendingBytes(-bytes);
- }
+ /**
+ * Discards this {@link FlowControlled}, writing an error. If this frame is in the pending queue,
+ * the unwritten bytes are removed from this branch of the priority tree.
+ */
+ void writeError(FlowControlled frame, Http2Exception cause) {
+ decrementPendingBytes(frame.size());
+ frame.error(cause);
}
}
}
| null | train | train | 2015-03-13T18:37:58 | 2015-03-11T17:19:24Z | nmittler | val |
netty/netty/3469_3491 | netty/netty | netty/netty/3469 | netty/netty/3491 | [
"timestamp(timedelta=84.0, similarity=0.8501745656004406)"
] | 93bcc6bbdef53e9249802e1bba72aca06f90ad45 | fda0e4e45f621b69d39e4675af099c3e8dcbf69e | [
"+1\n",
"Closing this since we decided to keep the current naming for the time being (in #3491)\n"
] | [] | 2015-03-12T19:54:09Z | [] | Rename HTTP/2 Stream methods indicating writability | The remoteSideOpen() and localSideOpen() are a little confusing. It would be clearer to have isWritable and isReadable.
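Either naming reduces to the same predicates over the stream state. A sketch (illustrative only; the enum values follow the HTTP/2 stream state machine, not Netty's actual Http2Stream interface): a stream is readable while its remote side is still open, and writable while its local side is still open.

```java
class StreamStateSketch {
    enum State {
        IDLE, RESERVED_LOCAL, RESERVED_REMOTE, OPEN,
        HALF_CLOSED_LOCAL, HALF_CLOSED_REMOTE, CLOSED
    }

    // Remote side open: frames can still arrive, so the stream is readable.
    static boolean isReadable(State s) {
        return s == State.HALF_CLOSED_LOCAL || s == State.OPEN || s == State.RESERVED_REMOTE;
    }

    // Local side open: frames can still be sent, so the stream is writable.
    static boolean isWritable(State s) {
        return s == State.HALF_CLOSED_REMOTE || s == State.OPEN || s == State.RESERVED_LOCAL;
    }
}
```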
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java",
"codec-http2/src/main/java/io/netty/ha... | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java",
"codec-http2/src/main/java/io/netty/ha... | [
"codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoderTest.java",
"codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoderTest.java",
"codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionTest.java",
"codec-http2/src/test/java... | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
index bc3f6d8129a..7a306e49edc 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
@@ -404,7 +404,7 @@ public Http2Stream close() {
}
@Override
- public Http2Stream closeLocalSide() {
+ public Http2Stream closeForWriting() {
switch (state) {
case OPEN:
state = HALF_CLOSED_LOCAL;
@@ -420,7 +420,7 @@ public Http2Stream closeLocalSide() {
}
@Override
- public Http2Stream closeRemoteSide() {
+ public Http2Stream closeForReading() {
switch (state) {
case OPEN:
state = HALF_CLOSED_REMOTE;
@@ -442,12 +442,12 @@ private void notifyHalfClosed(Http2Stream stream) {
}
@Override
- public final boolean remoteSideOpen() {
+ public final boolean isReadable() {
return state == HALF_CLOSED_LOCAL || state == OPEN || state == RESERVED_REMOTE;
}
@Override
- public final boolean localSideOpen() {
+ public final boolean isWritable() {
return state == HALF_CLOSED_REMOTE || state == OPEN || state == RESERVED_LOCAL;
}
@@ -671,12 +671,12 @@ public Http2Stream close() {
}
@Override
- public Http2Stream closeLocalSide() {
+ public Http2Stream closeForWriting() {
throw new UnsupportedOperationException();
}
@Override
- public Http2Stream closeRemoteSide() {
+ public Http2Stream closeForReading() {
throw new UnsupportedOperationException();
}
}
@@ -759,7 +759,7 @@ public DefaultStream reservePushStream(int streamId, Http2Stream parent) throws
if (parent == null) {
throw connectionError(PROTOCOL_ERROR, "Parent stream missing");
}
- if (isLocal() ? !parent.localSideOpen() : !parent.remoteSideOpen()) {
+ if (isLocal() ? !parent.isWritable() : !parent.isReadable()) {
throw connectionError(PROTOCOL_ERROR, "Stream %d is not open for sending push promise", parent.id());
}
if (!opposite().allowPushTo()) {
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java
index f2406b2ec15..a92f5e0098b 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java
@@ -289,7 +289,7 @@ public int onDataRead(final ChannelHandlerContext ctx, int streamId, ByteBuf dat
}
if (endOfStream) {
- lifecycleManager.closeRemoteSide(stream, ctx.newSucceededFuture());
+ lifecycleManager.closeForReading(stream, ctx.newSucceededFuture());
}
}
}
@@ -352,7 +352,7 @@ public void onHeadersRead(ChannelHandlerContext ctx, int streamId, Http2Headers
// If the headers completes this stream, close it.
if (endOfStream) {
- lifecycleManager.closeRemoteSide(stream, ctx.newSucceededFuture());
+ lifecycleManager.closeForReading(stream, ctx.newSucceededFuture());
}
}
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java
index 313fab02f18..48b43fafa36 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java
@@ -536,7 +536,7 @@ public FlowControlledBase(final ChannelHandlerContext ctx, final Http2Stream str
@Override
public void writeComplete() {
if (endOfStream) {
- lifecycleManager.closeLocalSide(stream, promise);
+ lifecycleManager.closeForWriting(stream, promise);
}
}
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java
index f5fc5dc6fd6..16e34867d1d 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java
@@ -208,19 +208,12 @@ public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws E
}
}
- /**
- * Closes the local side of the given stream. If this causes the stream to be closed, adds a
- * hook to close the channel after the given future completes.
- *
- * @param stream the stream to be half closed.
- * @param future If closing, the future after which to close the channel.
- */
@Override
- public void closeLocalSide(Http2Stream stream, ChannelFuture future) {
+ public void closeForWriting(Http2Stream stream, ChannelFuture future) {
switch (stream.state()) {
case HALF_CLOSED_LOCAL:
case OPEN:
- stream.closeLocalSide();
+ stream.closeForWriting();
break;
default:
closeStream(stream, future);
@@ -228,19 +221,12 @@ public void closeLocalSide(Http2Stream stream, ChannelFuture future) {
}
}
- /**
- * Closes the remote side of the given stream. If this causes the stream to be closed, adds a
- * hook to close the channel after the given future completes.
- *
- * @param stream the stream to be half closed.
- * @param future If closing, the future after which to close the channel.
- */
@Override
- public void closeRemoteSide(Http2Stream stream, ChannelFuture future) {
+ public void closeForReading(Http2Stream stream, ChannelFuture future) {
switch (stream.state()) {
case HALF_CLOSED_REMOTE:
case OPEN:
- stream.closeRemoteSide();
+ stream.closeForReading();
break;
default:
closeStream(stream, future);
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2LifecycleManager.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2LifecycleManager.java
index 082d734ef4d..ec13ec099ca 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2LifecycleManager.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2LifecycleManager.java
@@ -26,22 +26,22 @@
public interface Http2LifecycleManager {
/**
- * Closes the local side of the given stream. If this causes the stream to be closed, adds a
+ * Closes the given stream for writing (i.e. closes the local side). If this causes the stream to be closed, adds a
* hook to deactivate the stream and close the channel after the given future completes.
*
* @param stream the stream to be half closed.
* @param future If closing, the future after which to close the channel.
*/
- void closeLocalSide(Http2Stream stream, ChannelFuture future);
+ void closeForWriting(Http2Stream stream, ChannelFuture future);
/**
- * Closes the remote side of the given stream. If this causes the stream to be closed, adds a
+ * Closes the given stream for reading (i.e. closes the remote side). If this causes the stream to be closed, adds a
* hook to deactivate the stream and close the channel after the given future completes.
*
* @param stream the stream to be half closed.
* @param future If closing, the future after which to close the channel.
*/
- void closeRemoteSide(Http2Stream stream, ChannelFuture future);
+ void closeForReading(Http2Stream stream, ChannelFuture future);
/**
* Closes the given stream and adds a hook to deactivate the stream and close the channel after
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Stream.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Stream.java
index 676213d341c..55df4154f06 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Stream.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Stream.java
@@ -65,16 +65,14 @@ enum State {
Http2Stream close();
/**
- * Closes the local side of this stream. If this makes the stream closed, the child is closed as
- * well.
+ * Closes this stream for writing (i.e. closes the local side).
*/
- Http2Stream closeLocalSide();
+ Http2Stream closeForWriting();
/**
- * Closes the remote side of this stream. If this makes the stream closed, the child is closed
- * as well.
+ * Closes this stream for reading (i.e. closes the remote side).
*/
- Http2Stream closeRemoteSide();
+ Http2Stream closeForReading();
/**
* Indicates whether a {@code RST_STREAM} frame has been sent from the local endpoint for this stream.
@@ -88,16 +86,16 @@ enum State {
Http2Stream resetSent();
/**
- * Indicates whether the remote side of this stream is open (i.e. the state is either
- * {@link State#OPEN} or {@link State#HALF_CLOSED_LOCAL}).
+ * Indicates whether the remote side of this stream is open, allowing the stream to be readable (i.e.
+ * the state is either {@link State#OPEN} or {@link State#HALF_CLOSED_LOCAL}).
*/
- boolean remoteSideOpen();
+ boolean isReadable();
/**
- * Indicates whether the local side of this stream is open (i.e. the state is either
- * {@link State#OPEN} or {@link State#HALF_CLOSED_REMOTE}).
+ * Indicates whether the local side of this stream is open, allowing the stream to be written (i.e.
+ * the state is either {@link State#OPEN} or {@link State#HALF_CLOSED_REMOTE}).
*/
- boolean localSideOpen();
+ boolean isWritable();
/**
* Associates the application-defined data with this stream.
| diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoderTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoderTest.java
index 0737811c378..fd0a525b282 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoderTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoderTest.java
@@ -253,7 +253,7 @@ public void dataReadWithEndOfStreamShouldCloseRemoteSide() throws Exception {
try {
decode().onDataRead(ctx, STREAM_ID, data, 10, true);
verify(localFlow).receiveFlowControlledFrame(eq(ctx), eq(stream), eq(data), eq(10), eq(true));
- verify(lifecycleManager).closeRemoteSide(eq(stream), eq(future));
+ verify(lifecycleManager).closeForReading(eq(stream), eq(future));
verify(listener).onDataRead(eq(ctx), eq(STREAM_ID), eq(data), eq(10), eq(true));
} finally {
data.release();
@@ -296,7 +296,7 @@ public Integer answer(InvocationOnMock in) throws Throwable {
} catch (RuntimeException cause) {
verify(localFlow)
.receiveFlowControlledFrame(eq(ctx), eq(stream), eq(data), eq(padding), eq(true));
- verify(lifecycleManager).closeRemoteSide(eq(stream), eq(future));
+ verify(lifecycleManager).closeForReading(eq(stream), eq(future));
verify(listener).onDataRead(eq(ctx), eq(STREAM_ID), eq(data), eq(padding), eq(true));
assertEquals(0, localFlow.unconsumedBytes(stream));
} finally {
@@ -353,7 +353,7 @@ public void headersReadForPromisedStreamShouldCloseStream() throws Exception {
when(stream.state()).thenReturn(RESERVED_REMOTE);
decode().onHeadersRead(ctx, STREAM_ID, EmptyHttp2Headers.INSTANCE, 0, true);
verify(stream).open(true);
- verify(lifecycleManager).closeRemoteSide(eq(stream), eq(future));
+ verify(lifecycleManager).closeForReading(eq(stream), eq(future));
verify(listener).onHeadersRead(eq(ctx), eq(STREAM_ID), eq(EmptyHttp2Headers.INSTANCE), eq(0),
eq(DEFAULT_PRIORITY_WEIGHT), eq(false), eq(0), eq(true));
}
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoderTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoderTest.java
index 45ccc991208..bd267986c03 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoderTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoderTest.java
@@ -36,7 +36,6 @@
import static org.mockito.Mockito.doAnswer;
import static org.mockito.Mockito.doNothing;
import static org.mockito.Mockito.never;
-import static org.mockito.Mockito.reset;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;
@@ -351,7 +350,7 @@ public void headersWriteShouldOpenStreamForPush() throws Exception {
when(stream.state()).thenReturn(RESERVED_LOCAL);
encoder.writeHeaders(ctx, STREAM_ID, EmptyHttp2Headers.INSTANCE, 0, false, promise);
verify(stream).open(false);
- verify(stream, never()).closeLocalSide();
+ verify(stream, never()).closeForWriting();
assertNotNull(payloadCaptor.getValue());
payloadCaptor.getValue().write(0);
verify(writer).writeHeaders(eq(ctx), eq(STREAM_ID), eq(EmptyHttp2Headers.INSTANCE), eq(0),
@@ -442,7 +441,7 @@ public void dataWriteShouldCreateHalfClosedStream() {
ByteBuf data = dummyData();
encoder.writeData(ctx, STREAM_ID, data.retain(), 0, true, promise);
verify(remoteFlow).sendFlowControlled(eq(ctx), eq(stream), any(FlowControlled.class));
- verify(lifecycleManager).closeLocalSide(stream, promise);
+ verify(lifecycleManager).closeForWriting(stream, promise);
assertEquals(data.toString(UTF_8), writtenData.get(0));
data.release();
}
@@ -464,7 +463,7 @@ public void headersWriteShouldHalfCloseStream() throws Exception {
// Trigger the write and mark the promise successful to trigger listeners
payloadCaptor.getValue().write(0);
promise.trySuccess();
- verify(lifecycleManager).closeLocalSide(eq(stream), eq(promise));
+ verify(lifecycleManager).closeForWriting(eq(stream), eq(promise));
}
@Test
@@ -479,7 +478,7 @@ public void headersWriteShouldHalfClosePushStream() throws Exception {
verify(stream).open(true);
promise.trySuccess();
- verify(lifecycleManager).closeLocalSide(eq(stream), eq(promise));
+ verify(lifecycleManager).closeForWriting(eq(stream), eq(promise));
}
private void mockSendFlowControlledWriteEverything() {
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionTest.java
index 5e394104b48..49a651b589e 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionTest.java
@@ -191,7 +191,7 @@ public void closeShouldSucceed() throws Http2Exception {
@Test
public void closeLocalWhenOpenShouldSucceed() throws Http2Exception {
Http2Stream stream = server.remote().createStream(3).open(false);
- stream.closeLocalSide();
+ stream.closeForWriting();
assertEquals(State.HALF_CLOSED_LOCAL, stream.state());
assertEquals(1, server.activeStreams().size());
}
@@ -199,7 +199,7 @@ public void closeLocalWhenOpenShouldSucceed() throws Http2Exception {
@Test
public void closeRemoteWhenOpenShouldSucceed() throws Http2Exception {
Http2Stream stream = server.remote().createStream(3).open(false);
- stream.closeRemoteSide();
+ stream.closeForReading();
assertEquals(State.HALF_CLOSED_REMOTE, stream.state());
assertEquals(1, server.activeStreams().size());
}
@@ -207,7 +207,7 @@ public void closeRemoteWhenOpenShouldSucceed() throws Http2Exception {
@Test
public void closeOnlyOpenSideShouldClose() throws Http2Exception {
Http2Stream stream = server.remote().createStream(3).open(true);
- stream.closeLocalSide();
+ stream.closeForWriting();
assertEquals(State.CLOSED, stream.state());
assertTrue(server.activeStreams().isEmpty());
}
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowControllerTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowControllerTest.java
index d508e72893a..7abc89ffcea 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowControllerTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowControllerTest.java
@@ -35,7 +35,6 @@
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelPromise;
import io.netty.handler.codec.http2.Http2FrameWriter.Configuration;
-import io.netty.handler.codec.http2.Http2RemoteFlowController.FlowControlled;
import io.netty.util.collection.IntObjectHashMap;
import io.netty.util.collection.IntObjectMap;
@@ -1001,7 +1000,7 @@ public void flowControlledWriteThrowsAnException() throws Exception {
final Http2Stream stream = stream(STREAM_A);
doAnswer(new Answer<Void>() {
public Void answer(InvocationOnMock invocationOnMock) {
- stream.closeLocalSide();
+ stream.closeForWriting();
return null;
}
}).when(flowControlled).error(any(Throwable.class));
| test | train | 2015-03-11T03:21:58 | 2015-03-06T19:02:55Z | nmittler | val |
netty/netty/3451_3498 | netty/netty | netty/netty/3451 | netty/netty/3498 | [
"timestamp(timedelta=75.0, similarity=0.91124787193252)"
] | 6e894c411b0df52618791cedc6c0a2aa29e6b5fb | b61a83b07ac54c79544e5512fbc60c2934ee92f0 | [
"@nmittler - Do you have any suggestions to improve clarity? By `direction` do you mean local/remote? The only thing that may be tough is the Endpoint interface is agnostic to direction due to it serving as both remote/local.\n",
"@Scottmitch I haven't put much thought into it yet :). For push, we do clarify di... | [
"I think we have a naming disconnect between the `activate`/`deactivate` on the stream and `concurrent` on the endpoint. For example the stream still has `numActiveStreams()` and `activeStreams()` which is logically the same thing as the endpoint's `numConcurrentStreams()` but with a different name. Can make this ... | 2015-03-15T01:42:10Z | [
"improvement"
] | Clean up HTTP/2 MAX_CONCURRENT_STREAMS logic | There are several methods on `Http2Connection.Endpoint` that involve this parameter:
- `acceptingNewStreams`
- `createStream`
- `reservePushStream`
- `maxStreams` (getter and setter)
It is not always clear which direction the value applies to. It's probably worth a pass to:
1) Make sure we're actually using it correctly in all places.
2) See if we can make the interface clearer as to the directionality of the parameter.
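To illustrate the ambiguity, here is a standalone sketch (not Netty's actual classes; the names mirror the methods listed above for illustration only) of the per-endpoint bookkeeping these methods boil down to. The key point is that the limit applies to streams *created by this endpoint*, as restricted by the peer's `SETTINGS_MAX_CONCURRENT_STREAMS`:

```java
// Hypothetical, simplified model of one HTTP/2 endpoint's stream bookkeeping.
// This is NOT Netty's implementation; it only sketches the semantics in question.
final class EndpointSketch {
    private int nextStreamId;                   // next ID this endpoint may use (odd=client, even=server)
    private int numActiveStreams;               // streams created by this endpoint that are still active
    private int maxStreams = Integer.MAX_VALUE; // peer's SETTINGS_MAX_CONCURRENT_STREAMS

    EndpointSketch(boolean server) {
        nextStreamId = server ? 2 : 1;
    }

    // Direction: the value sent by the *opposite* endpoint limits how many
    // streams *this* endpoint may have active at once.
    void maxStreams(int maxStreams) {
        this.maxStreams = maxStreams;
    }

    boolean acceptingNewStreams() {
        // false once IDs are exhausted (overflow) or the limit is reached
        return nextStreamId > 0 && numActiveStreams + 1 <= maxStreams;
    }

    int createStream() {
        if (!acceptingNewStreams()) {
            throw new IllegalStateException("Maximum streams exceeded for this endpoint.");
        }
        int id = nextStreamId;
        nextStreamId += 2; // IDs created by one endpoint always step by 2
        numActiveStreams++;
        return id;
    }

    void closeStream() {
        numActiveStreams--;
    }
}
```

For example, a client endpoint with `maxStreams(1)` can create stream 1, is then no longer accepting new streams until that stream closes, and its next ID is 3. Naming the getter/setter after "active streams created by this endpoint" (rather than a bare `maxStreams`) would make that directionality explicit.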
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java",
"codec-http2/src/main/java/io/netty/ha... | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java",
"codec-http2/src/main/java/io/netty/ha... | [
"codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoderTest.java",
"codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoderTest.java",
"codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionTest.java",
"codec-http2/src/test/java... | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
index bc3f6d8129a..f87e098d27a 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
@@ -139,11 +139,6 @@ public Set<Http2Stream> activeStreams() {
return Collections.unmodifiableSet(activeStreams);
}
- @Override
- public void deactivate(Http2Stream stream) {
- deactivateInternal((DefaultStream) stream);
- }
-
@Override
public Endpoint<Http2LocalFlowController> local() {
return localEndpoint;
@@ -159,16 +154,6 @@ public boolean isGoAway() {
return goAwaySent() || goAwayReceived();
}
- @Override
- public Http2Stream createLocalStream(int streamId) throws Http2Exception {
- return local().createStream(streamId);
- }
-
- @Override
- public Http2Stream createRemoteStream(int streamId) throws Http2Exception {
- return remote().createStream(streamId);
- }
-
@Override
public boolean goAwayReceived() {
return localEndpoint.lastKnownStream >= 0;
@@ -200,33 +185,6 @@ private void removeStream(DefaultStream stream) {
stream.parent().removeChild(stream);
}
- private void activateInternal(DefaultStream stream) {
- if (activeStreams.add(stream)) {
- // Update the number of active streams initiated by the endpoint.
- stream.createdBy().numActiveStreams++;
-
- // Notify the listeners.
- for (Listener listener : listeners) {
- listener.streamActive(stream);
- }
- }
- }
-
- private void deactivateInternal(DefaultStream stream) {
- if (activeStreams.remove(stream)) {
- // Update the number of active streams initiated by the endpoint.
- stream.createdBy().numActiveStreams--;
-
- // Notify the listeners.
- for (Listener listener : listeners) {
- listener.streamInactive(stream);
- }
-
- // Mark this stream for removal.
- removalPolicy.markForRemoval(stream);
- }
- }
-
/**
* Simple stream implementation. Streams can be compared to each other by priority.
*/
@@ -388,7 +346,15 @@ public Http2Stream open(boolean halfClosed) throws Http2Exception {
throw streamError(id, PROTOCOL_ERROR, "Attempting to open a stream in an invalid state: " + state);
}
- activateInternal(this);
+ if (activeStreams.add(this)) {
+ // Update the number of active streams initiated by the endpoint.
+ createdBy().numActiveStreams++;
+
+ // Notify the listeners.
+ for (Listener listener : listeners) {
+ listener.streamActive(this);
+ }
+ }
return this;
}
@@ -399,7 +365,20 @@ public Http2Stream close() {
}
state = CLOSED;
- deactivateInternal(this);
+ if (activeStreams.remove(this)) {
+ try {
+ // Update the number of active streams initiated by the endpoint.
+ createdBy().numActiveStreams--;
+
+ // Notify the listeners.
+ for (Listener listener : listeners) {
+ listener.streamClosed(this);
+ }
+ } finally {
+ // Mark this stream for removal.
+ removalPolicy.markForRemoval(this);
+ }
+ }
return this;
}
@@ -691,16 +670,8 @@ private final class DefaultEndpoint<F extends Http2FlowController> implements En
private int lastKnownStream = -1;
private boolean pushToAllowed = true;
private F flowController;
-
- /**
- * The maximum number of active streams allowed to be created by this endpoint.
- */
- private int maxStreams;
-
- /**
- * The current number of active streams created by this endpoint.
- */
private int numActiveStreams;
+ private int maxActiveStreams;
DefaultEndpoint(boolean server) {
this.server = server;
@@ -713,7 +684,7 @@ private final class DefaultEndpoint<F extends Http2FlowController> implements En
// Push is disallowed by default for servers and allowed for clients.
pushToAllowed = !server;
- maxStreams = Integer.MAX_VALUE;
+ maxActiveStreams = Integer.MAX_VALUE;
}
@Override
@@ -730,8 +701,8 @@ public boolean createdStreamId(int streamId) {
}
@Override
- public boolean acceptingNewStreams() {
- return nextStreamId() > 0 && numActiveStreams + 1 <= maxStreams;
+ public boolean canCreateStream() {
+ return nextStreamId() > 0 && numActiveStreams + 1 <= maxActiveStreams;
}
@Override
@@ -813,13 +784,13 @@ public int numActiveStreams() {
}
@Override
- public int maxStreams() {
- return maxStreams;
+ public int maxActiveStreams() {
+ return maxActiveStreams;
}
@Override
- public void maxStreams(int maxStreams) {
- this.maxStreams = maxStreams;
+ public void maxActiveStreams(int maxActiveStreams) {
+ this.maxActiveStreams = maxActiveStreams;
}
@Override
@@ -866,7 +837,7 @@ private void checkNewStreamAllowed(int streamId) throws Http2Exception {
throw connectionError(PROTOCOL_ERROR, "Cannot create a stream since the connection is going away");
}
verifyStreamId(streamId);
- if (!acceptingNewStreams()) {
+ if (!canCreateStream()) {
throw connectionError(REFUSED_STREAM, "Maximum streams exceeded for this endpoint.");
}
}
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java
index f2406b2ec15..d96dd9e24f5 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java
@@ -143,7 +143,7 @@ public Http2Settings localSettings() {
Http2HeaderTable headerTable = config.headerTable();
Http2FrameSizePolicy frameSizePolicy = config.frameSizePolicy();
settings.initialWindowSize(flowController().initialWindowSize());
- settings.maxConcurrentStreams(connection.remote().maxStreams());
+ settings.maxConcurrentStreams(connection.remote().maxActiveStreams());
settings.headerTableSize(headerTable.maxHeaderTableSize());
settings.maxFrameSize(frameSizePolicy.maxFrameSize());
settings.maxHeaderListSize(headerTable.maxHeaderListSize());
@@ -170,7 +170,7 @@ public void localSettings(Http2Settings settings) throws Http2Exception {
Long maxConcurrentStreams = settings.maxConcurrentStreams();
if (maxConcurrentStreams != null) {
int value = (int) Math.min(maxConcurrentStreams, Integer.MAX_VALUE);
- connection.remote().maxStreams(value);
+ connection.remote().maxActiveStreams(value);
}
Long headerTableSize = settings.headerTableSize();
@@ -322,7 +322,7 @@ public void onHeadersRead(ChannelHandlerContext ctx, int streamId, Http2Headers
}
if (stream == null) {
- stream = connection.createRemoteStream(streamId).open(endOfStream);
+ stream = connection.remote().createStream(streamId).open(endOfStream);
} else {
switch (stream.state()) {
case RESERVED_REMOTE:
@@ -371,7 +371,7 @@ public void onPriorityRead(ChannelHandlerContext ctx, int streamId, int streamDe
if (stream == null) {
// PRIORITY frames always identify a stream. This means that if a PRIORITY frame is the
// first frame to be received for a stream that we must create the stream.
- stream = connection.createRemoteStream(streamId);
+ stream = connection.remote().createStream(streamId);
}
// This call will create a stream for streamDependency if necessary.
@@ -428,7 +428,7 @@ private void applyLocalSettings(Http2Settings settings) throws Http2Exception {
Long maxConcurrentStreams = settings.maxConcurrentStreams();
if (maxConcurrentStreams != null) {
int value = (int) Math.min(maxConcurrentStreams, Integer.MAX_VALUE);
- connection.remote().maxStreams(value);
+ connection.remote().maxActiveStreams(value);
}
Long headerTableSize = settings.headerTableSize();
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java
index ebb7a9ebabd..37b723a03bf 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java
@@ -121,7 +121,7 @@ public void remoteSettings(Http2Settings settings) throws Http2Exception {
Long maxConcurrentStreams = settings.maxConcurrentStreams();
if (maxConcurrentStreams != null) {
- connection.local().maxStreams((int) Math.min(maxConcurrentStreams, Integer.MAX_VALUE));
+ connection.local().maxActiveStreams((int) Math.min(maxConcurrentStreams, Integer.MAX_VALUE));
}
Long headerTableSize = settings.headerTableSize();
@@ -194,7 +194,7 @@ public ChannelFuture writeHeaders(final ChannelHandlerContext ctx, final int str
}
Http2Stream stream = connection.stream(streamId);
if (stream == null) {
- stream = connection.createLocalStream(streamId);
+ stream = connection.local().createStream(streamId);
}
switch (stream.state()) {
@@ -235,7 +235,7 @@ public ChannelFuture writePriority(ChannelHandlerContext ctx, int streamId, int
// Update the priority on this stream.
Http2Stream stream = connection.stream(streamId);
if (stream == null) {
- stream = connection.createLocalStream(streamId);
+ stream = connection.local().createStream(streamId);
}
stream.setPriority(streamDependency, weight, exclusive);
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java
index 62420691f51..767cf04678d 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowController.java
@@ -73,7 +73,7 @@ public void streamActive(Http2Stream stream) {
}
@Override
- public void streamInactive(Http2Stream stream) {
+ public void streamClosed(Http2Stream stream) {
// Any pending frames can never be written, cancel and
// write errors for any pending frames.
state(stream).cancel();
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Connection.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Connection.java
index a4207077368..008fe84c7fe 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Connection.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Connection.java
@@ -28,26 +28,26 @@ public interface Http2Connection {
interface Listener {
/**
* Notifies the listener that the given stream was added to the connection. This stream may
- * not yet be active (i.e. open/half-closed).
+ * not yet be active (i.e. {@code OPEN} or {@code HALF CLOSED}).
*/
void streamAdded(Http2Stream stream);
/**
- * Notifies the listener that the given stream was made active (i.e. open in at least one
- * direction).
+ * Notifies the listener that the given stream was made active (i.e. {@code OPEN} or {@code HALF CLOSED}).
*/
void streamActive(Http2Stream stream);
/**
- * Notifies the listener that the given stream is now half-closed. The stream can be
- * inspected to determine which side is closed.
+ * Notifies the listener that the given stream is now {@code HALF CLOSED}. The stream can be
+ * inspected to determine which side is {@code CLOSED}.
*/
void streamHalfClosed(Http2Stream stream);
/**
- * Notifies the listener that the given stream is now closed in both directions.
+ * Notifies the listener that the given stream is now {@code CLOSED} in both directions and will no longer
+ * be returned by {@link #activeStreams()}.
*/
- void streamInactive(Http2Stream stream);
+ void streamClosed(Http2Stream stream);
/**
* Notifies the listener that the given stream has now been removed from the connection and
@@ -106,18 +106,18 @@ interface Endpoint<F extends Http2FlowController> {
boolean createdStreamId(int streamId);
/**
- * Indicates whether or not this endpoint is currently accepting new streams. This will be
- * be false if {@link #numActiveStreams()} + 1 >= {@link #maxStreams()} or if the stream IDs
+ * Indicates whether or not this endpoint is currently allowed to create new streams. This will
+ * be false if {@link #numActiveStreams()} + 1 >= {@link #maxActiveStreams()} or if the stream IDs
* for this endpoint have been exhausted (i.e. {@link #nextStreamId()} < 0).
*/
- boolean acceptingNewStreams();
+ boolean canCreateStream();
/**
* Creates a stream initiated by this endpoint. This could fail for the following reasons:
* <ul>
* <li>The requested stream ID is not the next sequential ID for this endpoint.</li>
* <li>The stream already exists.</li>
- * <li>The number of concurrent streams is above the allowed threshold for this endpoint.</li>
+ * <li>{@link #canCreateStream()} is {@code false}.</li>
* <li>The connection is marked as going away.</li>
* </ul>
* <p>
@@ -135,7 +135,7 @@ interface Endpoint<F extends Http2FlowController> {
* <li>The requested stream ID is not the next sequential stream ID for this endpoint.</li>
* <li>The number of concurrent streams is above the allowed threshold for this endpoint.</li>
* <li>The connection is marked as going away.</li>
- * <li>The parent stream ID does not exist or is not open from the side sending the push
+ * <li>The parent stream ID does not exist or is not {@code OPEN} from the side sending the push
* promise.</li>
* <li>Could not set a valid priority for the new stream.</li>
* </ul>
@@ -162,19 +162,24 @@ interface Endpoint<F extends Http2FlowController> {
boolean allowPushTo();
/**
- * Gets the number of currently active streams that were created by this endpoint.
+ * Gets the number of active streams (i.e. {@code OPEN} or {@code HALF CLOSED}) that were created by this
+ * endpoint.
*/
int numActiveStreams();
/**
- * Gets the maximum number of concurrent streams allowed by this endpoint.
+ * Gets the maximum number of streams (created by this endpoint) that are allowed to be active at
+ * the same time. This is the {@code SETTINGS_MAX_CONCURRENT_STREAMS} value sent from the opposite endpoint to
+ * restrict stream creation by this endpoint.
*/
- int maxStreams();
+ int maxActiveStreams();
/**
- * Sets the maximum number of concurrent streams allowed by this endpoint.
+ * Sets the maximum number of streams (created by this endpoint) that are allowed to be active at once.
+ * This is the {@code SETTINGS_MAX_CONCURRENT_STREAMS} value sent from the opposite endpoint to
+ * restrict stream creation by this endpoint.
*/
- void maxStreams(int maxStreams);
+ void maxActiveStreams(int maxActiveStreams);
/**
* Gets the ID of the stream last successfully created by this endpoint.
@@ -231,26 +236,16 @@ interface Endpoint<F extends Http2FlowController> {
Http2Stream connectionStream();
/**
- * Gets the number of streams that actively in use. It is possible for a stream to be closed
- * but still be considered active (e.g. there is still pending data to be written).
+ * Gets the number of streams that are actively in use (i.e. {@code OPEN} or {@code HALF CLOSED}).
*/
int numActiveStreams();
/**
- * Gets all streams that are actively in use. The returned collection is
+ * Gets all streams that are actively in use (i.e. {@code OPEN} or {@code HALF CLOSED}). The returned collection is
* sorted by priority.
*/
Collection<Http2Stream> activeStreams();
- /**
- * Indicates that the given stream is no longer actively in use. If this stream was active,
- * after calling this method it will no longer appear in the list returned by
- * {@link #activeStreams()} and {@link #numActiveStreams()} will be decremented. In addition,
- * all listeners will be notified of this event via
- * {@link Listener#streamInactive(Http2Stream)}.
- */
- void deactivate(Http2Stream stream);
-
/**
* Indicates whether or not the local endpoint for this connection is the server.
*/
@@ -261,23 +256,11 @@ interface Endpoint<F extends Http2FlowController> {
*/
Endpoint<Http2LocalFlowController> local();
- /**
- * Creates a new stream initiated by the local endpoint
- * @see Endpoint#createStream(int)
- */
- Http2Stream createLocalStream(int streamId) throws Http2Exception;
-
/**
* Gets a view of this connection from the remote {@link Endpoint}.
*/
Endpoint<Http2RemoteFlowController> remote();
- /**
- * Creates a new stream initiated by the remote endpoint.
- * @see Endpoint#createStream(int)
- */
- Http2Stream createRemoteStream(int streamId) throws Http2Exception;
-
/**
* Indicates whether or not a {@code GOAWAY} was received from the remote endpoint.
*/
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionAdapter.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionAdapter.java
index 5ac64ee2e09..b6b878d6364 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionAdapter.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionAdapter.java
@@ -32,7 +32,7 @@ public void streamHalfClosed(Http2Stream stream) {
}
@Override
- public void streamInactive(Http2Stream stream) {
+ public void streamClosed(Http2Stream stream) {
}
@Override
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java
index f5fc5dc6fd6..1b4e974786e 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java
@@ -125,7 +125,7 @@ public void onHttpClientUpgrade() throws Http2Exception {
}
// Create a local stream used for the HTTP cleartext upgrade.
- connection().createLocalStream(HTTP_UPGRADE_STREAM_ID).open(true);
+ connection().local().createStream(HTTP_UPGRADE_STREAM_ID).open(true);
}
/**
@@ -144,7 +144,7 @@ public void onHttpServerUpgrade(Http2Settings settings) throws Http2Exception {
encoder.remoteSettings(settings);
// Create a stream in the half-closed state.
- connection().createRemoteStream(HTTP_UPGRADE_STREAM_ID).open(true);
+ connection().remote().createStream(HTTP_UPGRADE_STREAM_ID).open(true);
}
@Override
@@ -262,9 +262,6 @@ public void closeStream(final Http2Stream stream, ChannelFuture future) {
future.addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) throws Exception {
- // Deactivate this stream.
- connection().deactivate(stream);
-
// If this connection is closing and there are no longer any
// active streams, close after the current operation completes.
if (closeListener != null && connection().numActiveStreams() == 0) {
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2EventAdapter.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2EventAdapter.java
index 002b36817fc..30a3f9a18a2 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2EventAdapter.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2EventAdapter.java
@@ -96,7 +96,7 @@ public void streamHalfClosed(Http2Stream stream) {
}
@Override
- public void streamInactive(Http2Stream stream) {
+ public void streamClosed(Http2Stream stream) {
}
@Override
| diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoderTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoderTest.java
index 0737811c378..5f1fa909438 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoderTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoderTest.java
@@ -128,20 +128,6 @@ public void setup() throws Exception {
when(local.flowController()).thenReturn(localFlow);
when(encoder.flowController()).thenReturn(remoteFlow);
when(connection.remote()).thenReturn(remote);
- doAnswer(new Answer<Http2Stream>() {
- @Override
- public Http2Stream answer(InvocationOnMock invocation) throws Throwable {
- Object[] args = invocation.getArguments();
- return local.createStream((Integer) args[0]);
- }
- }).when(connection).createLocalStream(anyInt());
- doAnswer(new Answer<Http2Stream>() {
- @Override
- public Http2Stream answer(InvocationOnMock invocation) throws Throwable {
- Object[] args = invocation.getArguments();
- return remote.createStream((Integer) args[0]);
- }
- }).when(connection).createRemoteStream(anyInt());
when(local.createStream(eq(STREAM_ID))).thenReturn(stream);
when(local.reservePushStream(eq(PUSH_STREAM_ID), eq(stream))).thenReturn(pushStream);
when(remote.createStream(eq(STREAM_ID))).thenReturn(stream);
@@ -389,7 +375,7 @@ public void priorityReadShouldSucceed() throws Exception {
decode().onPriorityRead(ctx, STREAM_ID, 0, (short) 255, true);
verify(stream).setPriority(eq(0), eq((short) 255), eq(true));
verify(listener).onPriorityRead(eq(ctx), eq(STREAM_ID), eq(0), eq((short) 255), eq(true));
- verify(connection).createRemoteStream(STREAM_ID);
+ verify(remote).createStream(STREAM_ID);
verify(stream, never()).open(anyBoolean());
}
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoderTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoderTest.java
index 45ccc991208..b526f473dff 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoderTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoderTest.java
@@ -18,10 +18,10 @@
import static io.netty.handler.codec.http2.Http2CodecUtil.DEFAULT_PRIORITY_WEIGHT;
import static io.netty.handler.codec.http2.Http2CodecUtil.emptyPingBuf;
import static io.netty.handler.codec.http2.Http2Error.PROTOCOL_ERROR;
+import static io.netty.handler.codec.http2.Http2Stream.State.HALF_CLOSED_LOCAL;
import static io.netty.handler.codec.http2.Http2Stream.State.IDLE;
import static io.netty.handler.codec.http2.Http2Stream.State.OPEN;
import static io.netty.handler.codec.http2.Http2Stream.State.RESERVED_LOCAL;
-import static io.netty.handler.codec.http2.Http2Stream.State.HALF_CLOSED_LOCAL;
import static io.netty.util.CharsetUtil.UTF_8;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;
@@ -36,7 +36,6 @@
import static org.mockito.Mockito.doAnswer;
import static org.mockito.Mockito.doNothing;
import static org.mockito.Mockito.never;
-import static org.mockito.Mockito.reset;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;
@@ -51,7 +50,6 @@
import io.netty.channel.DefaultChannelPromise;
import io.netty.handler.codec.http2.Http2RemoteFlowController.FlowControlled;
import io.netty.util.concurrent.ImmediateEventExecutor;
-
import org.junit.Before;
import org.junit.Test;
import org.mockito.ArgumentCaptor;
@@ -142,20 +140,6 @@ public void setup() throws Exception {
when(writer.configuration()).thenReturn(writerConfig);
when(writerConfig.frameSizePolicy()).thenReturn(frameSizePolicy);
when(frameSizePolicy.maxFrameSize()).thenReturn(64);
- doAnswer(new Answer<Http2Stream>() {
- @Override
- public Http2Stream answer(InvocationOnMock invocation) throws Throwable {
- Object[] args = invocation.getArguments();
- return local.createStream((Integer) args[0]);
- }
- }).when(connection).createLocalStream(anyInt());
- doAnswer(new Answer<Http2Stream>() {
- @Override
- public Http2Stream answer(InvocationOnMock invocation) throws Throwable {
- Object[] args = invocation.getArguments();
- return remote.createStream((Integer) args[0]);
- }
- }).when(connection).createRemoteStream(anyInt());
when(local.createStream(eq(STREAM_ID))).thenReturn(stream);
when(local.reservePushStream(eq(PUSH_STREAM_ID), eq(stream))).thenReturn(pushStream);
when(remote.createStream(eq(STREAM_ID))).thenReturn(stream);
@@ -389,7 +373,7 @@ public void priorityWriteShouldSetPriorityForStream() throws Exception {
encoder.writePriority(ctx, STREAM_ID, 0, (short) 255, true, promise);
verify(stream).setPriority(eq(0), eq((short) 255), eq(true));
verify(writer).writePriority(eq(ctx), eq(STREAM_ID), eq(0), eq((short) 255), eq(true), eq(promise));
- verify(connection).createLocalStream(STREAM_ID);
+ verify(local).createStream(STREAM_ID);
verify(stream, never()).open(anyBoolean());
}
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionTest.java
index 5e394104b48..18e0ba98813 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionTest.java
@@ -163,7 +163,7 @@ public void newStreamNotForClientShouldThrow() throws Http2Exception {
@Test(expected = Http2Exception.class)
public void maxAllowedStreamsExceededShouldThrow() throws Http2Exception {
- server.local().maxStreams(0);
+ server.local().maxActiveStreams(0);
server.local().createStream(2).open(true);
}
@@ -214,12 +214,12 @@ public void closeOnlyOpenSideShouldClose() throws Http2Exception {
@Test(expected = Http2Exception.class)
public void localStreamInvalidStreamIdShouldThrow() throws Http2Exception {
- client.createLocalStream(Integer.MAX_VALUE + 2).open(false);
+ client.local().createStream(Integer.MAX_VALUE + 2).open(false);
}
@Test(expected = Http2Exception.class)
public void remoteStreamInvalidStreamIdShouldThrow() throws Http2Exception {
- client.createRemoteStream(Integer.MAX_VALUE + 1).open(false);
+ client.remote().createStream(Integer.MAX_VALUE + 1).open(false);
}
@Test
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionHandlerTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionHandlerTest.java
index 190c837391a..ea9844451cd 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionHandlerTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionHandlerTest.java
@@ -111,20 +111,6 @@ public void setup() throws Exception {
when(connection.local()).thenReturn(local);
when(connection.activeStreams()).thenReturn(Collections.singletonList(stream));
when(stream.open(anyBoolean())).thenReturn(stream);
- doAnswer(new Answer<Http2Stream>() {
- @Override
- public Http2Stream answer(InvocationOnMock invocation) throws Throwable {
- Object[] args = invocation.getArguments();
- return local.createStream((Integer) args[0]);
- }
- }).when(connection).createLocalStream(anyInt());
- doAnswer(new Answer<Http2Stream>() {
- @Override
- public Http2Stream answer(InvocationOnMock invocation) throws Throwable {
- Object[] args = invocation.getArguments();
- return remote.createStream((Integer) args[0]);
- }
- }).when(connection).createRemoteStream(anyInt());
when(encoder.writeSettings(eq(ctx), any(Http2Settings.class), eq(promise))).thenReturn(future);
when(ctx.alloc()).thenReturn(UnpooledByteBufAllocator.DEFAULT);
when(ctx.channel()).thenReturn(channel);
| train | train | 2015-03-14T17:02:14 | 2015-02-27T18:22:18Z | nmittler | val |
netty/netty/2512_3506 | netty/netty | netty/netty/2512 | netty/netty/3506 | [
"timestamp(timedelta=730.0, similarity=0.8429262120825501)"
] | 4f13dee454079ce99c16c060d56875d25cbfb658 | 79374bcfc1a592b35ad60e7f1759b0c6525aeb71 | [
"I see a similar problem also in the spdy example.\n",
"@trustin for this we will need to make the toInternLogger() method public... wdyt ?\n",
"@trustin @normanmaurer is there any action that I can take on this currently? And if so, will it be backward compatible with 4.1?\n",
"@nmittler The thing is `Intern... | [
"Could we maybe add a javadoc comment telling this is for internal usage only ?\n",
"Done.\n"
] | 2015-03-17T14:53:32Z | [
"defect"
] | Hide InternalLogger from Http2FrameLogger | InternalLogger is intended for internal use only. We should hide it from users.
- Instead of `InternalLogLevel`, we can use `io.netty.handler.logging.LogLevel`.
- Instead of accepting `InternalLogger`, we can accept `String` and `Class<?>` to create an `InternalLogger` in the constructor of `Http2FrameLogger`.
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2FrameLogger.java",
"example/src/main/java/io/netty/example/http2/client/Http2ClientInitializer.java",
"example/src/main/java/io/netty/example/http2/server/HelloWorldHttp2Handler.java",
"handler/src/main/java/io/netty/handler/logging/LogLevel.java"
] | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2FrameLogger.java",
"example/src/main/java/io/netty/example/http2/client/Http2ClientInitializer.java",
"example/src/main/java/io/netty/example/http2/server/HelloWorldHttp2Handler.java",
"handler/src/main/java/io/netty/handler/logging/LogLevel.java"
] | [] | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2FrameLogger.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2FrameLogger.java
index 4a9d0834534..81b6024cb18 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2FrameLogger.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2FrameLogger.java
@@ -16,9 +16,11 @@
package io.netty.handler.codec.http2;
import static io.netty.util.internal.ObjectUtil.checkNotNull;
+
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufUtil;
import io.netty.channel.ChannelHandlerAdapter;
+import io.netty.handler.logging.LogLevel;
import io.netty.util.internal.logging.InternalLogLevel;
import io.netty.util.internal.logging.InternalLogger;
import io.netty.util.internal.logging.InternalLoggerFactory;
@@ -37,11 +39,19 @@ public enum Direction {
private final InternalLogger logger;
private final InternalLogLevel level;
- public Http2FrameLogger(InternalLogLevel level) {
- this(level, InternalLoggerFactory.getInstance(Http2FrameLogger.class));
+ public Http2FrameLogger(LogLevel level) {
+ this(level.toInternalLevel(), InternalLoggerFactory.getInstance(Http2FrameLogger.class));
+ }
+
+ public Http2FrameLogger(LogLevel level, String name) {
+ this(level.toInternalLevel(), InternalLoggerFactory.getInstance(name));
+ }
+
+ public Http2FrameLogger(LogLevel level, Class<?> clazz) {
+ this(level.toInternalLevel(), InternalLoggerFactory.getInstance(clazz));
}
- public Http2FrameLogger(InternalLogLevel level, InternalLogger logger) {
+ private Http2FrameLogger(InternalLogLevel level, InternalLogger logger) {
this.level = checkNotNull(level, "level");
this.logger = checkNotNull(logger, "logger");
}
diff --git a/example/src/main/java/io/netty/example/http2/client/Http2ClientInitializer.java b/example/src/main/java/io/netty/example/http2/client/Http2ClientInitializer.java
index c7c35d1cb20..842ea7184b7 100644
--- a/example/src/main/java/io/netty/example/http2/client/Http2ClientInitializer.java
+++ b/example/src/main/java/io/netty/example/http2/client/Http2ClientInitializer.java
@@ -14,7 +14,8 @@
*/
package io.netty.example.http2.client;
-import static io.netty.util.internal.logging.InternalLogLevel.INFO;
+import static io.netty.handler.logging.LogLevel.INFO;
+
import io.netty.channel.ChannelHandlerAdapter;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInitializer;
@@ -39,14 +40,12 @@
import io.netty.handler.codec.http2.HttpToHttp2ConnectionHandler;
import io.netty.handler.codec.http2.InboundHttp2ToHttpAdapter;
import io.netty.handler.ssl.SslContext;
-import io.netty.util.internal.logging.InternalLoggerFactory;
/**
* Configures the client pipeline to support HTTP/2 frames.
*/
public class Http2ClientInitializer extends ChannelInitializer<SocketChannel> {
- private static final Http2FrameLogger logger =
- new Http2FrameLogger(INFO, InternalLoggerFactory.getInstance(Http2ClientInitializer.class));
+ private static final Http2FrameLogger logger = new Http2FrameLogger(INFO, Http2ClientInitializer.class);
private final SslContext sslCtx;
private final int maxContentLength;
diff --git a/example/src/main/java/io/netty/example/http2/server/HelloWorldHttp2Handler.java b/example/src/main/java/io/netty/example/http2/server/HelloWorldHttp2Handler.java
index caf8142f1cf..9c9129b03d1 100644
--- a/example/src/main/java/io/netty/example/http2/server/HelloWorldHttp2Handler.java
+++ b/example/src/main/java/io/netty/example/http2/server/HelloWorldHttp2Handler.java
@@ -19,7 +19,8 @@
import static io.netty.buffer.Unpooled.unreleasableBuffer;
import static io.netty.example.http2.Http2ExampleUtil.UPGRADE_RESPONSE_HEADER;
import static io.netty.handler.codec.http.HttpResponseStatus.OK;
-import static io.netty.util.internal.logging.InternalLogLevel.INFO;
+import static io.netty.handler.logging.LogLevel.INFO;
+
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.AsciiString;
@@ -40,15 +41,13 @@
import io.netty.handler.codec.http2.Http2InboundFrameLogger;
import io.netty.handler.codec.http2.Http2OutboundFrameLogger;
import io.netty.util.CharsetUtil;
-import io.netty.util.internal.logging.InternalLoggerFactory;
/**
* A simple handler that responds with the message "Hello World!".
*/
public class HelloWorldHttp2Handler extends Http2ConnectionHandler {
- private static final Http2FrameLogger logger = new Http2FrameLogger(INFO,
- InternalLoggerFactory.getInstance(HelloWorldHttp2Handler.class));
+ private static final Http2FrameLogger logger = new Http2FrameLogger(INFO, HelloWorldHttp2Handler.class);
static final ByteBuf RESPONSE_BYTES = unreleasableBuffer(copiedBuffer("Hello World", CharsetUtil.UTF_8));
public HelloWorldHttp2Handler() {
diff --git a/handler/src/main/java/io/netty/handler/logging/LogLevel.java b/handler/src/main/java/io/netty/handler/logging/LogLevel.java
index 688c21264c4..f05002238e2 100644
--- a/handler/src/main/java/io/netty/handler/logging/LogLevel.java
+++ b/handler/src/main/java/io/netty/handler/logging/LogLevel.java
@@ -34,11 +34,13 @@ public enum LogLevel {
}
/**
- * Converts the specified {@link LogLevel} to its {@link InternalLogLevel} variant.
+ * For internal use only.
+ *
+ * <p/>Converts the specified {@link LogLevel} to its {@link InternalLogLevel} variant.
*
* @return the converted level.
*/
- InternalLogLevel toInternalLevel() {
+ public InternalLogLevel toInternalLevel() {
return internalLevel;
}
}
| null | train | train | 2015-03-17T07:30:17 | 2014-05-23T03:33:11Z | trustin | val |
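The gold patch above hides the internal logger type behind public constructor overloads that accept a `String` or `Class<?>` plus a public log level. A minimal, self-contained sketch of that pattern (hypothetical class names only; this is not netty's actual API, and `InternalLogger` here merely stands in for netty's internal logging type):

```java
// Sketch of the "hide the internal type" constructor pattern from the record above.
class FrameLoggerSketch {
    // Stands in for the public io.netty.handler.logging.LogLevel enum.
    enum LogLevel { TRACE, DEBUG, INFO, WARN, ERROR }

    // Stands in for the internal logger; callers never construct or see it.
    static final class InternalLogger {
        final String name;
        InternalLogger(String name) { this.name = name; }
    }

    private final InternalLogger logger;
    private final LogLevel level;

    // Public entry points accept only public types...
    FrameLoggerSketch(LogLevel level) {
        this(level, new InternalLogger(FrameLoggerSketch.class.getName()));
    }

    FrameLoggerSketch(LogLevel level, String name) {
        this(level, new InternalLogger(name));
    }

    FrameLoggerSketch(LogLevel level, Class<?> clazz) {
        this(level, new InternalLogger(clazz.getName()));
    }

    // ...while the constructor taking the internal type stays private.
    private FrameLoggerSketch(LogLevel level, InternalLogger logger) {
        this.level = level;
        this.logger = logger;
    }

    String loggerName() { return logger.name; }
    LogLevel level() { return level; }
}
```

The design point is backward-compatibility-friendly API narrowing: users keep convenient construction paths, while the internal logger becomes an implementation detail that can change freely.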
netty/netty/3418_3524 | netty/netty | netty/netty/3418 | netty/netty/3524 | [
"timestamp(timedelta=126.0, similarity=0.9553155096245479)"
] | 2ebf07e62234d02804a6707957288f4292294f16 | 2bb4db2de769bd072f622b50a519f0298ad474af | [
"@nmittler @jpinner FYI.\n",
"HPACK-11 didn't last very long :) We now have draft 12.\n\nNote that the issue description and links have been updated.\n",
"@Scottmitch ha! ... thanks for keeping on top of this!\n",
"Will release hpack-1.0.0 once the spec gets published as an RFC. There will likely be minor AP... | [] | 2015-03-22T17:28:44Z | [
"feature"
] | HTTP/2 Draft 17 | New drafts are out. We should evaluate and make necessary updates.
https://tools.ietf.org/html/draft-ietf-httpbis-http2-17
~~https://tools.ietf.org/html/draft-ietf-httpbis-header-compression-11~~
https://tools.ietf.org/html/draft-ietf-httpbis-header-compression-12
We should coordinate with https://github.com/twitter/hpack if necessary.
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2HeadersDecoder.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java",
"codec-http2/src/main/java/io/netty/handler/codec... | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2HeadersDecoder.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java",
"codec-http2/src/main/java/io/netty/handler/codec... | [] | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java
index d96dd9e24f5..feff3c14bae 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java
@@ -19,6 +19,7 @@
import static io.netty.handler.codec.http2.Http2Error.STREAM_CLOSED;
import static io.netty.handler.codec.http2.Http2Exception.connectionError;
import static io.netty.handler.codec.http2.Http2Exception.streamError;
+import static io.netty.handler.codec.http2.Http2PromisedRequestVerifier.ALWAYS_VERIFY;
import static io.netty.handler.codec.http2.Http2Stream.State.CLOSED;
import static io.netty.util.internal.ObjectUtil.checkNotNull;
import io.netty.buffer.ByteBuf;
@@ -42,6 +43,7 @@ public class DefaultHttp2ConnectionDecoder implements Http2ConnectionDecoder {
private final Http2ConnectionEncoder encoder;
private final Http2FrameReader frameReader;
private final Http2FrameListener listener;
+ private final Http2PromisedRequestVerifier requestVerifier;
private boolean prefaceReceived;
/**
@@ -53,6 +55,7 @@ public static class Builder implements Http2ConnectionDecoder.Builder {
private Http2ConnectionEncoder encoder;
private Http2FrameReader frameReader;
private Http2FrameListener listener;
+ private Http2PromisedRequestVerifier requestVerifier = ALWAYS_VERIFY;
@Override
public Builder connection(Http2Connection connection) {
@@ -89,6 +92,12 @@ public Builder encoder(Http2ConnectionEncoder encoder) {
return this;
}
+ @Override
+ public Http2ConnectionDecoder.Builder requestVerifier(Http2PromisedRequestVerifier requestVerifier) {
+ this.requestVerifier = requestVerifier;
+ return this;
+ }
+
@Override
public Http2ConnectionDecoder build() {
return new DefaultHttp2ConnectionDecoder(this);
@@ -105,6 +114,7 @@ protected DefaultHttp2ConnectionDecoder(Builder builder) {
lifecycleManager = checkNotNull(builder.lifecycleManager, "lifecycleManager");
encoder = checkNotNull(builder.encoder, "encoder");
listener = checkNotNull(builder.listener, "listener");
+ requestVerifier = checkNotNull(builder.requestVerifier, "requestVerifier");
if (connection.local().flowController() == null) {
connection.local().flowController(
new DefaultHttp2LocalFlowController(connection, encoder.frameWriter()));
@@ -508,6 +518,22 @@ public void onPushPromiseRead(ChannelHandlerContext ctx, int streamId, int promi
parentStream.id(), parentStream.state());
}
+ if (!requestVerifier.isAuthoritative(ctx, headers)) {
+ throw streamError(promisedStreamId, PROTOCOL_ERROR,
+ "Promised request on stream %d for promised stream %d is not authoritative",
+ streamId, promisedStreamId);
+ }
+ if (!requestVerifier.isCacheable(headers)) {
+ throw streamError(promisedStreamId, PROTOCOL_ERROR,
+ "Promised request on stream %d for promised stream %d is not known to be cacheable",
+ streamId, promisedStreamId);
+ }
+ if (!requestVerifier.isSafe(headers)) {
+ throw streamError(promisedStreamId, PROTOCOL_ERROR,
+ "Promised request on stream %d for promised stream %d is not known to be safe",
+ streamId, promisedStreamId);
+ }
+
// Reserve the push stream based with a priority based on the current stream's priority.
connection.remote().reservePushStream(promisedStreamId, parentStream);
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2HeadersDecoder.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2HeadersDecoder.java
index 70f62dc4e1a..4f4a3b31dbc 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2HeadersDecoder.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2HeadersDecoder.java
@@ -86,7 +86,7 @@ public void addHeader(byte[] key, byte[] value, boolean sensitive) {
// Default handler for any other types of errors that may have occurred. For example,
// the the Header builder throws IllegalArgumentException if the key or value was invalid
// for any reason (e.g. the key was an invalid pseudo-header).
- throw connectionError(PROTOCOL_ERROR, e, e.getMessage());
+ throw connectionError(COMPRESSION_ERROR, e, e.getMessage());
} finally {
try {
in.close();
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java
index fc330620d95..308ac81ea6c 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java
@@ -37,8 +37,8 @@ public final class Http2CodecUtil {
public static final int CONNECTION_STREAM_ID = 0;
public static final int HTTP_UPGRADE_STREAM_ID = 1;
public static final String HTTP_UPGRADE_SETTINGS_HEADER = "HTTP2-Settings";
- public static final String HTTP_UPGRADE_PROTOCOL_NAME = "h2c-16";
- public static final String TLS_UPGRADE_PROTOCOL_NAME = "h2-16";
+ public static final String HTTP_UPGRADE_PROTOCOL_NAME = "h2c-17";
+ public static final String TLS_UPGRADE_PROTOCOL_NAME = "h2-17";
public static final int PING_FRAME_PAYLOAD_LENGTH = 8;
public static final short MAX_UNSIGNED_BYTE = 0xFF;
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionDecoder.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionDecoder.java
index 9741c50aa4d..ee6604ae7d3 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionDecoder.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionDecoder.java
@@ -32,7 +32,6 @@ public interface Http2ConnectionDecoder extends Closeable {
* Builder for new instances of {@link Http2ConnectionDecoder}.
*/
interface Builder {
-
/**
* Sets the {@link Http2Connection} to be used when building the decoder.
*/
@@ -63,6 +62,11 @@ interface Builder {
*/
Builder encoder(Http2ConnectionEncoder encoder);
+ /**
+ * Sets the {@link Http2PromisedRequestVerifier} used when building the decoder.
+ */
+ Builder requestVerifier(Http2PromisedRequestVerifier requestVerifier);
+
/**
* Creates a new decoder instance.
*/
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2FrameListener.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2FrameListener.java
index a834b866d0c..26072a02eb6 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2FrameListener.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2FrameListener.java
@@ -22,7 +22,6 @@
* An listener of HTTP/2 frames.
*/
public interface Http2FrameListener {
-
/**
* Handles an inbound {@code DATA} frame.
*
@@ -157,11 +156,8 @@ void onPriorityRead(ChannelHandlerContext ctx, int streamId, int streamDependenc
/**
* Handles an inbound PUSH_PROMISE frame. Only called if END_HEADERS encountered.
* <p>
- * Promised requests MUST be cacheable
- * (see <a href="https://tools.ietf.org/html/rfc7231#section-4.2.3">[RFC7231], Section 4.2.3</a>) and
- * MUST be safe (see <a href="https://tools.ietf.org/html/rfc7231#section-4.2.1">[RFC7231], Section 4.2.1</a>).
- * If these conditions do not hold the application MUST throw a {@link Http2Exception.StreamException} with
- * error type {@link Http2Error#PROTOCOL_ERROR}.
+ * Promised requests MUST be authoritative, cacheable, and safe.
+ * See <a href="https://tools.ietf.org/html/draft-ietf-httpbis-http2-17#section-8.2">[RFC http2], Seciton 8.2</a>.
* <p>
* Only one of the following methods will be called for each HEADERS frame sequence.
* One will be called when the END_HEADERS flag has been received.
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2PromisedRequestVerifier.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2PromisedRequestVerifier.java
new file mode 100644
index 00000000000..927781f910a
--- /dev/null
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2PromisedRequestVerifier.java
@@ -0,0 +1,72 @@
+/*
+ * Copyright 2015 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License, version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package io.netty.handler.codec.http2;
+
+import io.netty.channel.ChannelHandlerContext;
+
+/**
+ * Provides an extensibility point for users to define the validity of push requests.
+ * @see <a href="https://tools.ietf.org/html/draft-ietf-httpbis-http2-17#section-8.2">[RFC http2], Section 8.2</a>.
+ */
+public interface Http2PromisedRequestVerifier {
+ /**
+ * Determine if a {@link Http2Headers} are authoritative for a particular {@link ChannelHandlerContext}.
+ * @param ctx The context on which the {@code headers} where received on.
+ * @param headers The headers to be verified.
+ * @return {@code true} if the {@code ctx} is authoritative for the {@code headers}, {@code false} otherwise.
+ * @see
+ * <a href="https://tools.ietf.org/html/draft-ietf-httpbis-http2-17#section-10.1">[RFC http2], Section 10.1</a>.
+ */
+ boolean isAuthoritative(ChannelHandlerContext ctx, Http2Headers headers);
+
+ /**
+ * Determine if a request is cacheable.
+ * @param headers The headers for a push request.
+ * @return {@code true} if the request associated with {@code headers} is known to be cacheable,
+ * {@code false} otherwise.
+ * @see <a href="https://tools.ietf.org/html/rfc7231#section-4.2.3">[RFC 7231], Section 4.2.3</a>.
+ */
+ boolean isCacheable(Http2Headers headers);
+
+ /**
+ * Determine if a request is safe.
+ * @param headers The headers for a push request.
+ * @return {@code true} if the request associated with {@code headers} is known to be safe,
+ * {@code false} otherwise.
+ * @see <a href="https://tools.ietf.org/html/rfc7231#section-4.2.1">[RFC 7231], Section 4.2.1</a>.
+ */
+ boolean isSafe(Http2Headers headers);
+
+ /**
+ * A default implementation of {@link Http2PromisedRequestVerifier} which always returns positive responses for
+ * all verification challenges.
+ */
+ Http2PromisedRequestVerifier ALWAYS_VERIFY = new Http2PromisedRequestVerifier() {
+ @Override
+ public boolean isAuthoritative(ChannelHandlerContext ctx, Http2Headers headers) {
+ return true;
+ }
+
+ @Override
+ public boolean isCacheable(Http2Headers headers) {
+ return true;
+ }
+
+ @Override
+ public boolean isSafe(Http2Headers headers) {
+ return true;
+ }
+ };
+}
| null | train | train | 2015-03-21T16:10:24 | 2015-02-11T16:57:59Z | Scottmitch | val |
netty/netty/3518_3543 | netty/netty | netty/netty/3518 | netty/netty/3543 | [
"timestamp(timedelta=45.0, similarity=0.9056461210732123)"
] | 9737cc6cc9436fcc032daef53e194da46b039ba5 | 59bb5c18842ef99e9a95f80b9aadf6fd4edd54f1 | [
"@nmittler - FYI.\n",
"We associate priority with the flow control window. Therefore we rely on both peers sharing a common view of the priority tree. However there seems to be some confusion (on my part or in the spec) with how to provide feedback when we don't accept a priority frame. We can't rely upon send... | [
"missing `.` on EOL\n",
"missing `.` on EOL\n",
"Done.\n",
"Done.\n",
"Boy, \"writable\" sure would be a good name .... (Nate giggles while sipping his tea) :)\n",
"maybe call this `isViableBranch()` which first checks `isViable()`?\n",
"Do we need 2 methods? Could `isViable` be recursive?\n",
"Hmm ..... | 2015-03-27T18:41:15Z | [
"defect"
] | HTTP/2 Priority Tree Management | We are currently removing streams from the priority tree as they are closed (maybe subject to a stream removal policy which is being removed as part of https://github.com/netty/netty/issues/3448). However we should not remove streams from the priority tree at least until they have no dependencies which are in states eligible for data transfer (i.e. OPEN, HALF_CLOSED, ...), or may be eligible for data transfer in the future (i.e. IDLE).
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Stream.java"
] | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Stream.java"
] | [
"codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionTest.java",
"codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowControllerTest.java"
] | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
index cd507c953da..a10bc3f0cd1 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
@@ -81,7 +81,6 @@ public DefaultHttp2Connection(boolean server) {
* the policy to be used for removal of closed stream.
*/
public DefaultHttp2Connection(boolean server, Http2StreamRemovalPolicy removalPolicy) {
-
this.removalPolicy = checkNotNull(removalPolicy, "removalPolicy");
localEndpoint = new DefaultEndpoint<Http2LocalFlowController>(server);
remoteEndpoint = new DefaultEndpoint<Http2RemoteFlowController>(!server);
@@ -184,15 +183,25 @@ public void goAwaySent(int lastKnownStream, long errorCode, ByteBuf debugData) {
}
}
- private void removeStream(DefaultStream stream) {
- // Notify the listeners of the event first.
- for (Listener listener : listeners) {
- listener.onStreamRemoved(stream);
- }
+ /**
+ * Closed streams may stay in the priority tree if they have dependents that are in prioritizable states.
+ * When a stream is requested to be removed we can only actually remove that stream when there are no more
+ * prioritizable children.
+ * (see [1] {@link Http2Stream#prioritizableForTree()} and [2] {@link DefaultStream#removeChild(DefaultStream)}).
+ * When a priority tree edge changes we also have to re-evaluate viable nodes
+ * (see [3] {@link DefaultStream#takeChild(DefaultStream, boolean, List)}).
+ * @param stream The stream to remove.
+ */
+ void removeStream(DefaultStream stream) {
+ // [1] Check if this stream can be removed because it has no prioritizable descendants.
+ if (stream.parent().removeChild(stream)) {
+ // Remove it from the map and priority tree.
+ streamMap.remove(stream.id());
- // Remove it from the map and priority tree.
- streamMap.remove(stream.id());
- stream.parent().removeChild(stream);
+ for (Listener listener : listeners) {
+ listener.onStreamRemoved(stream);
+ }
+ }
}
/**
@@ -205,6 +214,7 @@ private class DefaultStream implements Http2Stream {
private DefaultStream parent;
private IntObjectMap<DefaultStream> children = newChildMap();
private int totalChildWeights;
+ private int prioritizableForTree = 1;
private boolean resetSent;
private PropertyMap data;
@@ -235,17 +245,17 @@ public Http2Stream resetSent() {
}
@Override
- public Object setProperty(Object key, Object value) {
+ public final Object setProperty(Object key, Object value) {
return data.put(key, value);
}
@Override
- public <V> V getProperty(Object key) {
+ public final <V> V getProperty(Object key) {
return data.get(key);
}
@Override
- public <V> V removeProperty(Object key) {
+ public final <V> V removeProperty(Object key) {
return data.remove(key);
}
@@ -269,6 +279,11 @@ public final DefaultStream parent() {
return parent;
}
+ @Override
+ public final int prioritizableForTree() {
+ return prioritizableForTree;
+ }
+
@Override
public final boolean isDescendantOf(Http2Stream stream) {
Http2Stream next = parent();
@@ -325,10 +340,10 @@ public Http2Stream setPriority(int parentStreamId, short weight, boolean exclusi
// Already have a priority. Re-prioritize the stream.
weight(weight);
- if (newParent != parent() || exclusive) {
- List<ParentChangedEvent> events;
+ if (newParent != parent() || (exclusive && newParent.numChildren() != 1)) {
+ final List<ParentChangedEvent> events;
if (newParent.isDescendantOf(this)) {
- events = new ArrayList<ParentChangedEvent>(2 + (exclusive ? newParent.numChildren(): 0));
+ events = new ArrayList<ParentChangedEvent>(2 + (exclusive ? newParent.numChildren() : 0));
parent.takeChild(newParent, false, events);
} else {
events = new ArrayList<ParentChangedEvent>(1 + (exclusive ? newParent.numChildren() : 0));
@@ -375,6 +390,7 @@ public Http2Stream close() {
}
state = CLOSED;
+ decrementPrioritizableForTree(1);
if (activeStreams.remove(this)) {
try {
// Update the number of active streams initiated by the endpoint.
@@ -424,6 +440,59 @@ public Http2Stream closeRemoteSide() {
return this;
}
+ /**
+ * Recursively increment the {@link #prioritizableForTree} for this object up the parent links until
+ * either we go past the root or {@code oldParent} is encountered.
+ * @param amt The amount to increment by. This must be positive.
+ * @param oldParent The previous parent for this stream.
+ */
+ private void incrementPrioritizableForTree(int amt, Http2Stream oldParent) {
+ if (amt != 0) {
+ incrementPrioritizableForTree0(amt, oldParent);
+ }
+ }
+
+ /**
+ * Direct calls to this method are discouraged.
+ * Instead use {@link #incrementPrioritizableForTree(int, Http2Stream)}.
+ */
+ private void incrementPrioritizableForTree0(int amt, Http2Stream oldParent) {
+ assert amt > 0;
+ prioritizableForTree += amt;
+ if (parent != null && parent != oldParent) {
+ parent.incrementPrioritizableForTree(amt, oldParent);
+ }
+ }
+
+ /**
+ * Recursively decrement the {@link #prioritizableForTree} for this object up the parent links until
+ * we go past the root.
+ * @param amt The amount to decrement by. This must be positive.
+ */
+ private void decrementPrioritizableForTree(int amt) {
+ if (amt != 0) {
+ decrementPrioritizableForTree0(amt);
+ }
+ }
+
+ /**
+ * Direct calls to this method are discouraged. Instead use {@link #decrementPrioritizableForTree(int)}.
+ */
+ private void decrementPrioritizableForTree0(int amt) {
+ assert amt > 0;
+ prioritizableForTree -= amt;
+ if (parent != null) {
+ parent.decrementPrioritizableForTree(amt);
+ }
+ }
+
+ /**
+ * Determine if this node by itself is considered to be valid in the priority tree.
+ */
+ private boolean isPrioritizable() {
+ return state != CLOSED;
+ }
+
private void notifyHalfClosed(Http2Stream stream) {
for (Listener listener : listeners) {
listener.onStreamHalfClosed(stream);
@@ -464,6 +533,7 @@ final void weight(short weight) {
final IntObjectMap<DefaultStream> removeAllChildren() {
totalChildWeights = 0;
+ prioritizableForTree = isPrioritizable() ? 1 : 0;
IntObjectMap<DefaultStream> prevChildren = children;
children = newChildMap();
return prevChildren;
@@ -481,8 +551,7 @@ final void takeChild(DefaultStream child, boolean exclusive, List<ParentChangedE
if (exclusive && !children.isEmpty()) {
// If it was requested that this child be the exclusive dependency of this node,
- // move any previous children to the child node, becoming grand children
- // of this node.
+ // move any previous children to the child node, becoming grand children of this node.
for (DefaultStream grandchild : removeAllChildren().values()) {
child.takeChild(grandchild, false, events);
}
@@ -490,31 +559,44 @@ final void takeChild(DefaultStream child, boolean exclusive, List<ParentChangedE
if (children.put(child.id(), child) == null) {
totalChildWeights += child.weight();
+ incrementPrioritizableForTree(child.prioritizableForTree(), oldParent);
}
if (oldParent != null && oldParent.children.remove(child.id()) != null) {
oldParent.totalChildWeights -= child.weight();
+ if (!child.isDescendantOf(oldParent)) {
+ oldParent.decrementPrioritizableForTree(child.prioritizableForTree());
+ if (oldParent.prioritizableForTree() == 0) {
+ removeStream(oldParent);
+ }
+ }
}
}
/**
* Removes the child priority and moves any of its dependencies to being direct dependencies on this node.
*/
- final void removeChild(DefaultStream child) {
- if (children.remove(child.id()) != null) {
- List<ParentChangedEvent> events = new ArrayList<ParentChangedEvent>(1 + child.children.size());
+ final boolean removeChild(DefaultStream child) {
+ if (child.prioritizableForTree() == 0 && children.remove(child.id()) != null) {
+ List<ParentChangedEvent> events = new ArrayList<ParentChangedEvent>(1 + child.numChildren());
events.add(new ParentChangedEvent(child, child.parent()));
notifyParentChanging(child, null);
child.parent = null;
totalChildWeights -= child.weight();
+ decrementPrioritizableForTree(child.prioritizableForTree());
// Move up any grand children to be directly dependent on this node.
for (DefaultStream grandchild : child.children.values()) {
takeChild(grandchild, false, events);
}
+ if (prioritizableForTree() == 0) {
+ removeStream(this);
+ }
notifyParentChanged(events);
+ return true;
}
+ return false;
}
}
@@ -644,6 +726,16 @@ private final class ConnectionStream extends DefaultStream {
super(CONNECTION_STREAM_ID);
}
+ @Override
+ public boolean isResetSent() {
+ return false;
+ }
+
+ @Override
+ public Http2Stream resetSent() {
+ throw new UnsupportedOperationException();
+ }
+
@Override
public Http2Stream setPriority(int parentStreamId, short weight, boolean exclusive) {
throw new UnsupportedOperationException();
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Stream.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Stream.java
index 676213d341c..0e741a5aaae 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Stream.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Stream.java
@@ -157,6 +157,13 @@ enum State {
*/
Http2Stream parent();
+ /**
+ * Get the number of streams in the priority tree rooted at this node that are OK to exist in the priority
+ * tree in their own right. Some streams may be in the priority tree because their dependents require them to
+ * remain.
+ */
+ int prioritizableForTree();
+
/**
* Indicates whether or not this stream is a descendant in the priority tree from the given stream.
*/
| diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionTest.java
index 6fe7c5a28f9..197be34baef 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionTest.java
@@ -15,13 +15,13 @@
package io.netty.handler.codec.http2;
-import static io.netty.handler.codec.http2.Http2CodecUtil.MIN_WEIGHT;
import static io.netty.handler.codec.http2.Http2CodecUtil.DEFAULT_PRIORITY_WEIGHT;
+import static io.netty.handler.codec.http2.Http2CodecUtil.MIN_WEIGHT;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertNull;
-import static org.junit.Assert.assertTrue;
import static org.junit.Assert.assertSame;
+import static org.junit.Assert.assertTrue;
import static org.mockito.Matchers.any;
import static org.mockito.Matchers.anyShort;
import static org.mockito.Matchers.eq;
@@ -242,6 +242,7 @@ public void remoteStreamCanDependUponIdleStream() throws Http2Exception {
public void prioritizeShouldUseDefaults() throws Exception {
Http2Stream stream = client.local().createStream(1).open(false);
assertEquals(1, client.connectionStream().numChildren());
+ assertEquals(2, client.connectionStream().prioritizableForTree());
assertEquals(stream, client.connectionStream().child(1));
assertEquals(DEFAULT_PRIORITY_WEIGHT, stream.weight());
assertEquals(0, stream.parent().id());
@@ -253,6 +254,7 @@ public void reprioritizeWithNoChangeShouldDoNothing() throws Exception {
Http2Stream stream = client.local().createStream(1).open(false);
stream.setPriority(0, DEFAULT_PRIORITY_WEIGHT, false);
assertEquals(1, client.connectionStream().numChildren());
+ assertEquals(2, client.connectionStream().prioritizableForTree());
assertEquals(stream, client.connectionStream().child(1));
assertEquals(DEFAULT_PRIORITY_WEIGHT, stream.weight());
assertEquals(0, stream.parent().id());
@@ -275,6 +277,7 @@ public void insertExclusiveShouldAddNewLevel() throws Exception {
// Level 0
Http2Stream p = client.connectionStream();
assertEquals(1, p.numChildren());
+ assertEquals(5, p.prioritizableForTree());
assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
// Level 1
@@ -282,6 +285,7 @@ public void insertExclusiveShouldAddNewLevel() throws Exception {
assertNotNull(p);
assertEquals(0, p.parent().id());
assertEquals(1, p.numChildren());
+ assertEquals(4, p.prioritizableForTree());
assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
// Level 2
@@ -289,6 +293,7 @@ public void insertExclusiveShouldAddNewLevel() throws Exception {
assertNotNull(p);
assertEquals(streamA.id(), p.parent().id());
assertEquals(2, p.numChildren());
+ assertEquals(3, p.prioritizableForTree());
assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
// Level 3
@@ -296,11 +301,13 @@ public void insertExclusiveShouldAddNewLevel() throws Exception {
assertNotNull(p);
assertEquals(streamD.id(), p.parent().id());
assertEquals(0, p.numChildren());
+ assertEquals(1, p.prioritizableForTree());
assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
p = p.parent().child(streamC.id());
assertNotNull(p);
assertEquals(streamD.id(), p.parent().id());
assertEquals(0, p.numChildren());
+ assertEquals(1, p.prioritizableForTree());
assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
}
@@ -327,10 +334,108 @@ public void weightChangeWithNoTreeChangeShouldNotifyListeners() throws Http2Exce
any(Http2Stream.class));
verify(clientListener, never()).onPriorityTreeParentChanged(any(Http2Stream.class),
any(Http2Stream.class));
+ assertEquals(5, client.connectionStream().prioritizableForTree());
+ assertEquals(4, streamA.prioritizableForTree());
+ assertEquals(1, streamB.prioritizableForTree());
+ assertEquals(1, streamC.prioritizableForTree());
+ assertEquals(3, streamD.prioritizableForTree());
+ }
+
+ @Test
+ public void sameNodeDependentShouldNotStackOverflowNorChangePrioritizableForTree() throws Http2Exception {
+ Http2Stream streamA = client.local().createStream(1).open(false);
+ Http2Stream streamB = client.local().createStream(3).open(false);
+ Http2Stream streamC = client.local().createStream(5).open(false);
+ Http2Stream streamD = client.local().createStream(7).open(false);
+
+ streamB.setPriority(streamA.id(), DEFAULT_PRIORITY_WEIGHT, false);
+ streamC.setPriority(streamA.id(), DEFAULT_PRIORITY_WEIGHT, false);
+ streamD.setPriority(streamA.id(), DEFAULT_PRIORITY_WEIGHT, true);
+
+ boolean[] exclusive = new boolean[] {true, false};
+ short[] weights = new short[] { DEFAULT_PRIORITY_WEIGHT, 100, 200, streamD.weight() };
+
+ assertEquals(4, client.numActiveStreams());
+
+ Http2Stream connectionStream = client.connectionStream();
+ assertEquals(5, connectionStream.prioritizableForTree());
+ assertEquals(4, streamA.prioritizableForTree());
+ assertEquals(1, streamB.prioritizableForTree());
+ assertEquals(1, streamC.prioritizableForTree());
+ assertEquals(3, streamD.prioritizableForTree());
+
+ // The goal is to call setPriority with the same parent and vary the parameters
+ // we were at one point adding a circular depends to the tree and then throwing
+ // a StackOverflow due to infinite recursive operation.
+ for (int j = 0; j < weights.length; ++j) {
+ for (int i = 0; i < exclusive.length; ++i) {
+ streamD.setPriority(streamA.id(), weights[j], exclusive[i]);
+ assertEquals(5, connectionStream.prioritizableForTree());
+ assertEquals(4, streamA.prioritizableForTree());
+ assertEquals(1, streamB.prioritizableForTree());
+ assertEquals(1, streamC.prioritizableForTree());
+ assertEquals(3, streamD.prioritizableForTree());
+ }
+ }
}
@Test
- public void removeShouldRestructureTree() throws Exception {
+ public void multipleCircularDependencyShouldUpdatePrioritizable() throws Http2Exception {
+ Http2Stream streamA = client.local().createStream(1).open(false);
+ Http2Stream streamB = client.local().createStream(3).open(false);
+ Http2Stream streamC = client.local().createStream(5).open(false);
+ Http2Stream streamD = client.local().createStream(7).open(false);
+
+ streamB.setPriority(streamA.id(), DEFAULT_PRIORITY_WEIGHT, false);
+ streamC.setPriority(streamA.id(), DEFAULT_PRIORITY_WEIGHT, false);
+ streamD.setPriority(streamA.id(), DEFAULT_PRIORITY_WEIGHT, true);
+
+ assertEquals(4, client.numActiveStreams());
+
+ Http2Stream connectionStream = client.connectionStream();
+ assertEquals(5, connectionStream.prioritizableForTree());
+ assertEquals(4, streamA.prioritizableForTree());
+ assertEquals(1, streamB.prioritizableForTree());
+ assertEquals(1, streamC.prioritizableForTree());
+ assertEquals(3, streamD.prioritizableForTree());
+
+ // Bring B to the root
+ streamA.setPriority(streamB.id(), DEFAULT_PRIORITY_WEIGHT, true);
+ assertEquals(5, connectionStream.prioritizableForTree());
+ assertEquals(3, streamA.prioritizableForTree());
+ assertEquals(4, streamB.prioritizableForTree());
+ assertEquals(1, streamC.prioritizableForTree());
+ assertEquals(2, streamD.prioritizableForTree());
+
+ // Move all streams to be children of B
+ streamC.setPriority(streamB.id(), DEFAULT_PRIORITY_WEIGHT, false);
+ streamD.setPriority(streamB.id(), DEFAULT_PRIORITY_WEIGHT, false);
+ assertEquals(5, connectionStream.prioritizableForTree());
+ assertEquals(1, streamA.prioritizableForTree());
+ assertEquals(4, streamB.prioritizableForTree());
+ assertEquals(1, streamC.prioritizableForTree());
+ assertEquals(1, streamD.prioritizableForTree());
+
+ // Move A back to the root
+ streamB.setPriority(streamA.id(), DEFAULT_PRIORITY_WEIGHT, true);
+ assertEquals(5, connectionStream.prioritizableForTree());
+ assertEquals(4, streamA.prioritizableForTree());
+ assertEquals(3, streamB.prioritizableForTree());
+ assertEquals(1, streamC.prioritizableForTree());
+ assertEquals(1, streamD.prioritizableForTree());
+
+ // Move all streams to be children of A
+ streamC.setPriority(streamA.id(), DEFAULT_PRIORITY_WEIGHT, false);
+ streamD.setPriority(streamA.id(), DEFAULT_PRIORITY_WEIGHT, false);
+ assertEquals(5, connectionStream.prioritizableForTree());
+ assertEquals(4, streamA.prioritizableForTree());
+ assertEquals(1, streamB.prioritizableForTree());
+ assertEquals(1, streamC.prioritizableForTree());
+ assertEquals(1, streamD.prioritizableForTree());
+ }
+
+ @Test
+ public void removeWithPrioritizableDependentsShouldNotRestructureTree() throws Exception {
Http2Stream streamA = client.local().createStream(1).open(false);
Http2Stream streamB = client.local().createStream(3).open(false);
Http2Stream streamC = client.local().createStream(5).open(false);
@@ -345,27 +450,166 @@ public void removeShouldRestructureTree() throws Exception {
// Level 0
Http2Stream p = client.connectionStream();
+ assertEquals(4, p.prioritizableForTree());
assertEquals(1, p.numChildren());
assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
// Level 1
p = p.child(streamA.id());
assertNotNull(p);
+ assertEquals(3, p.prioritizableForTree());
assertEquals(0, p.parent().id());
- assertEquals(2, p.numChildren());
+ assertEquals(1, p.numChildren());
assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
// Level 2
- p = p.child(streamC.id());
+ p = p.child(streamB.id());
assertNotNull(p);
+ assertEquals(2, p.prioritizableForTree());
assertEquals(streamA.id(), p.parent().id());
+ assertEquals(2, p.numChildren());
+ assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
+
+ // Level 3
+ p = p.child(streamC.id());
+ assertNotNull(p);
+ assertEquals(1, p.prioritizableForTree());
+ assertEquals(streamB.id(), p.parent().id());
assertEquals(0, p.numChildren());
assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
p = p.parent().child(streamD.id());
assertNotNull(p);
+ assertEquals(1, p.prioritizableForTree());
+ assertEquals(streamB.id(), p.parent().id());
+ assertEquals(0, p.numChildren());
+ assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
+ }
+
+ @Test
+ public void closeWithNoPrioritizableDependentsShouldRestructureTree() throws Exception {
+ Http2Stream streamA = client.local().createStream(1).open(false);
+ Http2Stream streamB = client.local().createStream(3).open(false);
+ Http2Stream streamC = client.local().createStream(5).open(false);
+ Http2Stream streamD = client.local().createStream(7).open(false);
+ Http2Stream streamE = client.local().createStream(9).open(false);
+ Http2Stream streamF = client.local().createStream(11).open(false);
+
+ streamB.setPriority(streamA.id(), DEFAULT_PRIORITY_WEIGHT, false);
+ streamC.setPriority(streamB.id(), DEFAULT_PRIORITY_WEIGHT, false);
+ streamD.setPriority(streamB.id(), DEFAULT_PRIORITY_WEIGHT, false);
+ streamE.setPriority(streamC.id(), DEFAULT_PRIORITY_WEIGHT, false);
+ streamF.setPriority(streamD.id(), DEFAULT_PRIORITY_WEIGHT, false);
+
+ // Close internal nodes, leave 1 leaf node open, and ensure part of the tree (D & F) is cleaned up
+ streamA.close();
+ streamB.close();
+ streamC.close();
+ streamD.close();
+ streamF.close();
+
+ // Level 0
+ Http2Stream p = client.connectionStream();
+ assertEquals(1, p.numChildren());
+ assertEquals(2, p.prioritizableForTree());
+ assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
+
+ // Level 1
+ p = p.child(streamA.id());
+ assertNotNull(p);
+ assertEquals(0, p.parent().id());
+ assertEquals(1, p.numChildren());
+ assertEquals(1, p.prioritizableForTree());
+ assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
+
+ // Level 2
+ p = p.child(streamB.id());
+ assertNotNull(p);
assertEquals(streamA.id(), p.parent().id());
+ assertEquals(1, p.numChildren());
+ assertEquals(1, p.prioritizableForTree());
+ assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
+
+ // Level 3
+ p = p.child(streamC.id());
+ assertNotNull(p);
+ assertEquals(streamB.id(), p.parent().id());
+ assertEquals(1, p.numChildren());
+ assertEquals(1, p.prioritizableForTree());
+ assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
+
+ // Level 4
+ p = p.child(streamE.id());
+ assertNotNull(p);
+ assertEquals(streamC.id(), p.parent().id());
assertEquals(0, p.numChildren());
+ assertEquals(1, p.prioritizableForTree());
+ }
+
+ @Test
+ public void priorityChangeWithNoPrioritizableDependentsShouldRestructureTree() throws Exception {
+ Http2Stream streamA = client.local().createStream(1).open(false);
+ Http2Stream streamB = client.local().createStream(3).open(false);
+ Http2Stream streamC = client.local().createStream(5).open(false);
+ Http2Stream streamD = client.local().createStream(7).open(false);
+ Http2Stream streamE = client.local().createStream(9).open(false);
+ Http2Stream streamF = client.local().createStream(11).open(false);
+
+ streamB.setPriority(streamA.id(), DEFAULT_PRIORITY_WEIGHT, false);
+ streamC.setPriority(streamB.id(), DEFAULT_PRIORITY_WEIGHT, false);
+ streamD.setPriority(streamB.id(), DEFAULT_PRIORITY_WEIGHT, false);
+ streamE.setPriority(streamC.id(), DEFAULT_PRIORITY_WEIGHT, false);
+ streamF.setPriority(streamD.id(), DEFAULT_PRIORITY_WEIGHT, false);
+
+ // Leave leaf nodes open (E & F)
+ streamA.close();
+ streamB.close();
+ streamC.close();
+ streamD.close();
+
+ // Move F to depend on C, this should close D
+ streamF.setPriority(streamC.id(), DEFAULT_PRIORITY_WEIGHT, false);
+
+ // Level 0
+ Http2Stream p = client.connectionStream();
+ assertEquals(1, p.numChildren());
+ assertEquals(3, p.prioritizableForTree());
+ assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
+
+ // Level 1
+ p = p.child(streamA.id());
+ assertNotNull(p);
+ assertEquals(0, p.parent().id());
+ assertEquals(1, p.numChildren());
+ assertEquals(2, p.prioritizableForTree());
assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
+
+ // Level 2
+ p = p.child(streamB.id());
+ assertNotNull(p);
+ assertEquals(streamA.id(), p.parent().id());
+ assertEquals(1, p.numChildren());
+ assertEquals(2, p.prioritizableForTree());
+ assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
+
+ // Level 3
+ p = p.child(streamC.id());
+ assertNotNull(p);
+ assertEquals(streamB.id(), p.parent().id());
+ assertEquals(2, p.numChildren());
+ assertEquals(2, p.prioritizableForTree());
+ assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
+
+ // Level 4
+ p = p.child(streamE.id());
+ assertNotNull(p);
+ assertEquals(streamC.id(), p.parent().id());
+ assertEquals(0, p.numChildren());
+ assertEquals(1, p.prioritizableForTree());
+ p = p.parent().child(streamF.id());
+ assertNotNull(p);
+ assertEquals(streamC.id(), p.parent().id());
+ assertEquals(0, p.numChildren());
+ assertEquals(1, p.prioritizableForTree());
}
@Test
@@ -423,18 +667,21 @@ public void circularDependencyShouldRestructureTree() throws Exception {
// Level 0
Http2Stream p = client.connectionStream();
assertEquals(1, p.numChildren());
+ assertEquals(7, p.prioritizableForTree());
assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
// Level 1
p = p.child(streamD.id());
assertNotNull(p);
assertEquals(2, p.numChildren());
+ assertEquals(6, p.prioritizableForTree());
assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
// Level 2
p = p.child(streamF.id());
assertNotNull(p);
assertEquals(0, p.numChildren());
+ assertEquals(1, p.prioritizableForTree());
assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
p = p.parent().child(streamA.id());
assertNotNull(p);
@@ -445,16 +692,19 @@ public void circularDependencyShouldRestructureTree() throws Exception {
p = p.child(streamB.id());
assertNotNull(p);
assertEquals(0, p.numChildren());
+ assertEquals(1, p.prioritizableForTree());
assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
p = p.parent().child(streamC.id());
assertNotNull(p);
assertEquals(1, p.numChildren());
+ assertEquals(2, p.prioritizableForTree());
assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
// Level 4;
p = p.child(streamE.id());
assertNotNull(p);
assertEquals(0, p.numChildren());
+ assertEquals(1, p.prioritizableForTree());
assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
}
@@ -515,38 +765,45 @@ public void circularDependencyWithExclusiveShouldRestructureTree() throws Except
// Level 0
Http2Stream p = client.connectionStream();
assertEquals(1, p.numChildren());
+ assertEquals(7, p.prioritizableForTree());
assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
// Level 1
p = p.child(streamD.id());
assertNotNull(p);
assertEquals(1, p.numChildren());
+ assertEquals(6, p.prioritizableForTree());
assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
// Level 2
p = p.child(streamA.id());
assertNotNull(p);
assertEquals(3, p.numChildren());
+ assertEquals(5, p.prioritizableForTree());
assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
// Level 3
p = p.child(streamB.id());
assertNotNull(p);
assertEquals(0, p.numChildren());
+ assertEquals(1, p.prioritizableForTree());
assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
p = p.parent().child(streamF.id());
assertNotNull(p);
assertEquals(0, p.numChildren());
+ assertEquals(1, p.prioritizableForTree());
assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
p = p.parent().child(streamC.id());
assertNotNull(p);
assertEquals(1, p.numChildren());
+ assertEquals(2, p.prioritizableForTree());
assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
// Level 4;
p = p.child(streamE.id());
assertNotNull(p);
assertEquals(0, p.numChildren());
+ assertEquals(1, p.prioritizableForTree());
assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
}
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowControllerTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowControllerTest.java
index eeae7ca716b..064520f97e7 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowControllerTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2RemoteFlowControllerTest.java
@@ -932,7 +932,7 @@ public void subTreeBytesShouldBeCorrectWithAddition() throws Http2Exception {
}
/**
- * In this test, we block all streams and remove a node from the priority tree and verify
+ * In this test, we block all streams and close an internal stream in the priority tree; the tree should not change.
*
* <pre>
* [0]
@@ -941,17 +941,76 @@ public void subTreeBytesShouldBeCorrectWithAddition() throws Http2Exception {
* / \
* C D
* </pre>
+ */
+ @Test
+ public void subTreeBytesShouldBeCorrectWithInternalStreamClose() throws Http2Exception {
+ // Block the connection
+ exhaustStreamWindow(CONNECTION_STREAM_ID);
+
+ Http2Stream stream0 = connection.connectionStream();
+ Http2Stream streamA = connection.stream(STREAM_A);
+ Http2Stream streamB = connection.stream(STREAM_B);
+ Http2Stream streamC = connection.stream(STREAM_C);
+ Http2Stream streamD = connection.stream(STREAM_D);
+
+ // Send a bunch of data on each stream.
+ final IntObjectMap<Integer> streamSizes = new IntObjectHashMap<Integer>(4);
+ streamSizes.put(STREAM_A, 400);
+ streamSizes.put(STREAM_B, 500);
+ streamSizes.put(STREAM_C, 600);
+ streamSizes.put(STREAM_D, 700);
+
+ FakeFlowControlled dataA = new FakeFlowControlled(streamSizes.get(STREAM_A));
+ FakeFlowControlled dataB = new FakeFlowControlled(streamSizes.get(STREAM_B));
+ FakeFlowControlled dataC = new FakeFlowControlled(streamSizes.get(STREAM_C));
+ FakeFlowControlled dataD = new FakeFlowControlled(streamSizes.get(STREAM_D));
+
+ sendData(STREAM_A, dataA);
+ sendData(STREAM_B, dataB);
+ sendData(STREAM_C, dataC);
+ sendData(STREAM_D, dataD);
+
+ dataA.assertNotWritten();
+ dataB.assertNotWritten();
+ dataC.assertNotWritten();
+ dataD.assertNotWritten();
+
+ streamA.close();
+
+ assertEquals(calculateStreamSizeSum(streamSizes, Arrays.asList(STREAM_B, STREAM_C, STREAM_D)),
+ streamableBytesForTree(stream0));
+ assertEquals(calculateStreamSizeSum(streamSizes, Arrays.asList(STREAM_C, STREAM_D)),
+ streamableBytesForTree(streamA));
+ assertEquals(calculateStreamSizeSum(streamSizes, Arrays.asList(STREAM_B)),
+ streamableBytesForTree(streamB));
+ assertEquals(calculateStreamSizeSum(streamSizes, Arrays.asList(STREAM_C)),
+ streamableBytesForTree(streamC));
+ assertEquals(calculateStreamSizeSum(streamSizes, Arrays.asList(STREAM_D)),
+ streamableBytesForTree(streamD));
+ }
+
+ /**
+ * In this test, we block all streams and close a leaf stream in the priority tree and verify
*
- * After the tree shift:
+ * <pre>
+ * [0]
+ * / \
+ * A B
+ * / \
+ * C D
+ * </pre>
*
+ * After the close:
* <pre>
* [0]
- * / | \
- * C D B
+ * / \
+ * A B
+ * |
+ * D
* </pre>
*/
@Test
- public void subTreeBytesShouldBeCorrectWithRemoval() throws Http2Exception {
+ public void subTreeBytesShouldBeCorrectWithLeafStreamClose() throws Http2Exception {
// Block the connection
exhaustStreamWindow(CONNECTION_STREAM_ID);
@@ -983,15 +1042,15 @@ public void subTreeBytesShouldBeCorrectWithRemoval() throws Http2Exception {
dataC.assertNotWritten();
dataD.assertNotWritten();
- streamA.close();
+ streamC.close();
- assertEquals(calculateStreamSizeSum(streamSizes, Arrays.asList(STREAM_B, STREAM_C, STREAM_D)),
+ assertEquals(calculateStreamSizeSum(streamSizes, Arrays.asList(STREAM_A, STREAM_B, STREAM_D)),
streamableBytesForTree(stream0));
- assertEquals(0, streamableBytesForTree(streamA));
+ assertEquals(calculateStreamSizeSum(streamSizes, Arrays.asList(STREAM_A, STREAM_D)),
+ streamableBytesForTree(streamA));
assertEquals(calculateStreamSizeSum(streamSizes, Arrays.asList(STREAM_B)),
streamableBytesForTree(streamB));
- assertEquals(calculateStreamSizeSum(streamSizes, Arrays.asList(STREAM_C)),
- streamableBytesForTree(streamC));
+ assertEquals(0, streamableBytesForTree(streamC));
assertEquals(calculateStreamSizeSum(streamSizes, Arrays.asList(STREAM_D)),
streamableBytesForTree(streamD));
}
| train | train | 2015-03-31T18:18:26 | 2015-03-20T06:34:00Z | Scottmitch | val |
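The gold patch above maintains a per-node `prioritizableForTree` counter: each open node counts itself plus every prioritizable node in its subtree, and the counter is propagated recursively up the parent links when a child is taken or a stream closes. The following is a minimal standalone sketch of that bookkeeping — `PriorityNode` is a hypothetical class, not Netty's actual `DefaultStream`, and it is simplified to non-circular reparenting (the real patch additionally stops the increment walk at the old parent and guards the decrement with a descendant check):

```java
import java.util.ArrayList;
import java.util.List;

public class PriorityNode {
    final int id;
    PriorityNode parent;
    final List<PriorityNode> children = new ArrayList<>();
    boolean closed;
    int prioritizableForTree = 1; // an open node counts itself

    PriorityNode(int id) {
        this.id = id;
    }

    // Reparent child under this node and adjust counters on both chains.
    void takeChild(PriorityNode child) {
        PriorityNode oldParent = child.parent;
        child.parent = this;
        children.add(child);
        incrementPrioritizableForTree(child.prioritizableForTree, oldParent);
        if (oldParent != null) {
            oldParent.children.remove(child);
            oldParent.decrementPrioritizableForTree(child.prioritizableForTree);
        }
    }

    // Closing a node removes only its own contribution from every ancestor.
    void close() {
        closed = true;
        decrementPrioritizableForTree(1);
    }

    // Walk up the parent links, stopping if we reach the old parent,
    // whose chain is adjusted separately (mirrors the patch's guard).
    private void incrementPrioritizableForTree(int amt, PriorityNode oldParent) {
        if (amt == 0) {
            return;
        }
        prioritizableForTree += amt;
        if (parent != null && parent != oldParent) {
            parent.incrementPrioritizableForTree(amt, oldParent);
        }
    }

    private void decrementPrioritizableForTree(int amt) {
        if (amt == 0) {
            return;
        }
        prioritizableForTree -= amt;
        if (parent != null) {
            parent.decrementPrioritizableForTree(amt);
        }
    }

    public static void main(String[] args) {
        PriorityNode root = new PriorityNode(0);
        PriorityNode a = new PriorityNode(1);
        PriorityNode b = new PriorityNode(3);
        root.takeChild(a);
        a.takeChild(b);
        System.out.println(root.prioritizableForTree); // root + A + B = 3
        a.close();
        System.out.println(root.prioritizableForTree); // root + B = 2
        System.out.println(a.prioritizableForTree);    // closed A still carries B's count = 1
    }
}
```

This matches the invariant the new tests assert: a closed internal node stays in the tree while `prioritizableForTree > 0`, because open descendants still require it.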
netty/netty/3530_3545 | netty/netty | netty/netty/3530 | netty/netty/3545 | [
"timestamp(timedelta=73.0, similarity=0.9107620806503808)"
] | 1e7eabc58c78217708cbb20e62095cf3c7750ac8 | f81a87a2005575552e8c9d24c15573c2c4023fae | [] | [
"This is the main ugly bit. A chicken and the egg problem ... handler needs the encoder/decoder and they need the handler. If you have a better solution, I'm all ears.\n",
"@nmittler could we just get rid of the builder altogether? It's only 5 arguments and we are not doing anything special in the builder. Why ... | 2015-03-27T22:38:45Z | [
"improvement"
] | Clean up use of builders in Http2ConnectionHandler | Currently Http2ConnectionHandler takes builders for the encoder and decoder, which makes it difficult to decorate them. We should look at getting rid of this.
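The linked PR resolves this by replacing the builder hierarchy with decorators (the gold patch adds `DecoratingHttp2ConnectionEncoder`/`DecoratingHttp2ConnectionDecoder`, which `CompressorHttp2ConnectionEncoder` now extends). A minimal sketch of the pattern, using hypothetical, heavily simplified interfaces rather than Netty's real API:

```java
public class DecoratorSketch {
    // Stand-in for Http2ConnectionEncoder, reduced to one method.
    interface Encoder {
        String writeData(String data);
    }

    static class DefaultEncoder implements Encoder {
        @Override
        public String writeData(String data) {
            return data;
        }
    }

    // Forwards every call to the wrapped delegate; subclasses override
    // only what they need, instead of subclassing a builder.
    static class DecoratingEncoder implements Encoder {
        private final Encoder delegate;

        DecoratingEncoder(Encoder delegate) {
            this.delegate = delegate;
        }

        @Override
        public String writeData(String data) {
            return delegate.writeData(data);
        }
    }

    // Layers extra behavior (here, fake "compression") over any encoder.
    static class CompressingEncoder extends DecoratingEncoder {
        CompressingEncoder(Encoder delegate) {
            super(delegate);
        }

        @Override
        public String writeData(String data) {
            return super.writeData("compressed(" + data + ")");
        }
    }

    public static void main(String[] args) {
        Encoder encoder = new CompressingEncoder(new DefaultEncoder());
        System.out.println(encoder.writeData("payload")); // compressed(payload)
    }
}
```

The benefit over builders is composition at construction time: callers can stack decorators in any order around a base encoder, which is exactly what made the builder-based `CompressorHttp2ConnectionEncoder` awkward to extend.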
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/CompressorHttp2ConnectionEncoder.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java",
"codec-http2/src/main/java/i... | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/CompressorHttp2ConnectionEncoder.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DecoratingHttp2ConnectionDecoder.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DecoratingHttp2ConnectionEncoder.java",
"codec-http2/src/main/... | [
"codec-http2/src/test/java/io/netty/handler/codec/http2/DataCompressionHttp2Test.java",
"codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoderTest.java",
"codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoderTest.java",
"codec-http2/src/test/java/i... | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/CompressorHttp2ConnectionEncoder.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/CompressorHttp2ConnectionEncoder.java
index 5adc5e0af51..04f01b93415 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/CompressorHttp2ConnectionEncoder.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/CompressorHttp2ConnectionEncoder.java
@@ -34,10 +34,10 @@
import io.netty.handler.codec.compression.ZlibWrapper;
/**
- * A HTTP2 encoder that will compress data frames according to the {@code content-encoding} header for each stream.
- * The compression provided by this class will be applied to the data for the entire stream.
+ * A decorating HTTP2 encoder that will compress data frames according to the {@code content-encoding} header for each
+ * stream. The compression provided by this class will be applied to the data for the entire stream.
*/
-public class CompressorHttp2ConnectionEncoder extends DefaultHttp2ConnectionEncoder {
+public class CompressorHttp2ConnectionEncoder extends DecoratingHttp2ConnectionEncoder {
private static final Http2ConnectionAdapter CLEAN_UP_LISTENER = new Http2ConnectionAdapter() {
@Override
public void streamRemoved(Http2Stream stream) {
@@ -48,53 +48,33 @@ public void streamRemoved(Http2Stream stream) {
}
};
+ public static final int DEFAULT_COMPRESSION_LEVEL = 6;
+ public static final int DEFAULT_WINDOW_BITS = 15;
+ public static final int DEFAULT_MEM_LEVEL = 8;
+
private final int compressionLevel;
private final int windowBits;
private final int memLevel;
- /**
- * Builder for new instances of {@link CompressorHttp2ConnectionEncoder}
- */
- public static class Builder extends DefaultHttp2ConnectionEncoder.Builder {
- protected int compressionLevel = 6;
- protected int windowBits = 15;
- protected int memLevel = 8;
-
- public Builder compressionLevel(int compressionLevel) {
- this.compressionLevel = compressionLevel;
- return this;
- }
-
- public Builder windowBits(int windowBits) {
- this.windowBits = windowBits;
- return this;
- }
-
- public Builder memLevel(int memLevel) {
- this.memLevel = memLevel;
- return this;
- }
-
- @Override
- public CompressorHttp2ConnectionEncoder build() {
- return new CompressorHttp2ConnectionEncoder(this);
- }
+ public CompressorHttp2ConnectionEncoder(Http2ConnectionEncoder delegate) {
+ this(delegate, DEFAULT_COMPRESSION_LEVEL, DEFAULT_WINDOW_BITS, DEFAULT_MEM_LEVEL);
}
- protected CompressorHttp2ConnectionEncoder(Builder builder) {
- super(builder);
- if (builder.compressionLevel < 0 || builder.compressionLevel > 9) {
- throw new IllegalArgumentException("compressionLevel: " + builder.compressionLevel + " (expected: 0-9)");
+ public CompressorHttp2ConnectionEncoder(Http2ConnectionEncoder delegate, int compressionLevel, int windowBits,
+ int memLevel) {
+ super(delegate);
+ if (compressionLevel < 0 || compressionLevel > 9) {
+ throw new IllegalArgumentException("compressionLevel: " + compressionLevel + " (expected: 0-9)");
}
- if (builder.windowBits < 9 || builder.windowBits > 15) {
- throw new IllegalArgumentException("windowBits: " + builder.windowBits + " (expected: 9-15)");
+ if (windowBits < 9 || windowBits > 15) {
+ throw new IllegalArgumentException("windowBits: " + windowBits + " (expected: 9-15)");
}
- if (builder.memLevel < 1 || builder.memLevel > 9) {
- throw new IllegalArgumentException("memLevel: " + builder.memLevel + " (expected: 1-9)");
+ if (memLevel < 1 || memLevel > 9) {
+ throw new IllegalArgumentException("memLevel: " + memLevel + " (expected: 1-9)");
}
- compressionLevel = builder.compressionLevel;
- windowBits = builder.windowBits;
- memLevel = builder.memLevel;
+ this.compressionLevel = compressionLevel;
+ this.windowBits = windowBits;
+ this.memLevel = memLevel;
connection().addListener(CLEAN_UP_LISTENER);
}
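For readers migrating off the removed `Builder`, note that the same range checks now live in the constructor itself. A minimal standalone sketch of that validation follows; the `CompressionSettings` name is hypothetical and only mirrors the 0-9 / 9-15 / 1-9 ranges above, it is not the Netty class:

```java
// Hypothetical miniature of the constructor-side validation in
// CompressorHttp2ConnectionEncoder; not part of the Netty API.
final class CompressionSettings {
    final int compressionLevel;
    final int windowBits;
    final int memLevel;

    CompressionSettings(int compressionLevel, int windowBits, int memLevel) {
        // Same bounds as the patch: zlib compression level 0-9,
        // window bits 9-15, memory level 1-9.
        if (compressionLevel < 0 || compressionLevel > 9) {
            throw new IllegalArgumentException("compressionLevel: " + compressionLevel + " (expected: 0-9)");
        }
        if (windowBits < 9 || windowBits > 15) {
            throw new IllegalArgumentException("windowBits: " + windowBits + " (expected: 9-15)");
        }
        if (memLevel < 1 || memLevel > 9) {
            throw new IllegalArgumentException("memLevel: " + memLevel + " (expected: 1-9)");
        }
        this.compressionLevel = compressionLevel;
        this.windowBits = windowBits;
        this.memLevel = memLevel;
    }
}
```

Failing fast in the constructor keeps an encoder from ever existing in a half-configured state, which the deferred `build()` call used to guarantee.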
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DecoratingHttp2ConnectionDecoder.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DecoratingHttp2ConnectionDecoder.java
new file mode 100644
index 00000000000..4e056343cb9
--- /dev/null
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DecoratingHttp2ConnectionDecoder.java
@@ -0,0 +1,78 @@
+/*
+ * Copyright 2015 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License, version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package io.netty.handler.codec.http2;
+
+import static io.netty.util.internal.ObjectUtil.checkNotNull;
+
+import io.netty.buffer.ByteBuf;
+import io.netty.channel.ChannelHandlerContext;
+
+import java.util.List;
+
+/**
+ * Decorator around another {@link Http2ConnectionDecoder} instance.
+ */
+public class DecoratingHttp2ConnectionDecoder implements Http2ConnectionDecoder {
+ private final Http2ConnectionDecoder delegate;
+
+ public DecoratingHttp2ConnectionDecoder(Http2ConnectionDecoder delegate) {
+ this.delegate = checkNotNull(delegate, "delegate");
+ }
+
+ @Override
+ public void lifecycleManager(Http2LifecycleManager lifecycleManager) {
+ delegate.lifecycleManager(lifecycleManager);
+ }
+
+ @Override
+ public Http2Connection connection() {
+ return delegate.connection();
+ }
+
+ @Override
+ public Http2LocalFlowController flowController() {
+ return delegate.flowController();
+ }
+
+ @Override
+ public Http2FrameListener listener() {
+ return delegate.listener();
+ }
+
+ @Override
+ public void decodeFrame(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) throws Http2Exception {
+ delegate.decodeFrame(ctx, in, out);
+ }
+
+ @Override
+ public Http2Settings localSettings() {
+ return delegate.localSettings();
+ }
+
+ @Override
+ public void localSettings(Http2Settings settings) throws Http2Exception {
+ delegate.localSettings(settings);
+ }
+
+ @Override
+ public boolean prefaceReceived() {
+ return delegate.prefaceReceived();
+ }
+
+ @Override
+ public void close() {
+ delegate.close();
+ }
+}
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DecoratingHttp2ConnectionEncoder.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DecoratingHttp2ConnectionEncoder.java
new file mode 100644
index 00000000000..5c4e95a5637
--- /dev/null
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DecoratingHttp2ConnectionEncoder.java
@@ -0,0 +1,59 @@
+/*
+ * Copyright 2015 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License, version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package io.netty.handler.codec.http2;
+
+import static io.netty.util.internal.ObjectUtil.checkNotNull;
+
+/**
+ * A decorator around another {@link Http2ConnectionEncoder} instance.
+ */
+public class DecoratingHttp2ConnectionEncoder extends DecoratingHttp2FrameWriter implements Http2ConnectionEncoder {
+ private final Http2ConnectionEncoder delegate;
+
+ public DecoratingHttp2ConnectionEncoder(Http2ConnectionEncoder delegate) {
+ super(delegate);
+ this.delegate = checkNotNull(delegate, "delegate");
+ }
+
+ @Override
+ public void lifecycleManager(Http2LifecycleManager lifecycleManager) {
+ delegate.lifecycleManager(lifecycleManager);
+ }
+
+ @Override
+ public Http2Connection connection() {
+ return delegate.connection();
+ }
+
+ @Override
+ public Http2RemoteFlowController flowController() {
+ return delegate.flowController();
+ }
+
+ @Override
+ public Http2FrameWriter frameWriter() {
+ return delegate.frameWriter();
+ }
+
+ @Override
+ public Http2Settings pollSentSettings() {
+ return delegate.pollSentSettings();
+ }
+
+ @Override
+ public void remoteSettings(Http2Settings settings) throws Http2Exception {
+ delegate.remoteSettings(settings);
+ }
+}
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DecoratingHttp2FrameWriter.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DecoratingHttp2FrameWriter.java
new file mode 100644
index 00000000000..2d2f6505b19
--- /dev/null
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DecoratingHttp2FrameWriter.java
@@ -0,0 +1,114 @@
+/*
+ * Copyright 2015 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License, version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package io.netty.handler.codec.http2;
+
+import static io.netty.util.internal.ObjectUtil.checkNotNull;
+
+import io.netty.buffer.ByteBuf;
+import io.netty.channel.ChannelFuture;
+import io.netty.channel.ChannelHandlerContext;
+import io.netty.channel.ChannelPromise;
+
+/**
+ * Decorator around another {@link Http2FrameWriter} instance.
+ */
+public class DecoratingHttp2FrameWriter implements Http2FrameWriter {
+ private final Http2FrameWriter delegate;
+
+ public DecoratingHttp2FrameWriter(Http2FrameWriter delegate) {
+ this.delegate = checkNotNull(delegate, "delegate");
+ }
+
+ @Override
+ public ChannelFuture writeData(ChannelHandlerContext ctx, int streamId, ByteBuf data, int padding,
+ boolean endStream, ChannelPromise promise) {
+ return delegate.writeData(ctx, streamId, data, padding, endStream, promise);
+ }
+
+ @Override
+ public ChannelFuture writeHeaders(ChannelHandlerContext ctx, int streamId, Http2Headers headers, int padding,
+ boolean endStream, ChannelPromise promise) {
+ return delegate.writeHeaders(ctx, streamId, headers, padding, endStream, promise);
+ }
+
+ @Override
+ public ChannelFuture writeHeaders(ChannelHandlerContext ctx, int streamId, Http2Headers headers,
+ int streamDependency, short weight, boolean exclusive, int padding,
+ boolean endStream, ChannelPromise promise) {
+ return delegate
+ .writeHeaders(ctx, streamId, headers, streamDependency, weight, exclusive, padding, endStream, promise);
+ }
+
+ @Override
+ public ChannelFuture writePriority(ChannelHandlerContext ctx, int streamId, int streamDependency, short weight,
+ boolean exclusive, ChannelPromise promise) {
+ return delegate.writePriority(ctx, streamId, streamDependency, weight, exclusive, promise);
+ }
+
+ @Override
+ public ChannelFuture writeRstStream(ChannelHandlerContext ctx, int streamId, long errorCode,
+ ChannelPromise promise) {
+ return delegate.writeRstStream(ctx, streamId, errorCode, promise);
+ }
+
+ @Override
+ public ChannelFuture writeSettings(ChannelHandlerContext ctx, Http2Settings settings, ChannelPromise promise) {
+ return delegate.writeSettings(ctx, settings, promise);
+ }
+
+ @Override
+ public ChannelFuture writeSettingsAck(ChannelHandlerContext ctx, ChannelPromise promise) {
+ return delegate.writeSettingsAck(ctx, promise);
+ }
+
+ @Override
+ public ChannelFuture writePing(ChannelHandlerContext ctx, boolean ack, ByteBuf data, ChannelPromise promise) {
+ return delegate.writePing(ctx, ack, data, promise);
+ }
+
+ @Override
+ public ChannelFuture writePushPromise(ChannelHandlerContext ctx, int streamId, int promisedStreamId,
+ Http2Headers headers, int padding, ChannelPromise promise) {
+ return delegate.writePushPromise(ctx, streamId, promisedStreamId, headers, padding, promise);
+ }
+
+ @Override
+ public ChannelFuture writeGoAway(ChannelHandlerContext ctx, int lastStreamId, long errorCode, ByteBuf debugData,
+ ChannelPromise promise) {
+ return delegate.writeGoAway(ctx, lastStreamId, errorCode, debugData, promise);
+ }
+
+ @Override
+ public ChannelFuture writeWindowUpdate(ChannelHandlerContext ctx, int streamId, int windowSizeIncrement,
+ ChannelPromise promise) {
+ return delegate.writeWindowUpdate(ctx, streamId, windowSizeIncrement, promise);
+ }
+
+ @Override
+ public ChannelFuture writeFrame(ChannelHandlerContext ctx, byte frameType, int streamId, Http2Flags flags,
+ ByteBuf payload, ChannelPromise promise) {
+ return delegate.writeFrame(ctx, frameType, streamId, flags, payload, promise);
+ }
+
+ @Override
+ public Configuration configuration() {
+ return delegate.configuration();
+ }
+
+ @Override
+ public void close() {
+ delegate.close();
+ }
+}
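The three new `Decorating*` classes all follow the same shape: hold a non-null delegate and forward every method unchanged, so a subclass overrides only the calls it wants to intercept. Reduced to its essentials, with hypothetical interface and class names standing in for the Netty types:

```java
import java.util.Objects;

// Stands in for Http2FrameWriter.
interface FrameSink {
    String write(String frame);
}

// Forwarding base class, analogous to DecoratingHttp2FrameWriter:
// it adds no behaviour of its own beyond guarding against a null delegate.
class DecoratingFrameSink implements FrameSink {
    private final FrameSink delegate;

    DecoratingFrameSink(FrameSink delegate) {
        this.delegate = Objects.requireNonNull(delegate, "delegate");
    }

    @Override
    public String write(String frame) {
        return delegate.write(frame);
    }
}

// A concrete decorator overrides just the calls it cares about, the way
// CompressorHttp2ConnectionEncoder intercepts data/header writes.
class UpperCasingFrameSink extends DecoratingFrameSink {
    UpperCasingFrameSink(FrameSink delegate) {
        super(delegate);
    }

    @Override
    public String write(String frame) {
        return super.write(frame.toUpperCase());
    }
}
```

Because the base class already forwards everything, decorators stay source-compatible when new methods are added to the delegate interface with a forwarding default in the base.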
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java
index 33fbc1cf707..f71f5dcd926 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java
@@ -40,87 +40,40 @@
public class DefaultHttp2ConnectionDecoder implements Http2ConnectionDecoder {
private Http2FrameListener internalFrameListener = new PrefaceFrameListener();
private final Http2Connection connection;
- private final Http2LifecycleManager lifecycleManager;
+ private Http2LifecycleManager lifecycleManager;
private final Http2ConnectionEncoder encoder;
private final Http2FrameReader frameReader;
private final Http2FrameListener listener;
private final Http2PromisedRequestVerifier requestVerifier;
- /**
- * Builder for instances of {@link DefaultHttp2ConnectionDecoder}.
- */
- public static class Builder implements Http2ConnectionDecoder.Builder {
- private Http2Connection connection;
- private Http2LifecycleManager lifecycleManager;
- private Http2ConnectionEncoder encoder;
- private Http2FrameReader frameReader;
- private Http2FrameListener listener;
- private Http2PromisedRequestVerifier requestVerifier = ALWAYS_VERIFY;
-
- @Override
- public Builder connection(Http2Connection connection) {
- this.connection = connection;
- return this;
- }
-
- @Override
- public Builder lifecycleManager(Http2LifecycleManager lifecycleManager) {
- this.lifecycleManager = lifecycleManager;
- return this;
- }
-
- @Override
- public Http2LifecycleManager lifecycleManager() {
- return lifecycleManager;
- }
-
- @Override
- public Builder frameReader(Http2FrameReader frameReader) {
- this.frameReader = frameReader;
- return this;
- }
-
- @Override
- public Builder listener(Http2FrameListener listener) {
- this.listener = listener;
- return this;
- }
-
- @Override
- public Builder encoder(Http2ConnectionEncoder encoder) {
- this.encoder = encoder;
- return this;
- }
-
- @Override
- public Http2ConnectionDecoder.Builder requestVerifier(Http2PromisedRequestVerifier requestVerifier) {
- this.requestVerifier = requestVerifier;
- return this;
- }
-
- @Override
- public Http2ConnectionDecoder build() {
- return new DefaultHttp2ConnectionDecoder(this);
- }
- }
-
- public static Builder newBuilder() {
- return new Builder();
+ public DefaultHttp2ConnectionDecoder(Http2Connection connection,
+ Http2ConnectionEncoder encoder,
+ Http2FrameReader frameReader,
+ Http2FrameListener listener) {
+ this(connection, encoder, frameReader, listener, ALWAYS_VERIFY);
}
- protected DefaultHttp2ConnectionDecoder(Builder builder) {
- connection = checkNotNull(builder.connection, "connection");
- frameReader = checkNotNull(builder.frameReader, "frameReader");
- lifecycleManager = checkNotNull(builder.lifecycleManager, "lifecycleManager");
- encoder = checkNotNull(builder.encoder, "encoder");
- listener = checkNotNull(builder.listener, "listener");
- requestVerifier = checkNotNull(builder.requestVerifier, "requestVerifier");
+ public DefaultHttp2ConnectionDecoder(Http2Connection connection,
+ Http2ConnectionEncoder encoder,
+ Http2FrameReader frameReader,
+ Http2FrameListener listener,
+ Http2PromisedRequestVerifier requestVerifier) {
+ this.connection = checkNotNull(connection, "connection");
+ this.frameReader = checkNotNull(frameReader, "frameReader");
+ this.encoder = checkNotNull(encoder, "encoder");
+ this.listener = checkNotNull(listener, "listener");
+ this.requestVerifier = checkNotNull(requestVerifier, "requestVerifier");
if (connection.local().flowController() == null) {
connection.local().flowController(
new DefaultHttp2LocalFlowController(connection, encoder.frameWriter()));
}
}
+ @Override
+ public void lifecycleManager(Http2LifecycleManager lifecycleManager) {
+ this.lifecycleManager = checkNotNull(lifecycleManager, "lifecycleManager");
+ }
+
@Override
public Http2Connection connection() {
return connection;
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java
index 0e0bca4d808..3f908bcb2f7 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoder.java
@@ -35,63 +35,24 @@
public class DefaultHttp2ConnectionEncoder implements Http2ConnectionEncoder {
private final Http2FrameWriter frameWriter;
private final Http2Connection connection;
- private final Http2LifecycleManager lifecycleManager;
+ private Http2LifecycleManager lifecycleManager;
// We prefer ArrayDeque to LinkedList because later will produce more GC.
// This initial capacity is plenty for SETTINGS traffic.
private final ArrayDeque<Http2Settings> outstandingLocalSettingsQueue = new ArrayDeque<Http2Settings>(4);
- /**
- * Builder for new instances of {@link DefaultHttp2ConnectionEncoder}.
- */
- public static class Builder implements Http2ConnectionEncoder.Builder {
- protected Http2FrameWriter frameWriter;
- protected Http2Connection connection;
- protected Http2LifecycleManager lifecycleManager;
-
- @Override
- public Builder connection(
- Http2Connection connection) {
- this.connection = connection;
- return this;
- }
-
- @Override
- public Builder lifecycleManager(
- Http2LifecycleManager lifecycleManager) {
- this.lifecycleManager = lifecycleManager;
- return this;
- }
-
- @Override
- public Http2LifecycleManager lifecycleManager() {
- return lifecycleManager;
- }
-
- @Override
- public Builder frameWriter(Http2FrameWriter frameWriter) {
- this.frameWriter = frameWriter;
- return this;
- }
-
- @Override
- public Http2ConnectionEncoder build() {
- return new DefaultHttp2ConnectionEncoder(this);
- }
- }
-
- public static Builder newBuilder() {
- return new Builder();
- }
-
- protected DefaultHttp2ConnectionEncoder(Builder builder) {
- connection = checkNotNull(builder.connection, "connection");
- frameWriter = checkNotNull(builder.frameWriter, "frameWriter");
- lifecycleManager = checkNotNull(builder.lifecycleManager, "lifecycleManager");
+ public DefaultHttp2ConnectionEncoder(Http2Connection connection, Http2FrameWriter frameWriter) {
+ this.connection = checkNotNull(connection, "connection");
+ this.frameWriter = checkNotNull(frameWriter, "frameWriter");
if (connection.remote().flowController() == null) {
connection.remote().flowController(new DefaultHttp2RemoteFlowController(connection));
}
}
+ @Override
+ public void lifecycleManager(Http2LifecycleManager lifecycleManager) {
+ this.lifecycleManager = checkNotNull(lifecycleManager, "lifecycleManager");
+ }
+
@Override
public Http2FrameWriter frameWriter() {
return frameWriter;
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionDecoder.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionDecoder.java
index ee6604ae7d3..b2c68e79b03 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionDecoder.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionDecoder.java
@@ -29,49 +29,9 @@
public interface Http2ConnectionDecoder extends Closeable {
/**
- * Builder for new instances of {@link Http2ConnectionDecoder}.
+ * Sets the lifecycle manager. Must be called as part of initialization before the decoder is used.
*/
- interface Builder {
- /**
- * Sets the {@link Http2Connection} to be used when building the decoder.
- */
- Builder connection(Http2Connection connection);
-
- /**
- * Sets the {@link Http2LifecycleManager} to be used when building the decoder.
- */
- Builder lifecycleManager(Http2LifecycleManager lifecycleManager);
-
- /**
- * Gets the {@link Http2LifecycleManager} to be used when building the decoder.
- */
- Http2LifecycleManager lifecycleManager();
-
- /**
- * Sets the {@link Http2FrameReader} to be used when building the decoder.
- */
- Builder frameReader(Http2FrameReader frameReader);
-
- /**
- * Sets the {@link Http2FrameListener} to be used when building the decoder.
- */
- Builder listener(Http2FrameListener listener);
-
- /**
- * Sets the {@link Http2ConnectionEncoder} used when building the decoder.
- */
- Builder encoder(Http2ConnectionEncoder encoder);
-
- /**
- * Sets the {@link Http2PromisedRequestVerifier} used when building the decoder.
- */
- Builder requestVerifier(Http2PromisedRequestVerifier requestVerifier);
-
- /**
- * Creates a new decoder instance.
- */
- Http2ConnectionDecoder build();
- }
+ void lifecycleManager(Http2LifecycleManager lifecycleManager);
/**
* Provides direct access to the underlying connection.
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionEncoder.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionEncoder.java
index 6403fd1fa6b..edb1a1d1752 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionEncoder.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionEncoder.java
@@ -26,35 +26,9 @@
public interface Http2ConnectionEncoder extends Http2FrameWriter {
/**
- * Builder for new instances of {@link Http2ConnectionEncoder}.
+ * Sets the lifecycle manager. Must be called as part of initialization before the encoder is used.
*/
- interface Builder {
-
- /**
- * Sets the {@link Http2Connection} to be used when building the encoder.
- */
- Builder connection(Http2Connection connection);
-
- /**
- * Sets the {@link Http2LifecycleManager} to be used when building the encoder.
- */
- Builder lifecycleManager(Http2LifecycleManager lifecycleManager);
-
- /**
- * Gets the {@link Http2LifecycleManager} to be used when building the encoder.
- */
- Http2LifecycleManager lifecycleManager();
-
- /**
- * Sets the {@link Http2FrameWriter} to be used when building the encoder.
- */
- Builder frameWriter(Http2FrameWriter frameWriter);
-
- /**
- * Creates a new encoder instance.
- */
- Http2ConnectionEncoder build();
- }
+ void lifecycleManager(Http2LifecycleManager lifecycleManager);
/**
* Provides direct access to the underlying connection.
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java
index 0af27f0d410..46e6e01a303 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java
@@ -60,38 +60,19 @@ public Http2ConnectionHandler(Http2Connection connection, Http2FrameListener lis
}
public Http2ConnectionHandler(Http2Connection connection, Http2FrameReader frameReader,
- Http2FrameWriter frameWriter, Http2FrameListener listener) {
- this(DefaultHttp2ConnectionDecoder.newBuilder().connection(connection)
- .frameReader(frameReader).listener(listener),
- DefaultHttp2ConnectionEncoder.newBuilder().connection(connection)
- .frameWriter(frameWriter));
+ Http2FrameWriter frameWriter, Http2FrameListener listener) {
+ encoder = new DefaultHttp2ConnectionEncoder(connection, frameWriter);
+ decoder = new DefaultHttp2ConnectionDecoder(connection, encoder, frameReader, listener);
}
/**
- * Constructor for pre-configured encoder and decoder builders. Just sets the {@code this} as the
+ * Constructor for pre-configured encoder and decoder. Just sets {@code this} as the
* {@link Http2LifecycleManager} and builds them.
*/
- public Http2ConnectionHandler(Http2ConnectionDecoder.Builder decoderBuilder,
- Http2ConnectionEncoder.Builder encoderBuilder) {
- checkNotNull(decoderBuilder, "decoderBuilder");
- checkNotNull(encoderBuilder, "encoderBuilder");
-
- if (encoderBuilder.lifecycleManager() != decoderBuilder.lifecycleManager()) {
- throw new IllegalArgumentException("Encoder and Decoder must share a lifecycle manager");
- } else if (encoderBuilder.lifecycleManager() == null) {
- encoderBuilder.lifecycleManager(this);
- decoderBuilder.lifecycleManager(this);
- }
-
- // Build the encoder.
- encoder = checkNotNull(encoderBuilder.build(), "encoder");
-
- // Build the decoder.
- decoderBuilder.encoder(encoder);
- decoder = checkNotNull(decoderBuilder.build(), "decoder");
-
- // Verify that the encoder and decoder use the same connection.
- checkNotNull(encoder.connection(), "encoder.connection");
+ public Http2ConnectionHandler(Http2ConnectionDecoder decoder,
+ Http2ConnectionEncoder encoder) {
+ this.decoder = checkNotNull(decoder, "decoder");
+ this.encoder = checkNotNull(encoder, "encoder");
if (encoder.connection() != decoder.connection()) {
throw new IllegalArgumentException("Encoder and Decoder do not share the same connection object");
}
@@ -301,6 +282,9 @@ public void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) thro
@Override
public void handlerAdded(ChannelHandlerContext ctx) throws Exception {
+ // Initialize the encoder and decoder.
+ encoder.lifecycleManager(this);
+ decoder.lifecycleManager(this);
byteDecoder = new PrefaceDecoder(ctx);
}
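Because the handler is itself the `Http2LifecycleManager`, the encoder and decoder can no longer receive it at construction time; it is injected later, in `handlerAdded`. The resulting two-phase initialization can be sketched in miniature (all names hypothetical, standing in for the Netty types):

```java
import java.util.Objects;

// Stands in for Http2LifecycleManager.
interface LifecycleManager {
    void closeStream(int streamId);
}

// Analogous to DefaultHttp2ConnectionEncoder: constructed without a
// lifecycle manager, which must be injected before first use.
class Encoder {
    private LifecycleManager lifecycleManager;

    void lifecycleManager(LifecycleManager lifecycleManager) {
        this.lifecycleManager = Objects.requireNonNull(lifecycleManager, "lifecycleManager");
    }

    void onStreamError(int streamId) {
        // Would NPE if used before initialization, which is why the
        // handler wires the manager in before any frames can flow.
        lifecycleManager.closeStream(streamId);
    }
}

// Analogous to Http2ConnectionHandler.handlerAdded: the handler injects
// itself as the manager once it joins the pipeline.
class Handler implements LifecycleManager {
    final Encoder encoder = new Encoder();
    int closedStream = -1;

    void handlerAdded() {
        encoder.lifecycleManager(this);
    }

    @Override
    public void closeStream(int streamId) {
        closedStream = streamId;
    }
}
```

This breaks the circular dependency the old builders papered over: the handler needs a fully built encoder/decoder pair, while both need the handler as their lifecycle manager.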
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/HttpToHttp2ConnectionHandler.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/HttpToHttp2ConnectionHandler.java
index 1633f170b4a..ca3676617d8 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/HttpToHttp2ConnectionHandler.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/HttpToHttp2ConnectionHandler.java
@@ -40,9 +40,9 @@ public HttpToHttp2ConnectionHandler(Http2Connection connection, Http2FrameReader
super(connection, frameReader, frameWriter, listener);
}
- public HttpToHttp2ConnectionHandler(Http2ConnectionDecoder.Builder decoderBuilder,
- Http2ConnectionEncoder.Builder encoderBuilder) {
- super(decoderBuilder, encoderBuilder);
+ public HttpToHttp2ConnectionHandler(Http2ConnectionDecoder decoder,
+ Http2ConnectionEncoder encoder) {
+ super(decoder, encoder);
}
/**
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DataCompressionHttp2Test.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DataCompressionHttp2Test.java
index 66effe1cc3a..633ad61d1e5 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DataCompressionHttp2Test.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DataCompressionHttp2Test.java
@@ -216,7 +216,7 @@ public void run() {
});
awaitServer();
assertEquals(0, serverConnection.local().flowController().unconsumedBytes(stream));
- assertEquals(new StringBuilder(text1).append(text2).toString(),
+ assertEquals(text1 + text2,
serverOut.toString(CharsetUtil.UTF_8.name()));
} finally {
data1.release();
@@ -296,16 +296,13 @@ public Integer answer(InvocationOnMock in) throws Throwable {
@Override
protected void initChannel(Channel ch) throws Exception {
ChannelPipeline p = ch.pipeline();
- Http2FrameWriter writer = new DefaultHttp2FrameWriter();
- Http2ConnectionHandler connectionHandler =
- new Http2ConnectionHandler(new DefaultHttp2ConnectionDecoder.Builder()
- .connection(serverConnection)
- .frameReader(new DefaultHttp2FrameReader())
- .listener(
- new DelegatingDecompressorFrameListener(serverConnection,
- serverListener)),
- new CompressorHttp2ConnectionEncoder.Builder().connection(
- serverConnection).frameWriter(writer));
+ Http2ConnectionEncoder encoder = new CompressorHttp2ConnectionEncoder(
+ new DefaultHttp2ConnectionEncoder(serverConnection, new DefaultHttp2FrameWriter()));
+ Http2ConnectionDecoder decoder =
+ new DefaultHttp2ConnectionDecoder(serverConnection, encoder, new DefaultHttp2FrameReader(),
+ new DelegatingDecompressorFrameListener(serverConnection,
+ serverListener));
+ Http2ConnectionHandler connectionHandler = new Http2ConnectionHandler(decoder, encoder);
p.addLast(connectionHandler);
serverChannelLatch.countDown();
}
@@ -319,17 +316,14 @@ protected void initChannel(Channel ch) throws Exception {
ChannelPipeline p = ch.pipeline();
FrameCountDown clientFrameCountDown = new FrameCountDown(clientListener,
clientSettingsAckLatch, clientLatch);
- Http2FrameWriter writer = new DefaultHttp2FrameWriter();
- Http2ConnectionHandler connectionHandler =
- new Http2ConnectionHandler(new DefaultHttp2ConnectionDecoder.Builder()
- .connection(clientConnection)
- .frameReader(new DefaultHttp2FrameReader())
- .listener(
- new DelegatingDecompressorFrameListener(clientConnection,
- clientFrameCountDown)),
- new CompressorHttp2ConnectionEncoder.Builder().connection(
- clientConnection).frameWriter(writer));
- clientEncoder = connectionHandler.encoder();
+ clientEncoder = new CompressorHttp2ConnectionEncoder(
+ new DefaultHttp2ConnectionEncoder(clientConnection, new DefaultHttp2FrameWriter()));
+ Http2ConnectionDecoder decoder =
+ new DefaultHttp2ConnectionDecoder(clientConnection, clientEncoder,
+ new DefaultHttp2FrameReader(),
+ new DelegatingDecompressorFrameListener(clientConnection,
+ clientFrameCountDown));
+ Http2ConnectionHandler connectionHandler = new Http2ConnectionHandler(decoder, clientEncoder);
p.addLast(connectionHandler);
}
});
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoderTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoderTest.java
index 53a03ed7648..5056c31bac5 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoderTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoderTest.java
@@ -139,9 +139,8 @@ public void setup() throws Exception {
when(ctx.newPromise()).thenReturn(promise);
when(ctx.write(any())).thenReturn(future);
- decoder = DefaultHttp2ConnectionDecoder.newBuilder().connection(connection)
- .frameReader(reader).encoder(encoder)
- .listener(listener).lifecycleManager(lifecycleManager).build();
+ decoder = new DefaultHttp2ConnectionDecoder(connection, encoder, reader, listener);
+ decoder.lifecycleManager(lifecycleManager);
// Simulate receiving the initial settings from the remote endpoint.
decode().onSettingsRead(ctx, new Http2Settings());
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoderTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoderTest.java
index 5a212696052..3bcd70e0b93 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoderTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionEncoderTest.java
@@ -195,8 +195,8 @@ public Object answer(InvocationOnMock invocationOnMock) throws Throwable {
when(ctx.newPromise()).thenReturn(promise);
when(ctx.write(any())).thenReturn(future);
- encoder = DefaultHttp2ConnectionEncoder.newBuilder().connection(connection)
- .frameWriter(writer).lifecycleManager(lifecycleManager).build();
+ encoder = new DefaultHttp2ConnectionEncoder(connection, writer);
+ encoder.lifecycleManager(lifecycleManager);
}
@Test
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionHandlerTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionHandlerTest.java
index f216c7b89e1..a92d95dd34c 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionHandlerTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionHandlerTest.java
@@ -89,12 +89,6 @@ public class Http2ConnectionHandlerTest {
@Mock
private Http2Stream stream;
- @Mock
- private Http2ConnectionDecoder.Builder decoderBuilder;
-
- @Mock
- private Http2ConnectionEncoder.Builder encoderBuilder;
-
@Mock
private Http2ConnectionDecoder decoder;
@@ -110,8 +104,6 @@ public void setup() throws Exception {
promise = new DefaultChannelPromise(channel);
- when(encoderBuilder.build()).thenReturn(encoder);
- when(decoderBuilder.build()).thenReturn(decoder);
when(encoder.connection()).thenReturn(connection);
when(decoder.connection()).thenReturn(connection);
when(encoder.frameWriter()).thenReturn(frameWriter);
@@ -132,7 +124,7 @@ public void setup() throws Exception {
}
private Http2ConnectionHandler newHandler() throws Exception {
- Http2ConnectionHandler handler = new Http2ConnectionHandler(decoderBuilder, encoderBuilder);
+ Http2ConnectionHandler handler = new Http2ConnectionHandler(decoder, encoder);
handler.handlerAdded(ctx);
return handler;
}
diff --git a/microbench/src/test/java/io/netty/microbench/http2/Http2FrameWriterBenchmark.java b/microbench/src/test/java/io/netty/microbench/http2/Http2FrameWriterBenchmark.java
index 91c513346b0..9d8faa16f50 100644
--- a/microbench/src/test/java/io/netty/microbench/http2/Http2FrameWriterBenchmark.java
+++ b/microbench/src/test/java/io/netty/microbench/http2/Http2FrameWriterBenchmark.java
@@ -52,6 +52,8 @@
import io.netty.handler.codec.http2.DefaultHttp2FrameWriter;
import io.netty.handler.codec.http2.DefaultHttp2Headers;
import io.netty.handler.codec.http2.Http2Connection;
+import io.netty.handler.codec.http2.Http2ConnectionDecoder;
+import io.netty.handler.codec.http2.Http2ConnectionEncoder;
import io.netty.handler.codec.http2.Http2ConnectionHandler;
import io.netty.handler.codec.http2.Http2FrameAdapter;
import io.netty.handler.codec.http2.Http2FrameWriter;
@@ -264,11 +266,11 @@ protected void initChannel(Channel ch) throws Exception {
connection.local().flowController(localFlowController);
}
environment.writer(new DefaultHttp2FrameWriter());
- Http2ConnectionHandler connectionHandler = new Http2ConnectionHandler(
- new DefaultHttp2ConnectionDecoder.Builder().connection(connection)
- .frameReader(new DefaultHttp2FrameReader()).listener(new Http2FrameAdapter()),
- new DefaultHttp2ConnectionEncoder.Builder().connection(connection).frameWriter(
- environment.writer()));
+ Http2ConnectionEncoder encoder = new DefaultHttp2ConnectionEncoder(connection, environment.writer());
+ Http2ConnectionDecoder decoder =
+ new DefaultHttp2ConnectionDecoder(connection, encoder, new DefaultHttp2FrameReader(),
+ new Http2FrameAdapter());
+ Http2ConnectionHandler connectionHandler = new Http2ConnectionHandler(decoder, encoder);
p.addLast(connectionHandler);
environment.context(p.lastContext());
}
@@ -283,10 +285,11 @@ protected void initChannel(Channel ch) throws Exception {
private static Environment boostrapEmbeddedEnv(final ByteBufAllocator alloc) {
final EmbeddedEnvironment env = new EmbeddedEnvironment(new DefaultHttp2FrameWriter());
final Http2Connection connection = new DefaultHttp2Connection(false);
- final Http2ConnectionHandler connectionHandler = new Http2ConnectionHandler(
- new DefaultHttp2ConnectionDecoder.Builder().connection(connection)
- .frameReader(new DefaultHttp2FrameReader()).listener(new Http2FrameAdapter()),
- new DefaultHttp2ConnectionEncoder.Builder().connection(connection).frameWriter(env.writer()));
+ Http2ConnectionEncoder encoder = new DefaultHttp2ConnectionEncoder(connection, env.writer());
+ Http2ConnectionDecoder decoder =
+ new DefaultHttp2ConnectionDecoder(connection, encoder, new DefaultHttp2FrameReader(),
+ new Http2FrameAdapter());
+ Http2ConnectionHandler connectionHandler = new Http2ConnectionHandler(decoder, encoder);
env.context(new EmbeddedChannelWriteReleaseHandlerContext(alloc, connectionHandler) {
@Override
protected void handleException(Throwable t) {
| val | train | 2015-03-30T18:54:33 | 2015-03-24T18:31:40Z | nmittler | val |
netty/netty/3560_3563 | netty/netty | netty/netty/3560 | netty/netty/3563 | [
"timestamp(timedelta=57.0, similarity=0.9322689013154061)"
] | 2b3e0e675a62c8dfaaf735d4b57df17a45a46422 | ddece6dd2de094d33da3c1cd6daec016287402da | [] | [
"@nmittler why you changed this to public ? Was this by mistake ?\n",
"@nmittler - Correct me if I'm wrong but I'll take a shot...\n\n@normanmaurer - I think because before this change we only allowed access to the \"standard\" set of settings (only 6 of them) via the other public methods in this class (i.e. `max... | 2015-04-01T14:55:50Z | [
"defect"
] | HTTP/2 disallows non-standard settings | Currently the `Http2Settings` class disallows setting non-standard settings, which goes against the spec.
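The change this asks for is small: only the settings named by the spec need value checks, and any other key should pass through untouched. A self-contained sketch of that shape (key numbers follow RFC 7540 section 6.5.2; the class and messages here are illustrative, not netty's actual `Http2Settings` API):

```java
// Illustrative sketch (not the netty API itself): validate only standard HTTP/2
// setting keys and accept any non-standard key without validation, as the spec requires.
class SettingsValidator {
    static void validate(int key, long value) {
        switch (key) {
            case 2: // SETTINGS_ENABLE_PUSH
                if (value != 0L && value != 1L) {
                    throw new IllegalArgumentException("enablePush must be 0 or 1: " + value);
                }
                break;
            case 4: // SETTINGS_INITIAL_WINDOW_SIZE
                if (value < 0L) {
                    throw new IllegalArgumentException("initialWindowSize is negative: " + value);
                }
                if (value > 0x7FFFFFFFL) {
                    throw new IllegalArgumentException("initialWindowSize too large: " + value);
                }
                break;
            default:
                // Non-standard (or not modeled here) setting: no validation.
                break;
        }
    }
}
```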
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Settings.java",
"common/src/main/java/io/netty/util/collection/IntObjectHashMap.java"
] | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Settings.java",
"common/src/main/java/io/netty/util/collection/IntObjectHashMap.java"
] | [
"codec-http2/src/test/java/io/netty/handler/codec/http2/Http2SettingsTest.java"
] | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java
index 308ac81ea6c..88bdd859ba8 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java
@@ -69,6 +69,7 @@ public final class Http2CodecUtil {
public static final int SETTINGS_INITIAL_WINDOW_SIZE = 4;
public static final int SETTINGS_MAX_FRAME_SIZE = 5;
public static final int SETTINGS_MAX_HEADER_LIST_SIZE = 6;
+ public static final int NUM_STANDARD_SETTINGS = 6;
public static final int MAX_HEADER_TABLE_SIZE = Integer.MAX_VALUE; // Size limited by HPACK library
public static final long MAX_CONCURRENT_STREAMS = MAX_UNSIGNED_INT;
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Settings.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Settings.java
index 574145dc5e8..c2fd28787da 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Settings.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Settings.java
@@ -23,6 +23,7 @@
import static io.netty.handler.codec.http2.Http2CodecUtil.MIN_CONCURRENT_STREAMS;
import static io.netty.handler.codec.http2.Http2CodecUtil.MIN_INITIAL_WINDOW_SIZE;
import static io.netty.handler.codec.http2.Http2CodecUtil.MIN_HEADER_LIST_SIZE;
+import static io.netty.handler.codec.http2.Http2CodecUtil.NUM_STANDARD_SETTINGS;
import static io.netty.handler.codec.http2.Http2CodecUtil.SETTINGS_ENABLE_PUSH;
import static io.netty.handler.codec.http2.Http2CodecUtil.SETTINGS_HEADER_TABLE_SIZE;
import static io.netty.handler.codec.http2.Http2CodecUtil.SETTINGS_INITIAL_WINDOW_SIZE;
@@ -39,9 +40,14 @@
* methods for standard settings.
*/
public final class Http2Settings extends IntObjectHashMap<Long> {
+ /**
+ * Default capacity based on the number of standard settings from the HTTP/2 spec, adjusted so that adding all of
+ * the standard settings will not cause the map capacity to change.
+ */
+ private static final int DEFAULT_CAPACITY = (int) (NUM_STANDARD_SETTINGS / DEFAULT_LOAD_FACTOR) + 1;
public Http2Settings() {
- this(6 /* number of standard settings */);
+ this(DEFAULT_CAPACITY);
}
public Http2Settings(int initialCapacity, float loadFactor) {
@@ -53,9 +59,10 @@ public Http2Settings(int initialCapacity) {
}
/**
- * Overrides the superclass method to perform verification of standard HTTP/2 settings.
+ * Adds the given setting key/value pair. For standard settings defined by the HTTP/2 spec, performs
+ * validation on the values.
*
- * @throws IllegalArgumentException if verification of the setting fails.
+ * @throws IllegalArgumentException if verification for a standard HTTP/2 setting fails.
*/
@Override
public Long put(int key, Long value) {
@@ -176,7 +183,12 @@ public Http2Settings copyFrom(Http2Settings settings) {
return this;
}
- Integer getIntValue(int key) {
+ /**
+ * A helper method that returns {@link Long#intValue()} on the return of {@link #get(int)}, if present. Note that
+ * if the range of the value exceeds {@link Integer#MAX_VALUE}, the {@link #get(int)} method should
+ * be used instead to avoid truncation of the value.
+ */
+ public Integer getIntValue(int key) {
Long value = get(key);
if (value == null) {
return null;
@@ -220,7 +232,8 @@ private static void verifyStandardSetting(int key, Long value) {
}
break;
default:
- throw new IllegalArgumentException("key");
+ // Non-standard HTTP/2 setting - don't do validation.
+ break;
}
}
diff --git a/common/src/main/java/io/netty/util/collection/IntObjectHashMap.java b/common/src/main/java/io/netty/util/collection/IntObjectHashMap.java
index 0ae4a8be753..f01e3093da5 100644
--- a/common/src/main/java/io/netty/util/collection/IntObjectHashMap.java
+++ b/common/src/main/java/io/netty/util/collection/IntObjectHashMap.java
@@ -33,10 +33,10 @@
public class IntObjectHashMap<V> implements IntObjectMap<V>, Iterable<IntObjectMap.Entry<V>> {
/** Default initial capacity. Used if not specified in the constructor */
- private static final int DEFAULT_CAPACITY = 11;
+ public static final int DEFAULT_CAPACITY = 11;
/** Default load factor. Used if not specified in the constructor */
- private static final float DEFAULT_LOAD_FACTOR = 0.5f;
+ public static final float DEFAULT_LOAD_FACTOR = 0.5f;
/**
* Placeholder for null values, so we can use the actual null to mean available.
| diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2SettingsTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2SettingsTest.java
index 38e8d2ce7fd..06c25d7fa84 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2SettingsTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2SettingsTest.java
@@ -61,4 +61,10 @@ public void standardSettingsShouldBeSet() {
assertEquals(MAX_FRAME_SIZE_UPPER_BOUND, (int) settings.maxFrameSize());
assertEquals(4L, (long) settings.maxHeaderListSize());
}
+
+ @Test
+ public void nonStandardSettingsShouldBeSet() {
+ settings.put(0, 123L);
+ assertEquals(123L, (long) settings.get(0));
+ }
}
| train | train | 2015-04-01T01:25:00 | 2015-04-01T14:18:14Z | nmittler | val |
netty/netty/3573_3581 | netty/netty | netty/netty/3573 | netty/netty/3581 | [
"timestamp(timedelta=25.0, similarity=0.943774633710981)"
] | e40c27d9ed1678207c1ca4554e70ed5e2534d93f | cdd35c2efd112e130312535fcbf69eb341fc14d3 | [
"PR pending for this....\n"
] | [] | 2015-04-03T21:39:26Z | [
"defect"
] | HTTP/2 RST_STREAM frame for an IDLE stream should result in connection error | http://http2.github.io/http2-spec/index.html#RST_STREAM
> RST_STREAM frames MUST NOT be sent for a stream in the "idle" state. If a RST_STREAM frame identifying an idle stream is received, the recipient MUST treat this as a connection error (Section 5.4.1) of type PROTOCOL_ERROR.
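The behavior the quote mandates reduces to a plain state check; `StreamState` and the string results below are an illustrative stand-in for netty's `Http2Stream.State` and error handling, not the actual API:

```java
// Illustrative model of RST_STREAM handling by stream state.
enum StreamState { IDLE, OPEN, HALF_CLOSED, CLOSED }

class RstStreamPolicy {
    /** CONNECTION_ERROR for idle streams, IGNORE for closed ones, DELIVER otherwise. */
    static String onRstStream(StreamState state) {
        switch (state) {
            case IDLE:
                return "CONNECTION_ERROR"; // spec: PROTOCOL_ERROR on the whole connection
            case CLOSED:
                return "IGNORE"; // RST_STREAM must be ignored for closed streams
            default:
                return "DELIVER"; // forward the frame to the listener
        }
    }
}
```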
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java"
] | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java"
] | [
"codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoderTest.java"
] | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java
index f839e6d0390..714710a5722 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java
@@ -349,9 +349,13 @@ public void onPriorityRead(ChannelHandlerContext ctx, int streamId, int streamDe
@Override
public void onRstStreamRead(ChannelHandlerContext ctx, int streamId, long errorCode) throws Http2Exception {
Http2Stream stream = connection.requireStream(streamId);
- if (stream.state() == CLOSED) {
- // RstStream frames must be ignored for closed streams.
- return;
+ switch(stream.state()) {
+ case IDLE:
+ throw connectionError(PROTOCOL_ERROR, "RST_STREAM received for IDLE stream %d", streamId);
+ case CLOSED:
+ return; // RST_STREAM frames must be ignored for closed streams.
+ default:
+ break;
}
listener.onRstStreamRead(ctx, streamId, errorCode);
| diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoderTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoderTest.java
index 82319597145..c119dc94cd6 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoderTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoderTest.java
@@ -20,6 +20,7 @@
import static io.netty.handler.codec.http2.Http2CodecUtil.emptyPingBuf;
import static io.netty.handler.codec.http2.Http2Error.PROTOCOL_ERROR;
import static io.netty.handler.codec.http2.Http2Exception.connectionError;
+import static io.netty.handler.codec.http2.Http2Stream.State.IDLE;
import static io.netty.handler.codec.http2.Http2Stream.State.OPEN;
import static io.netty.handler.codec.http2.Http2Stream.State.RESERVED_REMOTE;
import static io.netty.util.CharsetUtil.UTF_8;
@@ -455,6 +456,14 @@ public void rstStreamReadShouldCloseStream() throws Exception {
verify(listener).onRstStreamRead(eq(ctx), eq(STREAM_ID), eq(PROTOCOL_ERROR.code()));
}
+ @Test(expected = Http2Exception.class)
+ public void rstStreamOnIdleStreamShouldThrow() throws Exception {
+ when(stream.state()).thenReturn(IDLE);
+ decode().onRstStreamRead(ctx, STREAM_ID, PROTOCOL_ERROR.code());
+ verify(lifecycleManager).closeStream(eq(stream), eq(future));
+ verify(listener, never()).onRstStreamRead(any(ChannelHandlerContext.class), anyInt(), anyLong());
+ }
+
@Test
public void pingReadWithAckShouldNotifylistener() throws Exception {
decode().onPingAckRead(ctx, emptyPingBuf());
| test | train | 2015-04-03T20:57:31 | 2015-04-03T16:11:15Z | Scottmitch | val |
netty/netty/3572_3583 | netty/netty | netty/netty/3572 | netty/netty/3583 | [
"timestamp(timedelta=20.0, similarity=0.9411251524949779)"
] | e40c27d9ed1678207c1ca4554e70ed5e2534d93f | af2352c9ac4b9ce52811dd47bcc435879d73e55e | [
"The behavior should be the same as the PRIORITY frame. We should create the stream. So we need to move the set priority above the notification.\n"
] | [] | 2015-04-03T21:44:04Z | [
"defect"
] | HTTP/2 HEADERS stream dependency stream state | The specification does not provide explicit direction as how to what state the `Stream Dependency` stream is expected to be in, and how implementations should react if it is not in the OPEN (or other "created" states). The decoder may have to be updated by:
1. Move the `setPriority` above notification of the listener.
2. Potentially check the stream state of the `Stream Dependency` stream before accepting the frame.
https://github.com/http2/http2-spec/issues/739
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java"
] | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java"
] | [
"codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoderTest.java"
] | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java
index f839e6d0390..06eeced4aa1 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoder.java
@@ -307,11 +307,18 @@ public void onHeadersRead(ChannelHandlerContext ctx, int streamId, Http2Headers
}
}
+ try {
+ // This call will create a stream for streamDependency if necessary.
+ // For this reason it must be done before notifying the listener.
+ stream.setPriority(streamDependency, weight, exclusive);
+ } catch (ClosedStreamCreationException ignored) {
+ // It is possible that either the stream for this frame or the parent stream is closed.
+ // In this case we should ignore the exception and allow the frame to be sent.
+ }
+
listener.onHeadersRead(ctx, streamId, headers,
streamDependency, weight, exclusive, padding, endOfStream);
- stream.setPriority(streamDependency, weight, exclusive);
-
// If the headers completes this stream, close it.
if (endOfStream) {
lifecycleManager.closeRemoteSide(stream, ctx.newSucceededFuture());
| diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoderTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoderTest.java
index 82319597145..0591363d8d5 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoderTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionDecoderTest.java
@@ -64,6 +64,7 @@
public class DefaultHttp2ConnectionDecoderTest {
private static final int STREAM_ID = 1;
private static final int PUSH_STREAM_ID = 2;
+ private static final int STREAM_DEPENDENCY_ID = 3;
private Http2ConnectionDecoder decoder;
@@ -344,6 +345,51 @@ public void headersReadForPromisedStreamShouldCloseStream() throws Exception {
eq(DEFAULT_PRIORITY_WEIGHT), eq(false), eq(0), eq(true));
}
+ @Test
+ public void headersDependencyNotCreatedShouldCreateAndSucceed() throws Exception {
+ final short weight = 1;
+ decode().onHeadersRead(ctx, STREAM_ID, EmptyHttp2Headers.INSTANCE, STREAM_DEPENDENCY_ID,
+ weight, true, 0, true);
+ verify(listener).onHeadersRead(eq(ctx), eq(STREAM_ID), eq(EmptyHttp2Headers.INSTANCE), eq(STREAM_DEPENDENCY_ID),
+ eq(weight), eq(true), eq(0), eq(true));
+ verify(stream).setPriority(eq(STREAM_DEPENDENCY_ID), eq(weight), eq(true));
+ verify(lifecycleManager).closeRemoteSide(eq(stream), any(ChannelFuture.class));
+ }
+
+ @Test
+ public void headersDependencyPreviouslyCreatedStreamShouldSucceed() throws Exception {
+ final short weight = 1;
+ doAnswer(new Answer<Http2Stream>() {
+ @Override
+ public Http2Stream answer(InvocationOnMock in) throws Throwable {
+ throw new ClosedStreamCreationException(Http2Error.INTERNAL_ERROR);
+ }
+ }).when(stream).setPriority(eq(STREAM_DEPENDENCY_ID), eq(weight), eq(true));
+ decode().onHeadersRead(ctx, STREAM_ID, EmptyHttp2Headers.INSTANCE, STREAM_DEPENDENCY_ID,
+ weight, true, 0, true);
+ verify(listener).onHeadersRead(eq(ctx), eq(STREAM_ID), eq(EmptyHttp2Headers.INSTANCE), eq(STREAM_DEPENDENCY_ID),
+ eq(weight), eq(true), eq(0), eq(true));
+ verify(stream).setPriority(eq(STREAM_DEPENDENCY_ID), eq(weight), eq(true));
+ verify(lifecycleManager).closeRemoteSide(eq(stream), any(ChannelFuture.class));
+ }
+
+ @Test(expected = RuntimeException.class)
+ public void headersDependencyInvalidStreamShouldFail() throws Exception {
+ final short weight = 1;
+ doAnswer(new Answer<Http2Stream>() {
+ @Override
+ public Http2Stream answer(InvocationOnMock in) throws Throwable {
+ throw new RuntimeException("Fake Exception");
+ }
+ }).when(stream).setPriority(eq(STREAM_DEPENDENCY_ID), eq(weight), eq(true));
+ decode().onHeadersRead(ctx, STREAM_ID, EmptyHttp2Headers.INSTANCE, STREAM_DEPENDENCY_ID,
+ weight, true, 0, true);
+ verify(listener, never()).onHeadersRead(any(ChannelHandlerContext.class), anyInt(), any(Http2Headers.class),
+ anyInt(), anyShort(), anyBoolean(), anyInt(), anyBoolean());
+ verify(stream).setPriority(eq(STREAM_DEPENDENCY_ID), eq(weight), eq(true));
+ verify(lifecycleManager, never()).closeRemoteSide(eq(stream), any(ChannelFuture.class));
+ }
+
@Test
public void pushPromiseReadAfterGoAwayShouldBeIgnored() throws Exception {
when(connection.goAwaySent()).thenReturn(true);
@@ -372,9 +418,9 @@ public void priorityReadAfterGoAwayShouldBeIgnored() throws Exception {
public void priorityReadShouldSucceed() throws Exception {
when(connection.stream(STREAM_ID)).thenReturn(null);
when(connection.requireStream(STREAM_ID)).thenReturn(null);
- decode().onPriorityRead(ctx, STREAM_ID, 0, (short) 255, true);
- verify(stream).setPriority(eq(0), eq((short) 255), eq(true));
- verify(listener).onPriorityRead(eq(ctx), eq(STREAM_ID), eq(0), eq((short) 255), eq(true));
+ decode().onPriorityRead(ctx, STREAM_ID, STREAM_DEPENDENCY_ID, (short) 255, true);
+ verify(stream).setPriority(eq(STREAM_DEPENDENCY_ID), eq((short) 255), eq(true));
+ verify(listener).onPriorityRead(eq(ctx), eq(STREAM_ID), eq(STREAM_DEPENDENCY_ID), eq((short) 255), eq(true));
verify(remote).createStream(STREAM_ID);
verify(stream, never()).open(anyBoolean());
}
@@ -390,11 +436,11 @@ public Http2Stream answer(InvocationOnMock in) throws Throwable {
when(connection.stream(STREAM_ID)).thenReturn(null);
when(connection.requireStream(STREAM_ID)).thenReturn(null);
// Just return the stream object as the connection stream to ensure the dependent stream "exists"
- when(connection.stream(0)).thenReturn(stream);
- when(connection.requireStream(0)).thenReturn(stream);
- decode().onPriorityRead(ctx, STREAM_ID, 0, (short) 255, true);
+ when(connection.stream(STREAM_DEPENDENCY_ID)).thenReturn(stream);
+ when(connection.requireStream(STREAM_DEPENDENCY_ID)).thenReturn(stream);
+ decode().onPriorityRead(ctx, STREAM_ID, STREAM_DEPENDENCY_ID, (short) 255, true);
verify(stream, never()).setPriority(anyInt(), anyShort(), anyBoolean());
- verify(listener).onPriorityRead(eq(ctx), eq(STREAM_ID), eq(0), eq((short) 255), eq(true));
+ verify(listener).onPriorityRead(eq(ctx), eq(STREAM_ID), eq(STREAM_DEPENDENCY_ID), eq((short) 255), eq(true));
verify(remote).createStream(STREAM_ID);
}
@@ -405,12 +451,12 @@ public void priorityReadOnPreviouslyParentExistingStreamShouldSucceed() throws E
public Http2Stream answer(InvocationOnMock in) throws Throwable {
throw new ClosedStreamCreationException(Http2Error.INTERNAL_ERROR);
}
- }).when(stream).setPriority(eq(0), eq((short) 255), eq(true));
+ }).when(stream).setPriority(eq(STREAM_DEPENDENCY_ID), eq((short) 255), eq(true));
when(connection.stream(STREAM_ID)).thenReturn(stream);
when(connection.requireStream(STREAM_ID)).thenReturn(stream);
- decode().onPriorityRead(ctx, STREAM_ID, 0, (short) 255, true);
- verify(stream).setPriority(eq(0), eq((short) 255), eq(true));
- verify(listener).onPriorityRead(eq(ctx), eq(STREAM_ID), eq(0), eq((short) 255), eq(true));
+ decode().onPriorityRead(ctx, STREAM_ID, STREAM_DEPENDENCY_ID, (short) 255, true);
+ verify(stream).setPriority(eq(STREAM_DEPENDENCY_ID), eq((short) 255), eq(true));
+ verify(listener).onPriorityRead(eq(ctx), eq(STREAM_ID), eq(STREAM_DEPENDENCY_ID), eq((short) 255), eq(true));
}
@Test
| train | train | 2015-04-03T20:57:31 | 2015-04-03T16:06:40Z | Scottmitch | val |
netty/netty/3448_3606 | netty/netty | netty/netty/3448 | netty/netty/3606 | [
"timestamp(timedelta=154.0, similarity=0.953447460037735)"
] | d556269810264870bbcb6c1aaa9a94a9d78c494c | dc36a08da874a184d82378bc4876b63f1d5bf9ff | [
"@Scottmitch WDYT?\n",
"We ignore these frames for non-existent streams if they are for stream Ids below the lastGoodStreamId\n",
"That seems like a reasonable default policy. \n\nI guess the question here is whether we need to continue to allow for a pluggable policy or just bake this behavior in. \n",
"@jpi... | [
"Kind of strange to say \"Default implementation\" as this is not an implementation of an interface.\n",
"@nmittler I think this comment is not right. Streams should be removed even if they have children, but the children should be reprioritised and get a new parent?\n",
"@buchgr I believe the comment accuratel... | 2015-04-10T16:36:22Z | [
"improvement"
] | Get rid of Http2StreamRemovalPolicy | The `DefaultHttp2StreamRemovalPolicy` by default waits 5 seconds after a stream has been closed before removing it from the connection. The reason for this is to support the following case from the HTTP2 spec (From [Section 5.1](https://tools.ietf.org/html/draft-ietf-httpbis-http2-17#section-5.1)):
```
"WINDOW_UPDATE or RST_STREAM frames can be received in this state
for a short period after a DATA or HEADERS frame containing an
END_STREAM flag is sent. Until the remote peer receives and
processes RST_STREAM or the frame bearing the END_STREAM flag, it
might send frames of these types. Endpoints MUST ignore
WINDOW_UPDATE or RST_STREAM frames received in this state, though
endpoints MAY choose to treat frames that arrive a significant
time after sending END_STREAM as a connection error
of type PROTOCOL_ERROR."
```
These cases don't really seem to warrant the extra complexity that DefaultHttp2StreamRemovalPolicy introduces. It would be better to just always ignore WINDOW_UPDATE and RST_STREAM for non-existent streams. This way we can always remove the streams as soon as they're closed to avoid potential memory leaks.
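The simpler policy argued for here can be sketched as a filter: remove streams as soon as they close, then silently ignore WINDOW_UPDATE/RST_STREAM for unknown stream ids that could only refer to already-removed streams. This is a hedged model; the id-tracking field and method names are assumptions for illustration, not netty's API:

```java
// Illustrative sketch of the proposed policy. Instead of delaying stream removal for
// five seconds, remove closed streams immediately and silently ignore WINDOW_UPDATE or
// RST_STREAM frames for unknown ids at or below the highest id ever created.
class FrameFilter {
    private int highestCreatedStreamId;

    void onStreamCreated(int streamId) {
        if (streamId > highestCreatedStreamId) {
            highestCreatedStreamId = streamId;
        }
    }

    /** True if a WINDOW_UPDATE/RST_STREAM for this stream should be silently ignored. */
    boolean shouldIgnore(int streamId, boolean streamExists) {
        return !streamExists && streamId <= highestCreatedStreamId;
    }
}
```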
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2StreamRemovalPolicy.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java",
"codec-http2/src/main/java/io/netty/handler/codec/h... | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java"
] | [] | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
index a83436b6493..7c2fe828643 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
@@ -19,7 +19,6 @@
import static io.netty.handler.codec.http2.Http2CodecUtil.DEFAULT_PRIORITY_WEIGHT;
import static io.netty.handler.codec.http2.Http2CodecUtil.MAX_WEIGHT;
import static io.netty.handler.codec.http2.Http2CodecUtil.MIN_WEIGHT;
-import static io.netty.handler.codec.http2.Http2CodecUtil.immediateRemovalPolicy;
import static io.netty.handler.codec.http2.Http2Error.PROTOCOL_ERROR;
import static io.netty.handler.codec.http2.Http2Error.REFUSED_STREAM;
import static io.netty.handler.codec.http2.Http2Exception.closedStreamError;
@@ -35,7 +34,6 @@
import static io.netty.util.internal.ObjectUtil.checkNotNull;
import io.netty.buffer.ByteBuf;
-import io.netty.handler.codec.http2.Http2StreamRemovalPolicy.Action;
import io.netty.util.collection.IntObjectHashMap;
import io.netty.util.collection.IntObjectMap;
import io.netty.util.collection.PrimitiveCollections;
@@ -70,37 +68,17 @@ public class DefaultHttp2Connection implements Http2Connection {
final List<Listener> listeners = new ArrayList<Listener>(4);
final ActiveStreams activeStreams;
- /**
- * Creates a connection with an immediate stream removal policy.
- *
- * @param server
- * whether or not this end-point is the server-side of the HTTP/2 connection.
- */
- public DefaultHttp2Connection(boolean server) {
- this(server, immediateRemovalPolicy());
- }
-
/**
* Creates a new connection with the given settings.
*
* @param server
* whether or not this end-point is the server-side of the HTTP/2 connection.
- * @param removalPolicy
- * the policy to be used for removal of closed stream.
*/
- public DefaultHttp2Connection(boolean server, Http2StreamRemovalPolicy removalPolicy) {
- activeStreams = new ActiveStreams(listeners, checkNotNull(removalPolicy, "removalPolicy"));
+ public DefaultHttp2Connection(boolean server) {
+ activeStreams = new ActiveStreams(listeners);
localEndpoint = new DefaultEndpoint<Http2LocalFlowController>(server);
remoteEndpoint = new DefaultEndpoint<Http2RemoteFlowController>(!server);
- // Tell the removal policy how to remove a stream from this connection.
- removalPolicy.setAction(new Action() {
- @Override
- public void removeStream(Http2Stream stream) {
- DefaultHttp2Connection.this.removeStream((DefaultStream) stream);
- }
- });
-
// Add the connection stream to the map.
streamMap.put(connectionStream.id(), connectionStream);
}
@@ -997,31 +975,32 @@ private boolean isLocal() {
}
/**
- * Default implementation of the {@link ActiveStreams} class.
+ * Allows events which would modify the collection of active streams to be queued while iterating via {@link
+ * #forEachActiveStream(Http2StreamVisitor)}.
*/
- private static final class ActiveStreams {
+ interface Event {
/**
- * Allows events which would modify {@link #streams} to be queued while iterating over {@link #streams}.
+ * Trigger the original intention of this event. Expect to modify the active streams list.
+ * <p/>
+ * If a {@link RuntimeException} object is thrown it will be logged and <strong>not propagated</strong>.
+ * Throwing from this method is not supported and is considered a programming error.
*/
- interface Event {
- /**
- * Trigger the original intention of this event. Expect to modify {@link #streams}.
- * <p>
- * If a {@link RuntimeException} object is thrown it will be logged and <strong>not propagated</strong>.
- * Throwing from this method is not supported and is considered a programming error.
- */
- void process();
- }
+ void process();
+ }
+
+ /**
+ * Manages the list of currently active streams. Queues any {@link Event}s that would modify the list of
+ * active streams in order to prevent modification while iterating.
+ */
+ private final class ActiveStreams {
private final List<Listener> listeners;
- private final Http2StreamRemovalPolicy removalPolicy;
private final Queue<Event> pendingEvents = new ArrayDeque<Event>(4);
private final Set<Http2Stream> streams = new LinkedHashSet<Http2Stream>();
private int pendingIterations;
- public ActiveStreams(List<Listener> listeners, Http2StreamRemovalPolicy removalPolicy) {
+ public ActiveStreams(List<Listener> listeners) {
this.listeners = listeners;
- this.removalPolicy = removalPolicy;
}
public int size() {
@@ -1110,8 +1089,7 @@ void removeFromActiveStreams(DefaultStream stream) {
}
}
} finally {
- // Mark this stream for removal.
- removalPolicy.markForRemoval(stream);
+ removeStream(stream);
}
}
}
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2StreamRemovalPolicy.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2StreamRemovalPolicy.java
deleted file mode 100644
index 550bf4c27a3..00000000000
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2StreamRemovalPolicy.java
+++ /dev/null
@@ -1,106 +0,0 @@
-/*
- * Copyright 2014 The Netty Project
- *
- * The Netty Project licenses this file to you under the Apache License, version 2.0 (the
- * "License"); you may not use this file except in compliance with the License. You may obtain a
- * copy of the License at:
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software distributed under the License
- * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
- * or implied. See the License for the specific language governing permissions and limitations under
- * the License.
- */
-package io.netty.handler.codec.http2;
-
-import static java.util.concurrent.TimeUnit.NANOSECONDS;
-import static java.util.concurrent.TimeUnit.SECONDS;
-import io.netty.channel.ChannelHandlerAdapter;
-import io.netty.channel.ChannelHandlerContext;
-
-import java.util.ArrayDeque;
-import java.util.Queue;
-import java.util.concurrent.ScheduledFuture;
-
-/**
- * A {@link Http2StreamRemovalPolicy} that periodically runs garbage collection on streams that have
- * been marked for removal.
- */
-public class DefaultHttp2StreamRemovalPolicy extends ChannelHandlerAdapter implements
- Http2StreamRemovalPolicy, Runnable {
-
- /**
- * The interval (in ns) at which the removed priority garbage collector runs.
- */
- private static final long GARBAGE_COLLECTION_INTERVAL = SECONDS.toNanos(5);
-
- private final Queue<Garbage> garbage = new ArrayDeque<Garbage>();
- private ScheduledFuture<?> timerFuture;
- private Action action;
-
- @Override
- public void handlerAdded(ChannelHandlerContext ctx) throws Exception {
- // Schedule the periodic timer for performing the policy check.
- timerFuture = ctx.channel().eventLoop().scheduleWithFixedDelay(this,
- GARBAGE_COLLECTION_INTERVAL,
- GARBAGE_COLLECTION_INTERVAL,
- NANOSECONDS);
- }
-
- @Override
- public void handlerRemoved(ChannelHandlerContext ctx) throws Exception {
- // Cancel the periodic timer.
- if (timerFuture != null) {
- timerFuture.cancel(false);
- timerFuture = null;
- }
- }
-
- @Override
- public void setAction(Action action) {
- this.action = action;
- }
-
- @Override
- public void markForRemoval(Http2Stream stream) {
- garbage.add(new Garbage(stream));
- }
-
- /**
- * Runs garbage collection of any streams marked for removal >
- * {@link #GARBAGE_COLLECTION_INTERVAL} nanoseconds ago.
- */
- @Override
- public void run() {
- if (garbage.isEmpty() || action == null) {
- return;
- }
-
- long time = System.nanoTime();
- for (;;) {
- Garbage next = garbage.peek();
- if (next == null) {
- break;
- }
- if (time - next.removalTime > GARBAGE_COLLECTION_INTERVAL) {
- garbage.remove();
- action.removeStream(next.stream);
- } else {
- break;
- }
- }
- }
-
- /**
- * Wrapper around a stream and its removal time.
- */
- private static final class Garbage {
- private final long removalTime = System.nanoTime();
- private final Http2Stream stream;
-
- Garbage(Http2Stream stream) {
- this.stream = stream;
- }
- }
-}
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java
index 19b27f59382..87735a53268 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2CodecUtil.java
@@ -16,14 +16,13 @@
package io.netty.handler.codec.http2;
import static io.netty.util.CharsetUtil.UTF_8;
-import static io.netty.util.internal.ObjectUtil.checkNotNull;
+
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelPromise;
import io.netty.channel.DefaultChannelPromise;
-import io.netty.handler.codec.http2.Http2StreamRemovalPolicy.Action;
import io.netty.util.concurrent.EventExecutor;
/**
@@ -113,30 +112,6 @@ public static ByteBuf emptyPingBuf() {
return Unpooled.wrappedBuffer(EMPTY_PING);
}
- /**
- * Returns a simple {@link Http2StreamRemovalPolicy} that immediately calls back the
- * {@link Action} when a stream is marked for removal.
- */
- public static Http2StreamRemovalPolicy immediateRemovalPolicy() {
- return new Http2StreamRemovalPolicy() {
- private Action action;
-
- @Override
- public void setAction(Action action) {
- this.action = checkNotNull(action, "action");
- }
-
- @Override
- public void markForRemoval(Http2Stream stream) {
- if (action == null) {
- throw new IllegalStateException(
- "Action must be called before removing streams.");
- }
- action.removeStream(stream);
- }
- };
- }
-
/**
* Iteratively looks through the causaility chain for the given exception and returns the first
* {@link Http2Exception} or {@code null} if none.
@@ -321,15 +296,5 @@ public boolean trySuccess(Void result) {
}
}
- /**
- * Fails the given promise with the cause and then re-throws the cause.
- */
- public static <T extends Throwable> T failAndThrow(ChannelPromise promise, T cause) throws T {
- if (!promise.isDone()) {
- promise.setFailure(cause);
- }
- throw cause;
- }
-
private Http2CodecUtil() { }
}
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2StreamRemovalPolicy.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2StreamRemovalPolicy.java
deleted file mode 100644
index d8e57dd7f79..00000000000
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2StreamRemovalPolicy.java
+++ /dev/null
@@ -1,42 +0,0 @@
-/*
- * Copyright 2014 The Netty Project
- *
- * The Netty Project licenses this file to you under the Apache License, version 2.0 (the
- * "License"); you may not use this file except in compliance with the License. You may obtain a
- * copy of the License at:
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software distributed under the License
- * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
- * or implied. See the License for the specific language governing permissions and limitations under
- * the License.
- */
-package io.netty.handler.codec.http2;
-
-/**
- * A policy for determining when it is appropriate to remove streams from an HTTP/2 stream registry.
- */
-public interface Http2StreamRemovalPolicy {
-
- /**
- * Performs the action of removing the stream.
- */
- interface Action {
- /**
- * Removes the stream from the registry.
- */
- void removeStream(Http2Stream stream);
- }
-
- /**
- * Sets the removal action.
- */
- void setAction(Action action);
-
- /**
- * Marks the given stream for removal. When this policy has determined that the given stream
- * should be removed, it will call back the {@link Action}.
- */
- void markForRemoval(Http2Stream stream);
-}
| null | train | train | 2015-04-13T20:00:15 | 2015-02-24T21:30:53Z | nmittler | val |
netty/netty/3501_3615 | netty/netty | netty/netty/3501 | netty/netty/3615 | [
"timestamp(timedelta=18.0, similarity=0.8650718977788289)"
] | 4d02c3a040d47e240fd9574bb6dd48b15b9b7779 | ef9e98143e3f1c84d3bd6d52c117c86eca99da1b | [
"+1. Thanks for reporting!\n",
"@nmittler - I'll take this one. As part of the resolution to https://github.com/netty/netty/issues/3550 I needed to do additional iteration over methods which use listeners and so I had to do something about this issue. My approach was to catch `RuntimeExceptions` and only throw ... | [
"Just noticed this dependency. @nmittler - Is there no other way? :)\n",
"Nooooo! Let's just use `AtomicReference` :-)\n",
"+1 for `AtomicReference` :)\n",
"Done :)\n",
"Fixed in this PR for simplicity\n"
] | 2015-04-10T23:12:08Z | [
"defect"
] | Investigate exception handling WRT Http2Connection.Listener notifications | We should take a closer look at how we notify the listeners of connection events and consider the proper course of action if a listener callback throws.
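The gold patch below resolves this by wrapping each listener callback in a `try`/`catch` so that one misbehaving listener cannot prevent the rest from being notified. A minimal, self-contained sketch of that pattern — the `StreamListener`/`ListenerNotifier` names here are hypothetical stand-ins, not Netty's actual API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for Http2Connection.Listener notification logic.
interface StreamListener {
    void onStreamAdded(int streamId);
}

public class ListenerNotifier {
    private final List<StreamListener> listeners = new ArrayList<StreamListener>();

    public void addListener(StreamListener l) {
        listeners.add(l);
    }

    // Notify all listeners; a RuntimeException from one listener is logged
    // and swallowed so it cannot prevent later listeners from being notified.
    public void notifyStreamAdded(int streamId) {
        for (int i = 0; i < listeners.size(); ++i) {
            try {
                listeners.get(i).onStreamAdded(streamId);
            } catch (RuntimeException e) {
                System.err.println("Listener threw: " + e.getMessage());
            }
        }
    }

    public static void main(String[] args) {
        ListenerNotifier notifier = new ListenerNotifier();
        final int[] called = new int[1];
        notifier.addListener(new StreamListener() {
            @Override
            public void onStreamAdded(int streamId) {
                throw new RuntimeException("boom");
            }
        });
        notifier.addListener(new StreamListener() {
            @Override
            public void onStreamAdded(int streamId) {
                called[0]++;
            }
        });
        notifier.notifyStreamAdded(3);
        System.out.println("second listener called " + called[0] + " time(s)");
    }
}
```

Running the sketch shows the second listener is still invoked exactly once even though the first throws.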
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Connection.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java"
] | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Connection.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java"
] | [
"codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionTest.java"
] | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
index 6b9b7059832..a83436b6493 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
@@ -40,6 +40,8 @@
import io.netty.util.collection.IntObjectMap;
import io.netty.util.collection.PrimitiveCollections;
import io.netty.util.internal.PlatformDependent;
+import io.netty.util.internal.logging.InternalLogger;
+import io.netty.util.internal.logging.InternalLoggerFactory;
import java.util.ArrayDeque;
import java.util.ArrayList;
@@ -55,6 +57,7 @@
* Simple implementation of {@link Http2Connection}.
*/
public class DefaultHttp2Connection implements Http2Connection {
+ private static final InternalLogger logger = InternalLoggerFactory.getInstance(DefaultHttp2Connection.class);
// Fields accessed by inner classes
final IntObjectMap<Http2Stream> streamMap = new IntObjectHashMap<Http2Stream>();
final ConnectionStream connectionStream = new ConnectionStream();
@@ -164,8 +167,12 @@ public boolean goAwayReceived() {
@Override
public void goAwayReceived(final int lastKnownStream, long errorCode, ByteBuf debugData) {
localEndpoint.lastKnownStream(lastKnownStream);
- for (Listener listener : listeners) {
- listener.onGoAwayReceived(lastKnownStream, errorCode, debugData);
+ for (int i = 0; i < listeners.size(); ++i) {
+ try {
+ listeners.get(i).onGoAwayReceived(lastKnownStream, errorCode, debugData);
+ } catch (RuntimeException e) {
+ logger.error("Caught RuntimeException from listener onGoAwayReceived.", e);
+ }
}
try {
@@ -191,8 +198,12 @@ public boolean goAwaySent() {
@Override
public void goAwaySent(final int lastKnownStream, long errorCode, ByteBuf debugData) {
remoteEndpoint.lastKnownStream(lastKnownStream);
- for (Listener listener : listeners) {
- listener.onGoAwaySent(lastKnownStream, errorCode, debugData);
+ for (int i = 0; i < listeners.size(); ++i) {
+ try {
+ listeners.get(i).onGoAwaySent(lastKnownStream, errorCode, debugData);
+ } catch (RuntimeException e) {
+ logger.error("Caught RuntimeException from listener onGoAwaySent.", e);
+ }
}
try {
@@ -226,7 +237,11 @@ void removeStream(DefaultStream stream) {
streamMap.remove(stream.id());
for (int i = 0; i < listeners.size(); i++) {
- listeners.get(i).onStreamRemoved(stream);
+ try {
+ listeners.get(i).onStreamRemoved(stream);
+ } catch (RuntimeException e) {
+ logger.error("Caught RuntimeException from listener onStreamRemoved.", e);
+ }
}
}
}
@@ -498,7 +513,11 @@ private boolean isPrioritizable() {
private void notifyHalfClosed(Http2Stream stream) {
for (int i = 0; i < listeners.size(); i++) {
- listeners.get(i).onStreamHalfClosed(stream);
+ try {
+ listeners.get(i).onStreamHalfClosed(stream);
+ } catch (RuntimeException e) {
+ logger.error("Caught RuntimeException from listener onStreamHalfClosed.", e);
+ }
}
}
@@ -535,7 +554,11 @@ final void weight(short weight) {
final short oldWeight = this.weight;
this.weight = weight;
for (int i = 0; i < listeners.size(); i++) {
- listeners.get(i).onWeightChanged(this, oldWeight);
+ try {
+ listeners.get(i).onWeightChanged(this, oldWeight);
+ } catch (RuntimeException e) {
+ logger.error("Caught RuntimeException from listener onWeightChanged.", e);
+ }
}
}
}
@@ -703,7 +726,11 @@ private static final class ParentChangedEvent {
* @param l The listener to notify
*/
public void notifyListener(Listener l) {
- l.onPriorityTreeParentChanged(stream, oldParent);
+ try {
+ l.onPriorityTreeParentChanged(stream, oldParent);
+ } catch (RuntimeException e) {
+ logger.error("Caught RuntimeException from listener onPriorityTreeParentChanged.", e);
+ }
}
}
@@ -722,7 +749,11 @@ private void notifyParentChanged(List<ParentChangedEvent> events) {
private void notifyParentChanging(Http2Stream stream, Http2Stream newParent) {
for (int i = 0; i < listeners.size(); i++) {
- listeners.get(i).onPriorityTreeParentChanging(stream, newParent);
+ try {
+ listeners.get(i).onPriorityTreeParentChanging(stream, newParent);
+ } catch (RuntimeException e) {
+ logger.error("Caught RuntimeException from listener onPriorityTreeParentChanging.", e);
+ }
}
}
@@ -869,7 +900,11 @@ private void addStream(DefaultStream stream) {
// Notify the listeners of the event.
for (int i = 0; i < listeners.size(); i++) {
- listeners.get(i).onStreamAdded(stream);
+ try {
+ listeners.get(i).onStreamAdded(stream);
+ } catch (RuntimeException e) {
+ logger.error("Caught RuntimeException from listener onStreamAdded.", e);
+ }
}
notifyParentChanged(events);
@@ -971,6 +1006,9 @@ private static final class ActiveStreams {
interface Event {
/**
* Trigger the original intention of this event. Expect to modify {@link #streams}.
+ * <p>
+ * If a {@link RuntimeException} object is thrown it will be logged and <strong>not propagated</strong>.
+ * Throwing from this method is not supported and is considered a programming error.
*/
void process();
}
@@ -1033,7 +1071,11 @@ public Http2Stream forEachActiveStream(Http2StreamVisitor visitor) throws Http2E
if (event == null) {
break;
}
- event.process();
+ try {
+ event.process();
+ } catch (RuntimeException e) {
+ logger.error("Caught RuntimeException while processing pending ActiveStreams$Event.", e);
+ }
}
}
}
@@ -1045,7 +1087,11 @@ void addToActiveStreams(DefaultStream stream) {
stream.createdBy().numActiveStreams++;
for (int i = 0; i < listeners.size(); i++) {
- listeners.get(i).onStreamActive(stream);
+ try {
+ listeners.get(i).onStreamActive(stream);
+ } catch (RuntimeException e) {
+ logger.error("Caught RuntimeException from listener onStreamActive.", e);
+ }
}
}
}
@@ -1057,7 +1103,11 @@ void removeFromActiveStreams(DefaultStream stream) {
stream.createdBy().numActiveStreams--;
for (int i = 0; i < listeners.size(); i++) {
- listeners.get(i).onStreamClosed(stream);
+ try {
+ listeners.get(i).onStreamClosed(stream);
+ } catch (RuntimeException e) {
+ logger.error("Caught RuntimeException from listener onStreamClosed.", e);
+ }
}
} finally {
// Mark this stream for removal.
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Connection.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Connection.java
index 80bd87379ec..07d6cf6fd6a 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Connection.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2Connection.java
@@ -21,7 +21,6 @@
* Manager for the state of an HTTP/2 connection with the remote end-point.
*/
public interface Http2Connection {
-
/**
* Listener for life-cycle events for streams in this connection.
*/
@@ -29,23 +28,35 @@ interface Listener {
/**
* Notifies the listener that the given stream was added to the connection. This stream may
* not yet be active (i.e. {@code OPEN} or {@code HALF CLOSED}).
+ * <p>
+ * If a {@link RuntimeException} is thrown it will be logged and <strong>not propagated</strong>.
+ * Throwing from this method is not supported and is considered a programming error.
*/
void onStreamAdded(Http2Stream stream);
/**
* Notifies the listener that the given stream was made active (i.e. {@code OPEN} or {@code HALF CLOSED}).
+ * <p>
+ * If a {@link RuntimeException} is thrown it will be logged and <strong>not propagated</strong>.
+ * Throwing from this method is not supported and is considered a programming error.
*/
void onStreamActive(Http2Stream stream);
/**
* Notifies the listener that the given stream is now {@code HALF CLOSED}. The stream can be
* inspected to determine which side is {@code CLOSED}.
+ * <p>
+ * If a {@link RuntimeException} is thrown it will be logged and <strong>not propagated</strong>.
+ * Throwing from this method is not supported and is considered a programming error.
*/
void onStreamHalfClosed(Http2Stream stream);
/**
* Notifies the listener that the given stream is now {@code CLOSED} in both directions and will no longer
* be accessible via {@link #forEachActiveStream(Http2StreamVisitor)}.
+ * <p>
+ * If a {@link RuntimeException} is thrown it will be logged and <strong>not propagated</strong>.
+ * Throwing from this method is not supported and is considered a programming error.
*/
void onStreamClosed(Http2Stream stream);
@@ -53,6 +64,9 @@ interface Listener {
* Notifies the listener that the given stream has now been removed from the connection and
* will no longer be returned via {@link Http2Connection#stream(int)}. The connection may
* maintain inactive streams for some time before removing them.
+ * <p>
+ * If a {@link RuntimeException} is thrown it will be logged and <strong>not propagated</strong>.
+ * Throwing from this method is not supported and is considered a programming error.
*/
void onStreamRemoved(Http2Stream stream);
@@ -61,6 +75,9 @@ interface Listener {
* in a top down order relative to the priority tree. This method will also be invoked after all tree
* structure changes have been made and the tree is in steady state relative to the priority change
* which caused the tree structure to change.
+ * <p>
+ * If a {@link RuntimeException} is thrown it will be logged and <strong>not propagated</strong>.
+ * Throwing from this method is not supported and is considered a programming error.
* @param stream The stream which had a parent change (new parent and children will be steady state)
* @param oldParent The old parent which {@code stream} used to be a child of (may be {@code null})
*/
@@ -70,13 +87,19 @@ interface Listener {
* Notifies the listener that a parent dependency is about to change
* This is called while the tree is being restructured and so the tree
* structure is not necessarily steady state.
+ * <p>
+ * If a {@link RuntimeException} is thrown it will be logged and <strong>not propagated</strong>.
+ * Throwing from this method is not supported and is considered a programming error.
* @param stream The stream which the parent is about to change to {@code newParent}
* @param newParent The stream which will be the parent of {@code stream}
*/
void onPriorityTreeParentChanging(Http2Stream stream, Http2Stream newParent);
/**
- * Notifies the listener that the weight has changed for {@code stream}
+ * Notifies the listener that the weight has changed for {@code stream}.
+ * <p>
+ * If a {@link RuntimeException} is thrown it will be logged and <strong>not propagated</strong>.
+ * Throwing from this method is not supported and is considered a programming error.
* @param stream The stream which the weight has changed
* @param oldWeight The old weight for {@code stream}
*/
@@ -84,7 +107,9 @@ interface Listener {
/**
* Called when a {@code GOAWAY} frame was sent for the connection.
- *
+ * <p>
+ * If a {@link RuntimeException} is thrown it will be logged and <strong>not propagated</strong>.
+ * Throwing from this method is not supported and is considered a programming error.
* @param lastStreamId the last known stream of the remote endpoint.
* @param errorCode the error code, if abnormal closure.
* @param debugData application-defined debug data.
@@ -97,7 +122,9 @@ interface Listener {
* but is added here in order to simplify application logic for handling {@code GOAWAY} in a uniform way. An
* application should generally not handle both events, but if it does this method is called second, after
* notifying the {@link Http2FrameListener}.
- *
+ * <p>
+ * If a {@link RuntimeException} is thrown it will be logged and <strong>not propagated</strong>.
+ * Throwing from this method is not supported and is considered a programming error.
* @param lastStreamId the last known stream of the remote endpoint.
* @param errorCode the error code, if abnormal closure.
* @param debugData application-defined debug data.
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java
index 6ea08cd72b3..51e63e60420 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java
@@ -53,11 +53,11 @@
*/
public class Http2ConnectionHandler extends ByteToMessageDecoder implements Http2LifecycleManager,
ChannelOutboundHandler {
+ private static final InternalLogger logger = InternalLoggerFactory.getInstance(Http2ConnectionHandler.class);
private final Http2ConnectionDecoder decoder;
private final Http2ConnectionEncoder encoder;
private ChannelFutureListener closeListener;
private BaseDecoder byteDecoder;
- private static final InternalLogger logger = InternalLoggerFactory.getInstance(Http2ConnectionHandler.class);
public Http2ConnectionHandler(boolean server, Http2FrameListener listener) {
this(new DefaultHttp2Connection(server), listener);
| diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionTest.java
index e0447b3e19e..e145b8eb42c 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionTest.java
@@ -21,28 +21,35 @@
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertNull;
import static org.junit.Assert.assertSame;
+import static org.junit.Assert.assertTrue;
import static org.mockito.Matchers.any;
+import static org.mockito.Matchers.anyInt;
+import static org.mockito.Matchers.anyLong;
import static org.mockito.Matchers.anyShort;
import static org.mockito.Matchers.eq;
import static org.mockito.Matchers.isNull;
+import static org.mockito.Mockito.doAnswer;
import static org.mockito.Mockito.never;
import static org.mockito.Mockito.reset;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;
-
+import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.handler.codec.http2.Http2Connection.Endpoint;
import io.netty.handler.codec.http2.Http2Stream.State;
import io.netty.util.internal.PlatformDependent;
+
+import java.util.Arrays;
+import java.util.List;
+import java.util.concurrent.atomic.AtomicReference;
+
import org.junit.Before;
import org.junit.Test;
import org.mockito.ArgumentCaptor;
import org.mockito.Mock;
import org.mockito.MockitoAnnotations;
-
-import javax.xml.ws.Holder;
-import java.util.Arrays;
-import java.util.List;
+import org.mockito.invocation.InvocationOnMock;
+import org.mockito.stubbing.Answer;
/**
* Tests for {@link DefaultHttp2Connection}.
@@ -55,6 +62,9 @@ public class DefaultHttp2ConnectionTest {
@Mock
private Http2Connection.Listener clientListener;
+ @Mock
+ private Http2Connection.Listener clientListener2;
+
@Before
public void setup() {
MockitoAnnotations.initMocks(this);
@@ -848,6 +858,155 @@ public void circularDependencyWithExclusiveShouldRestructureTree() throws Except
assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
}
+ /**
+ * We force {@link #clientListener} methods to all throw a {@link RuntimeException} and verify the following:
+ * <ol>
+ * <li>all listener methods are called for both {@link #clientListener} and {@link #clientListener2}</li>
+ * <li>{@link #clientListener2} is notified after {@link #clientListener}</li>
+ * <li>{@link #clientListener2} methods are all still called despite {@link #clientListener}'s
+ * method throwing a {@link RuntimeException}</li>
+ * </ol>
+ */
+ @Test
+ public void listenerThrowShouldNotPreventOtherListenersFromBeingNotified() throws Http2Exception {
+ final boolean[] calledArray = new boolean[128];
+ // The following setup will ensure that clienListener throws exceptions, and marks a value in an array
+ // such that clientListener2 will verify that is is set or fail the test.
+ int methodIndex = 0;
+ doAnswer(new ListenerExceptionThrower(calledArray, methodIndex))
+ .when(clientListener).onStreamAdded(any(Http2Stream.class));
+ doAnswer(new ListenerVerifyCallAnswer(calledArray, methodIndex++))
+ .when(clientListener2).onStreamAdded(any(Http2Stream.class));
+
+ doAnswer(new ListenerExceptionThrower(calledArray, methodIndex))
+ .when(clientListener).onStreamActive(any(Http2Stream.class));
+ doAnswer(new ListenerVerifyCallAnswer(calledArray, methodIndex++))
+ .when(clientListener2).onStreamActive(any(Http2Stream.class));
+
+ doAnswer(new ListenerExceptionThrower(calledArray, methodIndex))
+ .when(clientListener).onStreamHalfClosed(any(Http2Stream.class));
+ doAnswer(new ListenerVerifyCallAnswer(calledArray, methodIndex++))
+ .when(clientListener2).onStreamHalfClosed(any(Http2Stream.class));
+
+ doAnswer(new ListenerExceptionThrower(calledArray, methodIndex))
+ .when(clientListener).onStreamClosed(any(Http2Stream.class));
+ doAnswer(new ListenerVerifyCallAnswer(calledArray, methodIndex++))
+ .when(clientListener2).onStreamClosed(any(Http2Stream.class));
+
+ doAnswer(new ListenerExceptionThrower(calledArray, methodIndex))
+ .when(clientListener).onStreamRemoved(any(Http2Stream.class));
+ doAnswer(new ListenerVerifyCallAnswer(calledArray, methodIndex++))
+ .when(clientListener2).onStreamRemoved(any(Http2Stream.class));
+
+ doAnswer(new ListenerExceptionThrower(calledArray, methodIndex))
+ .when(clientListener).onPriorityTreeParentChanged(any(Http2Stream.class), any(Http2Stream.class));
+ doAnswer(new ListenerVerifyCallAnswer(calledArray, methodIndex++))
+ .when(clientListener2).onPriorityTreeParentChanged(any(Http2Stream.class), any(Http2Stream.class));
+
+ doAnswer(new ListenerExceptionThrower(calledArray, methodIndex))
+ .when(clientListener).onPriorityTreeParentChanging(any(Http2Stream.class), any(Http2Stream.class));
+ doAnswer(new ListenerVerifyCallAnswer(calledArray, methodIndex++))
+ .when(clientListener2).onPriorityTreeParentChanging(any(Http2Stream.class), any(Http2Stream.class));
+
+ doAnswer(new ListenerExceptionThrower(calledArray, methodIndex))
+ .when(clientListener).onWeightChanged(any(Http2Stream.class), anyShort());
+ doAnswer(new ListenerVerifyCallAnswer(calledArray, methodIndex++))
+ .when(clientListener2).onWeightChanged(any(Http2Stream.class), anyShort());
+
+ doAnswer(new ListenerExceptionThrower(calledArray, methodIndex))
+ .when(clientListener).onGoAwaySent(anyInt(), anyLong(), any(ByteBuf.class));
+ doAnswer(new ListenerVerifyCallAnswer(calledArray, methodIndex++))
+ .when(clientListener2).onGoAwaySent(anyInt(), anyLong(), any(ByteBuf.class));
+
+ doAnswer(new ListenerExceptionThrower(calledArray, methodIndex))
+ .when(clientListener).onGoAwayReceived(anyInt(), anyLong(), any(ByteBuf.class));
+ doAnswer(new ListenerVerifyCallAnswer(calledArray, methodIndex++))
+ .when(clientListener2).onGoAwayReceived(anyInt(), anyLong(), any(ByteBuf.class));
+
+ doAnswer(new ListenerExceptionThrower(calledArray, methodIndex))
+ .when(clientListener).onStreamAdded(any(Http2Stream.class));
+ doAnswer(new ListenerVerifyCallAnswer(calledArray, methodIndex++))
+ .when(clientListener2).onStreamAdded(any(Http2Stream.class));
+
+ // Now we add clienListener2 and exercise all listener functionality
+ try {
+ client.addListener(clientListener2);
+ Http2Stream stream = client.local().createStream(3);
+ verify(clientListener).onStreamAdded(any(Http2Stream.class));
+ verify(clientListener2).onStreamAdded(any(Http2Stream.class));
+
+ stream.open(false);
+ verify(clientListener).onStreamActive(any(Http2Stream.class));
+ verify(clientListener2).onStreamActive(any(Http2Stream.class));
+
+ stream.setPriority(0, (short) (stream.weight() + 1), true);
+ verify(clientListener).onWeightChanged(any(Http2Stream.class), anyShort());
+ verify(clientListener2).onWeightChanged(any(Http2Stream.class), anyShort());
+ verify(clientListener).onPriorityTreeParentChanged(any(Http2Stream.class),
+ any(Http2Stream.class));
+ verify(clientListener2).onPriorityTreeParentChanged(any(Http2Stream.class),
+ any(Http2Stream.class));
+ verify(clientListener).onPriorityTreeParentChanging(any(Http2Stream.class),
+ any(Http2Stream.class));
+ verify(clientListener2).onPriorityTreeParentChanging(any(Http2Stream.class),
+ any(Http2Stream.class));
+
+ stream.closeLocalSide();
+ verify(clientListener).onStreamHalfClosed(any(Http2Stream.class));
+ verify(clientListener2).onStreamHalfClosed(any(Http2Stream.class));
+
+ stream.close();
+ verify(clientListener).onStreamClosed(any(Http2Stream.class));
+ verify(clientListener2).onStreamClosed(any(Http2Stream.class));
+ verify(clientListener).onStreamRemoved(any(Http2Stream.class));
+ verify(clientListener2).onStreamRemoved(any(Http2Stream.class));
+
+ client.goAwaySent(client.connectionStream().id(), Http2Error.INTERNAL_ERROR.code(), Unpooled.EMPTY_BUFFER);
+ verify(clientListener).onGoAwaySent(anyInt(), anyLong(), any(ByteBuf.class));
+ verify(clientListener2).onGoAwaySent(anyInt(), anyLong(), any(ByteBuf.class));
+
+ client.goAwayReceived(client.connectionStream().id(),
+ Http2Error.INTERNAL_ERROR.code(), Unpooled.EMPTY_BUFFER);
+ verify(clientListener).onGoAwayReceived(anyInt(), anyLong(), any(ByteBuf.class));
+ verify(clientListener2).onGoAwayReceived(anyInt(), anyLong(), any(ByteBuf.class));
+ } finally {
+ client.removeListener(clientListener2);
+ }
+ }
+
+ private static final class ListenerExceptionThrower implements Answer<Void> {
+ private static final RuntimeException FAKE_EXCEPTION = new RuntimeException("Fake Exception");
+ private final boolean[] array;
+ private final int index;
+
+ public ListenerExceptionThrower(boolean[] array, int index) {
+ this.array = array;
+ this.index = index;
+ }
+
+ @Override
+ public Void answer(InvocationOnMock invocation) throws Throwable {
+ array[index] = true;
+ throw FAKE_EXCEPTION;
+ }
+ }
+
+ private static final class ListenerVerifyCallAnswer implements Answer<Void> {
+ private final boolean[] array;
+ private final int index;
+
+ public ListenerVerifyCallAnswer(boolean[] array, int index) {
+ this.array = array;
+ this.index = index;
+ }
+
+ @Override
+ public Void answer(InvocationOnMock invocation) throws Throwable {
+ assertTrue(array[index]);
+ return null;
+ }
+ }
+
private void verifyParentChanging(List<Http2Stream> expectedArg1, List<Http2Stream> expectedArg2) {
assertSame(expectedArg1.size(), expectedArg2.size());
ArgumentCaptor<Http2Stream> arg1Captor = ArgumentCaptor.forClass(Http2Stream.class);
@@ -912,18 +1071,18 @@ private void verifyParentChanged(Http2Stream stream, Http2Stream oldParent) {
}
private Http2Stream child(Http2Stream parent, final int id) {
try {
- final Holder<Http2Stream> streamHolder = new Holder<Http2Stream>();
+ final AtomicReference<Http2Stream> streamReference = new AtomicReference<Http2Stream>();
parent.forEachChild(new Http2StreamVisitor() {
@Override
public boolean visit(Http2Stream stream) throws Http2Exception {
if (stream.id() == id) {
- streamHolder.value = stream;
+ streamReference.set(stream);
return false;
}
return true;
}
});
- return streamHolder.value;
+ return streamReference.get();
} catch (Http2Exception e) {
PlatformDependent.throwException(e);
return null;
| test | train | 2015-04-13T09:02:10 | 2015-03-16T16:19:42Z | nmittler | val |
netty/netty/3594_3618 | netty/netty | netty/netty/3594 | netty/netty/3618 | [
"timestamp(timedelta=17.0, similarity=0.9976609011293613)"
] | e36c1436b80399175fad55d09848c9f29da2174e | dc1b86639cfcc1915609a2e12168b5358b964923 | [
"@nmittler @buchgr - FYI...I have a fix for this. Standby.\n",
"https://github.com/netty/netty/issues/3618 fixed this\n"
] | [
"maybe just call this `retain`?\n",
"Doesn't `removeStream` notify the listeners that the stream was removed?\n",
"if `oldParent != null`, isn't it an error condition if removal of the child from `oldParent` is null?\n",
"Isn't it an error condition if the `put` != null? Maybe we should throw if that's the ca... | 2015-04-10T23:21:37Z | [
"defect"
] | HTTP/2 Priority Tree circular link | There is an issue with the HTTP/2 priority tree algorithm where circular links can be created. The issue arises when an exclusive dependency event, described as `stream B should be an exclusive dependency of stream A`, is requested while stream `B` is already a child of stream `A`: in that case we add `B` to `B`'s own `children` map and create a circular link in the priority tree. This leads to an infinite recursive loop and a `StackOverflowError`.
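As a rough illustration of the fix (the actual change retains the child during removal, per the `streamToRetain` comment visible in the patch below), here is a self-contained sketch — the `Stream`/`takeChildExclusive` names are hypothetical, not Netty's real classes — showing the guard that keeps `B` out of its own `children` map:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified sketch (not Netty's actual classes) of the exclusive-dependency
// re-parenting step. When stream B becomes an *exclusive* child of stream A,
// all of A's current children must be moved under B — but if B is already a
// child of A, B must be excluded from that move, or B ends up in its own
// children map, creating the circular link described above.
public class ExclusiveDependency {
    static final class Stream {
        final int id;
        final Map<Integer, Stream> children = new HashMap<Integer, Stream>();
        Stream(int id) { this.id = id; }
    }

    static void takeChildExclusive(Stream parent, Stream child) {
        // Collect parent's existing children, then make `child` the sole child.
        List<Stream> moved = new ArrayList<Stream>(parent.children.values());
        parent.children.clear();
        parent.children.put(child.id, child);
        for (Stream s : moved) {
            if (s != child) {          // the guard that prevents the self-link
                child.children.put(s.id, s);
            }
        }
    }

    public static void main(String[] args) {
        Stream a = new Stream(1);
        Stream b = new Stream(3);
        a.children.put(b.id, b);       // B is already a child of A
        takeChildExclusive(a, b);      // request: B exclusive dependency of A
        System.out.println("b depends on itself: " + b.children.containsKey(b.id));
    }
}
```

Without the `s != child` guard, `B` would be re-inserted under itself and any recursive tree walk would overflow the stack.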
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java"
] | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java"
] | [
"codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionTest.java"
] | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
index 7c2fe828643..6ddf71ee78e 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
@@ -32,12 +32,12 @@
import static io.netty.handler.codec.http2.Http2Stream.State.RESERVED_LOCAL;
import static io.netty.handler.codec.http2.Http2Stream.State.RESERVED_REMOTE;
import static io.netty.util.internal.ObjectUtil.checkNotNull;
-
import io.netty.buffer.ByteBuf;
import io.netty.util.collection.IntObjectHashMap;
import io.netty.util.collection.IntObjectMap;
import io.netty.util.collection.PrimitiveCollections;
import io.netty.util.internal.PlatformDependent;
+import io.netty.util.internal.SystemPropertyUtil;
import io.netty.util.internal.logging.InternalLogger;
import io.netty.util.internal.logging.InternalLoggerFactory;
@@ -61,6 +61,15 @@ public class DefaultHttp2Connection implements Http2Connection {
final ConnectionStream connectionStream = new ConnectionStream();
final DefaultEndpoint<Http2LocalFlowController> localEndpoint;
final DefaultEndpoint<Http2RemoteFlowController> remoteEndpoint;
+ /**
+ * The initial size of the children map is chosen to be conservative on initial memory allocations under
+ * the assumption that most streams will have a small number of children. This choice may be
+ * sub-optimal if when children are present there are many children (i.e. a web page which has many
+ * dependencies to load).
+ */
+ private static final int INITIAL_CHILDREN_MAP_SIZE =
+ Math.max(1, SystemPropertyUtil.getInt("io.netty.http2.childrenMapSize", 4));
+
/**
* We chose a {@link List} over a {@link Set} to avoid allocating an {@link Iterator} objects when iterating over
* the listeners.
@@ -453,7 +462,7 @@ private void incrementPrioritizableForTree(int amt, Http2Stream oldParent) {
* Instead use {@link #incrementPrioritizableForTree(int, Http2Stream)}.
*/
private void incrementPrioritizableForTree0(int amt, Http2Stream oldParent) {
- assert amt > 0;
+ assert amt > 0 && Integer.MAX_VALUE - amt >= prioritizableForTree;
prioritizableForTree += amt;
if (parent != null && parent != oldParent) {
parent.incrementPrioritizableForTree0(amt, oldParent);
@@ -475,7 +484,7 @@ private void decrementPrioritizableForTree(int amt) {
* Direct calls to this method are discouraged. Instead use {@link #decrementPrioritizableForTree(int)}.
*/
private void decrementPrioritizableForTree0(int amt) {
- assert amt > 0;
+ assert amt > 0 && prioritizableForTree >= amt;
prioritizableForTree -= amt;
if (parent != null) {
parent.decrementPrioritizableForTree0(amt);
@@ -501,10 +510,14 @@ private void notifyHalfClosed(Http2Stream stream) {
private void initChildrenIfEmpty() {
if (children == PrimitiveCollections.<DefaultStream>emptyIntObjectMap()) {
- children = new IntObjectHashMap<DefaultStream>(4);
+ initChildren();
}
}
+ private void initChildren() {
+ children = new IntObjectHashMap<DefaultStream>(INITIAL_CHILDREN_MAP_SIZE);
+ }
+
@Override
public final boolean remoteSideOpen() {
return state == HALF_CLOSED_LOCAL || state == OPEN || state == RESERVED_REMOTE;
@@ -541,11 +554,26 @@ final void weight(short weight) {
}
}
- final IntObjectMap<DefaultStream> removeAllChildren() {
- totalChildWeights = 0;
- prioritizableForTree = isPrioritizable() ? 1 : 0;
+ /**
+ * Remove all children with the exception of {@code streamToRetain}.
+ * This method is intended to be used to support an exclusive priority dependency operation.
+ * @return The map of children prior to this operation, excluding {@code streamToRetain} if present.
+ */
+ private IntObjectMap<DefaultStream> retain(DefaultStream streamToRetain) {
+ streamToRetain = children.remove(streamToRetain.id());
IntObjectMap<DefaultStream> prevChildren = children;
- children = PrimitiveCollections.emptyIntObjectMap();
+ // This map should be re-initialized in anticipation for the 1 exclusive child which will be added.
+ // It will either be added directly in this method, or after this method is called...but it will be added.
+ initChildren();
+ if (streamToRetain == null) {
+ totalChildWeights = 0;
+ prioritizableForTree = isPrioritizable() ? 1 : 0;
+ } else {
+ totalChildWeights = streamToRetain.weight();
+ // prioritizableForTree does not change because it is assumed all children node will still be
+ // descendants through an exclusive priority tree operation.
+ children.put(streamToRetain.id(), streamToRetain);
+ }
return prevChildren;
}
@@ -555,33 +583,44 @@ final IntObjectMap<DefaultStream> removeAllChildren() {
*/
final void takeChild(DefaultStream child, boolean exclusive, List<ParentChangedEvent> events) {
DefaultStream oldParent = child.parent();
- events.add(new ParentChangedEvent(child, oldParent));
- notifyParentChanging(child, this);
- child.parent = this;
- if (exclusive && !children.isEmpty()) {
- // If it was requested that this child be the exclusive dependency of this node,
- // move any previous children to the child node, becoming grand children of this node.
- for (DefaultStream grandchild : removeAllChildren().values()) {
- child.takeChild(grandchild, false, events);
+ if (oldParent != this) {
+ events.add(new ParentChangedEvent(child, oldParent));
+ notifyParentChanging(child, this);
+ child.parent = this;
+ // We need the removal operation to happen first so the prioritizableForTree for the old parent to root
+ // path is updated with the correct child.prioritizableForTree() value. Note that the removal operation
+ // may not be successful and may return null. This is because when an exclusive dependency is processed
+ // the children are removed in a previous recursive call but the child's parent link is updated here.
+ if (oldParent != null && oldParent.children.remove(child.id()) != null) {
+ oldParent.totalChildWeights -= child.weight();
+ if (!child.isDescendantOf(oldParent)) {
+ oldParent.decrementPrioritizableForTree(child.prioritizableForTree());
+ if (oldParent.prioritizableForTree() == 0) {
+ // There are a few risks with immediately removing nodes from the priority tree:
+ // 1. We are removing nodes while we are potentially shifting the tree. There are no
+ // concrete cases known but is risky because it could invalidate the data structure.
+ // 2. We are notifying listeners of the removal while the tree is in flux. Currently the
+ // codec listeners make no assumptions about priority tree structure when being notified.
+ removeStream(oldParent);
+ }
+ }
}
- }
- // Lazily initialize the children to save object allocations.
- initChildrenIfEmpty();
+ // Lazily initialize the children to save object allocations.
+ initChildrenIfEmpty();
- if (children.put(child.id(), child) == null) {
+ final Http2Stream oldChild = children.put(child.id(), child);
+ assert oldChild == null : "A stream with the same stream ID was already in the child map.";
totalChildWeights += child.weight();
incrementPrioritizableForTree(child.prioritizableForTree(), oldParent);
}
- if (oldParent != null && oldParent.children.remove(child.id()) != null) {
- oldParent.totalChildWeights -= child.weight();
- if (!child.isDescendantOf(oldParent)) {
- oldParent.decrementPrioritizableForTree(child.prioritizableForTree());
- if (oldParent.prioritizableForTree() == 0) {
- removeStream(oldParent);
- }
+ if (exclusive && !children.isEmpty()) {
+ // If it was requested that this child be the exclusive dependency of this node,
+ // move any previous children to the child node, becoming grand children of this node.
+ for (DefaultStream grandchild : retain(child).values()) {
+ child.takeChild(grandchild, false, events);
}
}
}
@@ -604,6 +643,11 @@ final boolean removeChild(DefaultStream child) {
}
if (prioritizableForTree() == 0) {
+ // There are a few risks with immediately removing nodes from the priority tree:
+ // 1. We are removing nodes while we are potentially shifting the tree. There are no
+ // concrete cases known but is risky because it could invalidate the data structure.
+ // 2. We are notifying listeners of the removal while the tree is in flux. Currently the
+ // codec listeners make no assumptions about priority tree structure when being notified.
removeStream(this);
}
notifyParentChanged(events);
| diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionTest.java
index e145b8eb42c..2569d5d5c1c 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2ConnectionTest.java
@@ -362,6 +362,130 @@ public void insertExclusiveShouldAddNewLevel() throws Exception {
assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
}
+ @Test
+ public void existingChildMadeExclusiveShouldNotCreateTreeCycle() throws Http2Exception {
+ Http2Stream streamA = client.local().createStream(1).open(false);
+ Http2Stream streamB = client.local().createStream(3).open(false);
+ Http2Stream streamC = client.local().createStream(5).open(false);
+ Http2Stream streamD = client.local().createStream(7).open(false);
+
+ streamB.setPriority(streamA.id(), DEFAULT_PRIORITY_WEIGHT, false);
+ streamC.setPriority(streamA.id(), DEFAULT_PRIORITY_WEIGHT, false);
+ streamD.setPriority(streamC.id(), DEFAULT_PRIORITY_WEIGHT, false);
+
+ // Stream C is already dependent on Stream A, but now make that an exclusive dependency
+ streamC.setPriority(streamA.id(), DEFAULT_PRIORITY_WEIGHT, true);
+
+ assertEquals(4, client.numActiveStreams());
+
+ // Level 0
+ Http2Stream p = client.connectionStream();
+ assertEquals(1, p.numChildren());
+ assertEquals(5, p.prioritizableForTree());
+ assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
+
+ // Level 1
+ p = child(p, streamA.id());
+ assertNotNull(p);
+ assertEquals(0, p.parent().id());
+ assertEquals(1, p.numChildren());
+ assertEquals(4, p.prioritizableForTree());
+ assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
+
+ // Level 2
+ p = child(p, streamC.id());
+ assertNotNull(p);
+ assertEquals(streamA.id(), p.parent().id());
+ assertEquals(2, p.numChildren());
+ assertEquals(3, p.prioritizableForTree());
+ assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
+
+ // Level 3
+ p = child(p, streamB.id());
+ assertNotNull(p);
+ assertEquals(streamC.id(), p.parent().id());
+ assertEquals(0, p.numChildren());
+ assertEquals(1, p.prioritizableForTree());
+ assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
+ p = child(p.parent(), streamD.id());
+ assertNotNull(p);
+ assertEquals(streamC.id(), p.parent().id());
+ assertEquals(0, p.numChildren());
+ assertEquals(1, p.prioritizableForTree());
+ assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
+ }
+
+ @Test
+ public void newExclusiveChildShouldUpdateOldParentCorrectly() throws Http2Exception {
+ Http2Stream streamA = client.local().createStream(1).open(false);
+ Http2Stream streamB = client.local().createStream(3).open(false);
+ Http2Stream streamC = client.local().createStream(5).open(false);
+ Http2Stream streamD = client.local().createStream(7).open(false);
+ Http2Stream streamE = client.local().createStream(9).open(false);
+ Http2Stream streamF = client.local().createStream(11).open(false);
+
+ streamB.setPriority(streamA.id(), DEFAULT_PRIORITY_WEIGHT, false);
+ streamC.setPriority(streamA.id(), DEFAULT_PRIORITY_WEIGHT, false);
+ streamD.setPriority(streamC.id(), DEFAULT_PRIORITY_WEIGHT, false);
+ streamF.setPriority(streamE.id(), DEFAULT_PRIORITY_WEIGHT, false);
+
+ // F is now going to be exclusively dependent on A, after this we should check that stream E
+ // prioritizableForTree is not over decremented.
+ streamF.setPriority(streamA.id(), DEFAULT_PRIORITY_WEIGHT, true);
+
+ assertEquals(6, client.numActiveStreams());
+
+ // Level 0
+ Http2Stream p = client.connectionStream();
+ assertEquals(2, p.numChildren());
+ assertEquals(7, p.prioritizableForTree());
+ assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
+
+ // Level 1
+ p = child(p, streamE.id());
+ assertNotNull(p);
+ assertEquals(0, p.parent().id());
+ assertEquals(0, p.numChildren());
+ assertEquals(1, p.prioritizableForTree());
+ assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
+ p = child(p.parent(), streamA.id());
+ assertNotNull(p);
+ assertEquals(0, p.parent().id());
+ assertEquals(1, p.numChildren());
+ assertEquals(5, p.prioritizableForTree());
+ assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
+
+ // Level 2
+ p = child(p, streamF.id());
+ assertNotNull(p);
+ assertEquals(streamA.id(), p.parent().id());
+ assertEquals(2, p.numChildren());
+ assertEquals(4, p.prioritizableForTree());
+ assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
+
+ // Level 3
+ p = child(p, streamB.id());
+ assertNotNull(p);
+ assertEquals(streamF.id(), p.parent().id());
+ assertEquals(0, p.numChildren());
+ assertEquals(1, p.prioritizableForTree());
+ assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
+ p = child(p.parent(), streamC.id());
+ assertNotNull(p);
+ assertEquals(streamF.id(), p.parent().id());
+ assertEquals(1, p.numChildren());
+ assertEquals(2, p.prioritizableForTree());
+ assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
+
+ // Level 4
+ p = child(p, streamD.id());
+ assertNotNull(p);
+ assertEquals(streamC.id(), p.parent().id());
+ assertEquals(0, p.numChildren());
+ assertEquals(1, p.prioritizableForTree());
+ assertEquals(p.numChildren() * DEFAULT_PRIORITY_WEIGHT, p.totalChildWeights());
+ }
+
@Test
public void weightChangeWithNoTreeChangeShouldNotifyListeners() throws Http2Exception {
Http2Stream streamA = client.local().createStream(1).open(false);
| train | train | 2015-04-15T02:11:09 | 2015-04-07T01:38:14Z | Scottmitch | val |
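The patch in the row above reorders `takeChild` so that, for an exclusive dependency, the new exclusive child is removed from the set of re-parented children before the remaining children are moved under it. Below is a minimal sketch of why that ordering matters, using toy classes (not Netty's actual `DefaultStream` implementation, and ignoring weights and `prioritizableForTree` bookkeeping):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy stand-in for a priority-tree node (not Netty's real class).
class Node {
    final int id;
    Node parent;
    final Map<Integer, Node> children = new LinkedHashMap<>();

    Node(int id) { this.id = id; }

    // Make 'child' the exclusive dependency of this node. The key line is
    // moved.remove(child.id): without it, when 'child' is already a child of
    // this node, the loop below would put 'child' into its own children map,
    // creating the circular link described in the issue.
    void takeChildExclusive(Node child) {
        Map<Integer, Node> moved = new LinkedHashMap<>(children);
        moved.remove(child.id);     // retain the new exclusive child
        children.clear();
        child.parent = this;
        children.put(child.id, child);
        for (Node grandchild : moved.values()) {
            grandchild.parent = child;
            child.children.put(grandchild.id, grandchild);
        }
    }
}

public class PriorityTreeSketch {
    public static void main(String[] args) {
        Node a = new Node(1), b = new Node(3), c = new Node(5);
        a.takeChildExclusive(b);          // B under A
        b.children.put(c.id, c);
        c.parent = b;                     // C under B
        a.takeChildExclusive(b);          // B is *already* a child of A
        System.out.println(b.children.containsKey(b.id)); // false: no self-link
    }
}
```

Without the `moved.remove(child.id)` guard, the second `takeChildExclusive(b)` call would leave `b` inside `b.children`, and any recursive walk of the tree would loop forever.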
netty/netty/2925_3624 | netty/netty | netty/netty/2925 | netty/netty/3624 | [
"timestamp(timedelta=18.0, similarity=0.8766544177695756)"
] | 418c81542b9a907af1e2cfabea78f8409525cac4 | 541301203862145d511f30d3d3311f10992e9b31 | [
"@garretwu thanks for reporting... I will check and fix it\n",
"@garretwu I'm right you say it should get the 7th cache bin(start from0) . right ?\n",
"you are right, @normanmaurer. That is what I mean.\n",
"Thanks for clarify!\n\n> Am 12.11.2014 um 03:52 schrieb garretwu notifications@github.com:\n> \n> you ... | [] | 2015-04-13T05:52:51Z | [
"defect"
] | NormalMemoryRegionCache's arraySize is over-booked | When creating a NormalMemoryRegionCache for PoolThreadCache, the array size is decided by:
int max = Math.min(area.chunkSize, maxCachedBufferCapacity);
int arraySize = Math.max(1, max / area.pageSize);
So most likely it is decided by maxCachedBufferCapacity / area.pageSize; with maxCachedBufferCapacity = 2M, the array size will be 2^8 = 256.
When we try to get a normal cache, the index is pre-processed by log2() in PoolThreadCache.cacheForNormal():
int idx = log2(normCapacity >> numShiftsNormalHeap);
return cache(normalHeapCaches, idx);
A 2M cache entry will be fetched from the 8th cache bin (starting from 0).
The space for NormalMemoryRegionCache is over-booked, though it will not impact correctness (just a little performance).
| [
"buffer/src/main/java/io/netty/buffer/PoolThreadCache.java"
] | [
"buffer/src/main/java/io/netty/buffer/PoolThreadCache.java"
] | [] | diff --git a/buffer/src/main/java/io/netty/buffer/PoolThreadCache.java b/buffer/src/main/java/io/netty/buffer/PoolThreadCache.java
index 755f1573ed7..fbcfaafffb1 100644
--- a/buffer/src/main/java/io/netty/buffer/PoolThreadCache.java
+++ b/buffer/src/main/java/io/netty/buffer/PoolThreadCache.java
@@ -129,7 +129,7 @@ private static <T> NormalMemoryRegionCache<T>[] createNormalCaches(
int cacheSize, int maxCachedBufferCapacity, PoolArena<T> area) {
if (cacheSize > 0) {
int max = Math.min(area.chunkSize, maxCachedBufferCapacity);
- int arraySize = Math.max(1, max / area.pageSize);
+ int arraySize = Math.max(1, log2(max / area.pageSize) + 1);
@SuppressWarnings("unchecked")
NormalMemoryRegionCache<T>[] cache = new NormalMemoryRegionCache[arraySize];
| null | train | train | 2015-04-12T13:38:20 | 2014-09-22T04:56:22Z | garretwu | val |
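The arithmetic behind the row above can be checked directly. The sketch below assumes Netty's default page size (8192) and a 16 MiB chunk size; with maxCachedBufferCapacity = 2M, the old sizing allocates 256 bins while the log2-based index for a 2M allocation is only 8, so 9 bins suffice:

```java
// Sketch of the cache-array sizing from the patch above.
// pageSize/chunkSize values are assumed defaults for illustration.
public class NormalCacheSizing {
    static int log2(int v) { return 31 - Integer.numberOfLeadingZeros(v); }

    static int oldArraySize(int chunkSize, int maxCached, int pageSize) {
        int max = Math.min(chunkSize, maxCached);
        return Math.max(1, max / pageSize);              // pre-patch sizing
    }

    static int newArraySize(int chunkSize, int maxCached, int pageSize) {
        int max = Math.min(chunkSize, maxCached);
        return Math.max(1, log2(max / pageSize) + 1);    // patched sizing
    }

    public static void main(String[] args) {
        int pageSize = 8192;
        int chunkSize = 16 * 1024 * 1024;
        int maxCached = 2 * 1024 * 1024;
        // Index used by cacheForNormal() for a 2M allocation:
        int idx = log2(maxCached >> log2(pageSize));
        System.out.println(oldArraySize(chunkSize, maxCached, pageSize)); // 256
        System.out.println(newArraySize(chunkSize, maxCached, pageSize)); // 9
        System.out.println(idx);                                          // 8
    }
}
```

Since cache lookups go through `log2(normCapacity >> numShiftsNormal)`, indices above `log2(max / pageSize)` are never reachable, which is why the pre-patch array was over-booked without affecting correctness.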
netty/netty/3637_3638 | netty/netty | netty/netty/3637 | netty/netty/3638 | [
"timestamp(timedelta=85.0, similarity=0.9592301228970215)"
] | 9e5dd21d23739bc8462c31a71bae3a9456ed67f2 | 30b20de0b985474755113d9a9626196f626c7276 | [
"Fixed by #3638\n"
] | [] | 2015-04-16T11:48:02Z | [
"defect"
] | AggregatedFullHttpMessage toString() behaviour | This is related to #3019, the previous problem persists:
Prior to v4.0.24-FINAL you could call .toString on the output of HttpObjectAggregator regardless of whether the resulting HttpMessage has been released or not, from v.4.0.24-FINAL onwards you will encounter an error similar to:
```
io.netty.util.IllegalReferenceCountException: refCnt: 0
at io.netty.buffer.DefaultByteBufHolder.content(DefaultByteBufHolder.java:39) ~[netty-all-4.0.27.Final.jar:4.0.27.Final]
at io.netty.handler.codec.http.HttpMessageUtil.appendFullCommon(HttpMessageUtil.java:79) ~[netty-all-4.0.27.Final.jar:4.0.27.Final]
at io.netty.handler.codec.http.HttpMessageUtil.appendFullRequest(HttpMessageUtil.java:55) ~[netty-all-4.0.27.Final.jar:4.0.27.Final]
```
This is due to the fact that AggregatedFullHttpMessage overrides .copy, .duplicate, and .retain from the ByteBufHolder interface but delegates .content to the DefaultByteBufHolder superclass which checks the ref count every time you access content:
``` java
@Override
public ByteBuf content() {
if (data.refCnt() <= 0) {
throw new IllegalReferenceCountException(data.refCnt());
}
return data;
}
```
whereas something like DefaultHttpContent does not check that count:
``` java
@Override
public ByteBuf content() {
return content;
}
```
It appears AggregatedFullHttpMessage only takes the benefit of .copy and .release from that type hierarchy, so I offer a PR where that is removed and the impl is internally similar to DefaultHttpContent; apologies if I've missed something obvious.
**context**
I have a Clojure networking framework which combines netty, clojure/core.async, and stuartsierra/component in an attempt at an async, event-driven architecture.
Basically you can combine components using core.async channels and macros; netty is plugged in as the network layer, which translates input/output events to/from requests/responses. Any netty codec would work, but in reality I most often use a fairly standard HTTP channel pipeline, and HttpObjectAggregator features in that.
I can manage the lifecycle of pooled ByteBufs cleanly, or, for some applications, when each request is decoded and converted to an event, I copy the ByteBuf into memory as a single java.nio.ByteBuffer and release the original at that point. In that case I still place the request alongside the ByteBuffer on the event, making it available for further processing, i.e. decoding the query-string or http-headers.
On occasion events are logged to disk, which calls toString(), which errors if the message has been released, where previously it did not.
There are multiple solutions to my problem, I could always manage pooled buffers and not memory copy, I could always configure my server to use Unpooled buffers, but all the other HttpMessage impl seem to be fine with .toString when released and the original behaviour has changed so I thought I'd chip in again.
Ta,
Derek
| [
"codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectAggregator.java"
] | [
"codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectAggregator.java"
] | [] | diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectAggregator.java b/codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectAggregator.java
index 9a504669502..415e2a849d0 100644
--- a/codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectAggregator.java
+++ b/codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectAggregator.java
@@ -16,6 +16,7 @@
package io.netty.handler.codec.http;
import io.netty.buffer.ByteBuf;
+import io.netty.buffer.ByteBufHolder;
import io.netty.buffer.CompositeByteBuf;
import io.netty.buffer.DefaultByteBufHolder;
import io.netty.buffer.Unpooled;
@@ -277,13 +278,14 @@ private static FullHttpMessage toFullMessage(HttpMessage msg) {
return fullMsg;
}
- private abstract static class AggregatedFullHttpMessage extends DefaultByteBufHolder implements FullHttpMessage {
+ private abstract static class AggregatedFullHttpMessage implements ByteBufHolder, FullHttpMessage {
protected final HttpMessage message;
+ private final ByteBuf content;
private HttpHeaders trailingHeaders;
AggregatedFullHttpMessage(HttpMessage message, ByteBuf content, HttpHeaders trailingHeaders) {
- super(content);
this.message = message;
+ this.content = content;
this.trailingHeaders = trailingHeaders;
}
@@ -328,17 +330,37 @@ public void setDecoderResult(DecoderResult result) {
}
@Override
- public FullHttpMessage retain(int increment) {
- super.retain(increment);
- return this;
+ public ByteBuf content() {
+ return content;
+ }
+
+ @Override
+ public int refCnt() {
+ return content.refCnt();
}
@Override
public FullHttpMessage retain() {
- super.retain();
+ content.retain();
+ return this;
+ }
+
+ @Override
+ public FullHttpMessage retain(int increment) {
+ content.retain(increment);
return this;
}
+ @Override
+ public boolean release() {
+ return content.release();
+ }
+
+ @Override
+ public boolean release(int decrement) {
+ return content.release(decrement);
+ }
+
@Override
public abstract FullHttpMessage copy();
| null | train | train | 2015-04-14T17:59:33 | 2015-04-16T11:45:13Z | d-t-w | val |
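To see why delegating content() without a refCnt check restores the pre-4.0.24 toString() behaviour described in the row above, here is a toy sketch. These are stand-in classes, not Netty's DefaultByteBufHolder or ByteBuf:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Toy reference-counted buffer (stand-in for ByteBuf).
class Buf {
    private final AtomicInteger refCnt = new AtomicInteger(1);
    int refCnt() { return refCnt.get(); }
    boolean release() { return refCnt.decrementAndGet() == 0; }
    @Override public String toString() { return "Buf(refCnt: " + refCnt() + ")"; }
}

// Mimics DefaultByteBufHolder.content(): throws once the buffer is released,
// which is what breaks toString() on an aggregated message.
class CheckingHolder {
    final Buf data = new Buf();
    Buf content() {
        if (data.refCnt() <= 0) {
            throw new IllegalStateException("refCnt: " + data.refCnt());
        }
        return data;
    }
}

// Mimics the DefaultHttpContent-style accessor the PR switches to:
// plain delegation, safe to call after release.
class PlainHolder {
    final Buf data = new Buf();
    Buf content() { return data; }
}

public class HolderSketch {
    public static void main(String[] args) {
        PlainHolder ok = new PlainHolder();
        ok.content().release();
        System.out.println(ok.content());   // still printable after release

        CheckingHolder bad = new CheckingHolder();
        bad.data.release();
        try {
            bad.content();
        } catch (IllegalStateException e) {
            System.out.println("threw: " + e.getMessage()); // threw: refCnt: 0
        }
    }
}
```

The patched AggregatedFullHttpMessage follows the PlainHolder shape: refCnt(), retain(), and release() delegate to the content buffer, while content() itself never checks the count, so logging a released message no longer throws.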
netty/netty/3668_3672 | netty/netty | netty/netty/3668 | netty/netty/3672 | [
"timestamp(timedelta=55.0, similarity=0.8928221867187973)"
] | 43d57ddd46f0cf5db92469219654c155bd982e05 | 9611860d7b66fdeefb9a62baf7ba0a1f512f9a61 | [
"Thanks @nmittler. I see you left this as a comment in https://github.com/netty/netty/pull/3616. Are you pulling together a separate PR for this?\n",
"I'm working on a PR to address this and history ... should have something ready shortly.\n",
"@nmittler - OK. I would like to get a consolidated understanding ... | [
"Do we have to do this? If we are forcing the unconsumed bytes to 0 in the connection listener, is that good enough...or is this method used while the stream is being closed before the listener is notified?\n",
"Was this intentional? I looked through the call path down to the FrameWriter and I don't see a null c... | 2015-04-20T23:04:29Z | [
"defect"
] | Removed HTTP/2 streams should consume all bytes | When a stream is removed from the connection, we should mark all of its bytes as consumed so that the local flow controller will restore those bytes to the connection window. We'll also need to protect against the application trying to consume those bytes as well (probably just make it a no-op for closed streams).
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2LocalFlowController.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2LocalFlowController.java"
] | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2LocalFlowController.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2LocalFlowController.java"
] | [
"codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2LocalFlowControllerTest.java"
] | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
index 6ddf71ee78e..d6b333c32ad 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2Connection.java
@@ -1121,19 +1121,19 @@ void addToActiveStreams(DefaultStream stream) {
void removeFromActiveStreams(DefaultStream stream) {
if (streams.remove(stream)) {
- try {
- // Update the number of active streams initiated by the endpoint.
- stream.createdBy().numActiveStreams--;
+ // Update the number of active streams initiated by the endpoint.
+ stream.createdBy().numActiveStreams--;
+ }
+ notifyClosed(stream);
+ removeStream(stream);
+ }
- for (int i = 0; i < listeners.size(); i++) {
- try {
- listeners.get(i).onStreamClosed(stream);
- } catch (RuntimeException e) {
- logger.error("Caught RuntimeException from listener onStreamClosed.", e);
- }
- }
- } finally {
- removeStream(stream);
+ private void notifyClosed(DefaultStream stream) {
+ for (int i = 0; i < listeners.size(); i++) {
+ try {
+ listeners.get(i).onStreamClosed(stream);
+ } catch (RuntimeException e) {
+ logger.error("Caught RuntimeException from listener onStreamClosed.", e);
}
}
}
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2LocalFlowController.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2LocalFlowController.java
index 2fa90dc7598..aca6a581c73 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2LocalFlowController.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2LocalFlowController.java
@@ -30,6 +30,7 @@
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.http2.Http2Exception.CompositeStreamException;
import io.netty.handler.codec.http2.Http2Exception.StreamException;
+import io.netty.util.internal.PlatformDependent;
/**
* Basic implementation of {@link Http2LocalFlowController}.
@@ -43,6 +44,7 @@ public class DefaultHttp2LocalFlowController implements Http2LocalFlowController
private final Http2Connection connection;
private final Http2FrameWriter frameWriter;
+ private ChannelHandlerContext ctx;
private volatile float windowUpdateRatio;
private volatile int initialWindowSize = DEFAULT_WINDOW_SIZE;
@@ -73,6 +75,22 @@ public void onStreamActive(Http2Stream stream) {
// frames which may have been exchanged while it was in IDLE
state(stream).window(initialWindowSize);
}
+
+ @Override
+ public void onStreamClosed(Http2Stream stream) {
+ try {
+ // When a stream is closed, consume any remaining bytes so that they
+ // are restored to the connection window.
+ FlowState state = state(stream);
+ int unconsumedBytes = state.unconsumedBytes();
+ if (ctx != null && unconsumedBytes > 0) {
+ connectionState().consumeBytes(ctx, unconsumedBytes);
+ state.consumeBytes(ctx, unconsumedBytes);
+ }
+ } catch (Http2Exception e) {
+ PlatformDependent.throwException(e);
+ }
+ }
});
}
@@ -109,7 +127,19 @@ public void incrementWindowSize(ChannelHandlerContext ctx, Http2Stream stream, i
@Override
public void consumeBytes(ChannelHandlerContext ctx, Http2Stream stream, int numBytes)
throws Http2Exception {
- state(stream).consumeBytes(ctx, numBytes);
+ if (stream.id() == CONNECTION_STREAM_ID) {
+ throw new UnsupportedOperationException("Returning bytes for the connection window is not supported");
+ }
+ if (numBytes <= 0) {
+ throw new IllegalArgumentException("numBytes must be positive");
+ }
+
+ // Streams automatically consume all remaining bytes when they are closed, so just ignore
+ // if already closed.
+ if (!isClosed(stream)) {
+ connectionState().consumeBytes(ctx, numBytes);
+ state(stream).consumeBytes(ctx, numBytes);
+ }
}
@Override
@@ -178,15 +208,22 @@ public float windowUpdateRatio(Http2Stream stream) throws Http2Exception {
@Override
public void receiveFlowControlledFrame(ChannelHandlerContext ctx, Http2Stream stream, ByteBuf data,
int padding, boolean endOfStream) throws Http2Exception {
+ this.ctx = checkNotNull(ctx, "ctx");
int dataLength = data.readableBytes() + padding;
// Apply the connection-level flow control
- connectionState().receiveFlowControlledFrame(dataLength);
-
- // Apply the stream-level flow control
- FlowState state = state(stream);
- state.endOfStream(endOfStream);
- state.receiveFlowControlledFrame(dataLength);
+ FlowState connectionState = connectionState();
+ connectionState.receiveFlowControlledFrame(dataLength);
+
+ if (!isClosed(stream)) {
+ // Apply the stream-level flow control
+ FlowState state = state(stream);
+ state.endOfStream(endOfStream);
+ state.receiveFlowControlledFrame(dataLength);
+ } else if (dataLength > 0) {
+ // Immediately consume the bytes for the connection window.
+ connectionState.consumeBytes(ctx, dataLength);
+ }
}
private FlowState connectionState() {
@@ -198,6 +235,10 @@ private static FlowState state(Http2Stream stream) {
return stream.getProperty(FlowState.class);
}
+ private static boolean isClosed(Http2Stream stream) {
+ return stream.state() == Http2Stream.State.CLOSED;
+ }
+
/**
* Flow control window state for an individual stream.
*/
@@ -322,18 +363,6 @@ void returnProcessedBytes(int delta) throws Http2Exception {
}
void consumeBytes(ChannelHandlerContext ctx, int numBytes) throws Http2Exception {
- if (stream.id() == CONNECTION_STREAM_ID) {
- throw new UnsupportedOperationException("Returning bytes for the connection window is not supported");
- }
- if (numBytes <= 0) {
- throw new IllegalArgumentException("numBytes must be positive");
- }
-
- // Return bytes to the connection window
- FlowState connectionState = connectionState();
- connectionState.returnProcessedBytes(numBytes);
- connectionState.writeWindowUpdateIfNeeded(ctx);
-
// Return the bytes processed and update the window.
returnProcessedBytes(numBytes);
writeWindowUpdateIfNeeded(ctx);
@@ -347,7 +376,7 @@ int unconsumedBytes() {
* Updates the flow control window for this stream if it is appropriate.
*/
void writeWindowUpdateIfNeeded(ChannelHandlerContext ctx) throws Http2Exception {
- if (endOfStream || initialStreamWindowSize <= 0) {
+ if (endOfStream || initialStreamWindowSize <= 0 || isClosed(stream)) {
return;
}
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2LocalFlowController.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2LocalFlowController.java
index b852b249f59..83d193350e3 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2LocalFlowController.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2LocalFlowController.java
@@ -28,6 +28,8 @@ public interface Http2LocalFlowController extends Http2FlowController {
* policies to it for both the {@code stream} as well as the connection. If any flow control
* policies have been violated, an exception is raised immediately, otherwise the frame is
* considered to have "passed" flow control.
+ * <p/>
+ * If {@code stream} is closed, flow control should only be applied to the connection window.
*
* @param ctx the context from the handler where the frame was read.
* @param stream the subject stream for the received frame. The connection stream object must
@@ -39,22 +41,24 @@ public interface Http2LocalFlowController extends Http2FlowController {
* @throws Http2Exception if any flow control errors are encountered.
*/
void receiveFlowControlledFrame(ChannelHandlerContext ctx, Http2Stream stream, ByteBuf data, int padding,
- boolean endOfStream) throws Http2Exception;
+ boolean endOfStream) throws Http2Exception;
/**
- * Indicates that the application has consumed a number of bytes for the given stream and is
- * therefore ready to receive more data from the remote endpoint. The application must consume
- * any bytes that it receives or the flow control window will collapse. Consuming bytes enables
- * the flow controller to send {@code WINDOW_UPDATE} to restore a portion of the flow control
- * window for the stream.
+ * Indicates that the application has consumed a number of bytes for the given stream and is therefore ready to
+ * receive more data from the remote endpoint. The application must consume any bytes that it receives or the flow
+ * control window will collapse. Consuming bytes enables the flow controller to send {@code WINDOW_UPDATE} to
+ * restore a portion of the flow control window for the stream.
+ * <p/>
+ * If {@code stream} is closed (i.e. {@link Http2Stream#state()} method returns {@link Http2Stream.State#CLOSED}),
+ * the consumed bytes are only restored to the connection window. When a stream is closed, the flow controller
+ * automatically restores any unconsumed bytes for that stream to the connection window. This is done to ensure that
+ * the connection window does not degrade over time as streams are closed.
*
- * @param ctx the channel handler context to use when sending a {@code WINDOW_UPDATE} if
- * appropriate
- * @param stream the stream for which window space should be freed. The connection stream object
- * must not be used.
+ * @param ctx the channel handler context to use when sending a {@code WINDOW_UPDATE} if appropriate
+ * @param stream the stream for which window space should be freed. The connection stream object must not be used.
* @param numBytes the number of bytes to be returned to the flow control window.
- * @throws Http2Exception if the number of bytes returned exceeds the {@link #unconsumedBytes}
- * for the stream.
+ * @throws Http2Exception if the number of bytes returned exceeds the {@link #unconsumedBytes(Http2Stream)} for the
+ * stream.
*/
void consumeBytes(ChannelHandlerContext ctx, Http2Stream stream, int numBytes) throws Http2Exception;
| diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2LocalFlowControllerTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2LocalFlowControllerTest.java
index 1e324426d37..94d8b14d4d1 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2LocalFlowControllerTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2LocalFlowControllerTest.java
@@ -201,6 +201,22 @@ public void connectionWindowShouldAdjustWithMultipleStreams() throws Http2Except
}
}
+ @Test
+ public void closeShouldConsumeBytes() throws Http2Exception {
+ receiveFlowControlledFrame(STREAM_ID, 10, 0, false);
+ assertEquals(10, controller.unconsumedBytes(connection.connectionStream()));
+ stream(STREAM_ID).close();
+ assertEquals(0, controller.unconsumedBytes(connection.connectionStream()));
+ }
+
+ @Test
+ public void dataReceivedForClosedStreamShouldImmediatelyConsumeBytes() throws Http2Exception {
+ Http2Stream stream = stream(STREAM_ID);
+ stream.close();
+ receiveFlowControlledFrame(stream, 10, 0, false);
+ assertEquals(0, controller.unconsumedBytes(connection.connectionStream()));
+ }
+
@Test
public void globalRatioShouldImpactStreams() throws Http2Exception {
float ratio = 0.6f;
@@ -254,10 +270,15 @@ private static int getWindowDelta(int initialSize, int windowSize, int dataSize)
}
private void receiveFlowControlledFrame(int streamId, int dataSize, int padding,
- boolean endOfStream) throws Http2Exception {
+ boolean endOfStream) throws Http2Exception {
+ receiveFlowControlledFrame(stream(streamId), dataSize, padding, endOfStream);
+ }
+
+ private void receiveFlowControlledFrame(Http2Stream stream, int dataSize, int padding,
+ boolean endOfStream) throws Http2Exception {
final ByteBuf buf = dummyData(dataSize);
try {
- controller.receiveFlowControlledFrame(ctx, stream(streamId), buf, padding, endOfStream);
+ controller.receiveFlowControlledFrame(ctx, stream, buf, padding, endOfStream);
} finally {
buf.release();
}
| train | train | 2015-04-20T10:45:43 | 2015-04-20T19:33:21Z | nmittler | val |
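The consumeBytes/close contract documented in the Javadoc above (and exercised by the `closeShouldConsumeBytes` test) can be modeled in a few lines. The class below is an illustrative sketch, not Netty's API: it only tracks a connection-level window and per-stream unconsumed byte counts, and shows why restoring unconsumed bytes on close keeps the connection window from degrading.

```java
import java.util.HashMap;
import java.util.Map;

public class FlowControlSketch {
    static final int INITIAL_WINDOW = 65535;

    int connectionWindow = INITIAL_WINDOW;
    final Map<Integer, Integer> unconsumed = new HashMap<>();

    /** Data arrived for a stream: shrink the connection window, track unconsumed bytes. */
    void receive(int streamId, int numBytes) {
        connectionWindow -= numBytes;
        unconsumed.merge(streamId, numBytes, Integer::sum);
    }

    /** The application consumed bytes: restore that portion of the connection window. */
    void consume(int streamId, int numBytes) {
        int pending = unconsumed.getOrDefault(streamId, 0);
        if (numBytes > pending) {
            throw new IllegalStateException("consumed more than was unconsumed");
        }
        unconsumed.put(streamId, pending - numBytes);
        connectionWindow += numBytes;
    }

    /** Closing a stream automatically restores whatever the application never consumed. */
    void close(int streamId) {
        consume(streamId, unconsumed.getOrDefault(streamId, 0));
    }

    int totalUnconsumed() {
        int sum = 0;
        for (int v : unconsumed.values()) {
            sum += v;
        }
        return sum;
    }

    public static void main(String[] args) {
        FlowControlSketch fc = new FlowControlSketch();
        fc.receive(3, 10);                        // mirrors receiveFlowControlledFrame(STREAM_ID, 10, ...)
        fc.close(3);                              // mirrors stream(STREAM_ID).close()
        System.out.println(fc.totalUnconsumed()); // 0
        System.out.println(fc.connectionWindow);  // 65535: the window did not degrade
    }
}
```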
netty/netty/3671_3674 | netty/netty | netty/netty/3671 | netty/netty/3674 | [
"timestamp(timedelta=31.0, similarity=0.8934061479207374)"
] | f242bc5a1a8b961b54aa127230818cff845009d6 | 8cac1af2f7f123df9424346aae737286d2a78f32 | [
"@leogomes - Can you be more specific? I looks like you are implying the context that is generated by `build()` is reversed. However it looks to be correct https://github.com/netty/netty/blob/4d56028df5a7be31561442f45bc2141eb84aa5ad/handler/src/main/java/io/netty/handler/ssl/SslContextBuilder.java#L206.\n",
"Do... | [] | 2015-04-20T23:40:59Z | [
"defect"
] | SSLContextBuilder builds client context for server and vice-versa | Introduced with: https://github.com/netty/netty/issues/3531
SslContextBuilder's constructor has a [boolean parameter `forServer`](https://github.com/netty/netty/blob/4d56028df5a7be31561442f45bc2141eb84aa5ad/handler/src/main/java/io/netty/handler/ssl/SslContextBuilder.java#L75), which is set to `false` in the [`forServer(...)` static factory method](https://github.com/netty/netty/blob/4d56028df5a7be31561442f45bc2141eb84aa5ad/handler/src/main/java/io/netty/handler/ssl/SslContextBuilder.java#L57) and set to `true` in the `forClient(...)` equivalent, leading the `build()` method to build a client context for a server and a server context for a client. It turns out that it doesn't seem to have caused much damage, nor are there tests failing because of it. So, while I could have sent a quick PR to fix this, I think it deserves a bit more testing to make sure that this sort of thing is caught, and some analysis to see if we really need those two types of contexts.
| [
"handler/src/main/java/io/netty/handler/ssl/SslContextBuilder.java"
] | [
"handler/src/main/java/io/netty/handler/ssl/SslContextBuilder.java"
] | [] | diff --git a/handler/src/main/java/io/netty/handler/ssl/SslContextBuilder.java b/handler/src/main/java/io/netty/handler/ssl/SslContextBuilder.java
index e935e7c8b32..fcc840e6ca8 100644
--- a/handler/src/main/java/io/netty/handler/ssl/SslContextBuilder.java
+++ b/handler/src/main/java/io/netty/handler/ssl/SslContextBuilder.java
@@ -31,7 +31,7 @@ public final class SslContextBuilder {
* Creates a builder for new client-side {@link SslContext}.
*/
public static SslContextBuilder forClient() {
- return new SslContextBuilder(true);
+ return new SslContextBuilder(false);
}
/**
@@ -41,7 +41,7 @@ public static SslContextBuilder forClient() {
* @param keyFile a PKCS#8 private key file in PEM format
*/
public static SslContextBuilder forServer(File keyCertChainFile, File keyFile) {
- return new SslContextBuilder(false).keyManager(keyCertChainFile, keyFile);
+ return new SslContextBuilder(true).keyManager(keyCertChainFile, keyFile);
}
/**
@@ -54,7 +54,7 @@ public static SslContextBuilder forServer(File keyCertChainFile, File keyFile) {
*/
public static SslContextBuilder forServer(
File keyCertChainFile, File keyFile, String keyPassword) {
- return new SslContextBuilder(false).keyManager(keyCertChainFile, keyFile, keyPassword);
+ return new SslContextBuilder(true).keyManager(keyCertChainFile, keyFile, keyPassword);
}
private final boolean forServer;
| null | train | train | 2015-04-20T10:45:28 | 2015-04-20T22:38:27Z | leogomes | val |
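The one-line bug behind this record is easy to reproduce in miniature. `BuilderSketch` below is hypothetical code, not Netty's `SslContextBuilder`: it shows the corrected factories passing the `forServer` flag through consistently with their names, whereas the buggy version passed the inverted value, so `forClient()` produced a server context and vice versa.

```java
public class BuilderSketch {
    private final boolean forServer;

    private BuilderSketch(boolean forServer) {
        this.forServer = forServer;
    }

    // Fixed factories: the flag now matches each factory's name.
    // The buggy versions passed the inverted value (true here, false below).
    static BuilderSketch forClient() {
        return new BuilderSketch(false);
    }

    static BuilderSketch forServer() {
        return new BuilderSketch(true);
    }

    String build() {
        return forServer ? "server-context" : "client-context";
    }

    public static void main(String[] args) {
        System.out.println(BuilderSketch.forServer().build()); // server-context
        System.out.println(BuilderSketch.forClient().build()); // client-context
    }
}
```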
netty/netty/3675_3684 | netty/netty | netty/netty/3675 | netty/netty/3684 | [
"timestamp(timedelta=47.0, similarity=0.9132557124718427)"
] | a8af0debcbf5033c8c0d90ff79c2ec87960816fc | d320c43e622fc837496e10d2a168d3f997622817 | [
"@gsoltis so it only happens when you call size() ?\n",
"So far, yes, that's the only place I've seen it. We have each event loop hooked up to a stats reporting module that reports the size of the task queue every few seconds. We noticed it because the stats stopped being reported, and it turns out the reporting ... | [
"Does the runtime type of `toList(..)` have to be `ArrayList<>`? Can you just change the signature of `toList(..)` to accept a `List<..>` since it is private? That way we don't have to duplicate the `16 is the default size` logic here. \n",
"@Scottmitch sorry but I not understand what you are proposing.. IF I ... | 2015-04-21T19:52:21Z | [
"defect"
] | Livelock issue in MpscLinkedQueue.java | Hi,
We've been running netty 4.0.27 in production for ~1 week, and recently we started noticing cases where calls to `SingleThreadEventExecutor.pendingTasks()` were never completing. We traced the issue down to `MpscLinkedQueue.peekNode()`, which appears to be just spinning.
Reading the contract at the top of the file seems to indicate that calling `MpscLinkedQueue.size()` should be ok from multiple threads, but perhaps I am reading it incorrectly?
Unfortunately I don't have a good way of reproducing this, it only showed up in production. For now we've removed the code that was calling `.size()`, but if anyone has suggestions on a way to try reproducing the bug (or suggestions as to how we're violating the queue contract), I'd be happy to try.
| [
"common/src/main/java/io/netty/util/internal/MpscLinkedQueue.java"
] | [
"common/src/main/java/io/netty/util/internal/MpscLinkedQueue.java"
] | [] | diff --git a/common/src/main/java/io/netty/util/internal/MpscLinkedQueue.java b/common/src/main/java/io/netty/util/internal/MpscLinkedQueue.java
index 4baa5ac0fb2..c1b931a5628 100644
--- a/common/src/main/java/io/netty/util/internal/MpscLinkedQueue.java
+++ b/common/src/main/java/io/netty/util/internal/MpscLinkedQueue.java
@@ -21,10 +21,10 @@
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
-import java.lang.reflect.Array;
-import java.util.Arrays;
+import java.util.ArrayList;
import java.util.Collection;
import java.util.Iterator;
+import java.util.List;
import java.util.NoSuchElementException;
import java.util.Queue;
@@ -164,12 +164,20 @@ public E peek() {
public int size() {
int count = 0;
MpscLinkedQueueNode<E> n = peekNode();
- for (;;) {
- if (n == null) {
+ for (;;) {
+ // If value == null it means that clearMaybe() was called on the MpscLinkedQueueNode.
+ if (n == null || n.value() == null) {
+ break;
+ }
+ MpscLinkedQueueNode<E> next = n.next();
+ if (n == next) {
+ break;
+ }
+ n = next;
+ if (++ count == Integer.MAX_VALUE) {
+ // Guard against overflow of integer.
break;
}
- count ++;
- n = n.next();
}
return count;
}
@@ -186,40 +194,26 @@ public boolean contains(Object o) {
if (n == null) {
break;
}
- if (n.value() == o) {
+ E value = n.value();
+ // If value == null it means that clearMaybe() was called on the MpscLinkedQueueNode.
+ if (value == null) {
+ return false;
+ }
+ if (value == o) {
return true;
}
- n = n.next();
+ MpscLinkedQueueNode<E> next = n.next();
+ if (n == next) {
+ break;
+ }
+ n = next;
}
return false;
}
@Override
public Iterator<E> iterator() {
- return new Iterator<E>() {
- private MpscLinkedQueueNode<E> node = peekNode();
-
- @Override
- public boolean hasNext() {
- return node != null;
- }
-
- @Override
- public E next() {
- MpscLinkedQueueNode<E> node = this.node;
- if (node == null) {
- throw new NoSuchElementException();
- }
- E value = node.value();
- this.node = node.next();
- return value;
- }
-
- @Override
- public void remove() {
- throw new UnsupportedOperationException();
- }
- };
+ return new ReadOnlyIterator<E>(toList().iterator());
}
@Override
@@ -248,53 +242,46 @@ public E element() {
throw new NoSuchElementException();
}
- @Override
- public Object[] toArray() {
- final Object[] array = new Object[size()];
- final Iterator<E> it = iterator();
- for (int i = 0; i < array.length; i ++) {
- if (it.hasNext()) {
- array[i] = it.next();
- } else {
- return Arrays.copyOf(array, i);
+ private List<E> toList(int initialCapacity) {
+ return toList(new ArrayList<E>(initialCapacity));
+ }
+
+ private List<E> toList() {
+ return toList(new ArrayList<E>());
+ }
+
+ private List<E> toList(List<E> elements) {
+ MpscLinkedQueueNode<E> n = peekNode();
+ for (;;) {
+ if (n == null) {
+ break;
+ }
+ E value = n.value();
+ if (value == null) {
+ break;
+ }
+ if (!elements.add(value)) {
+ // Seems like there is no space left, break here.
+ break;
+ }
+ MpscLinkedQueueNode<E> next = n.next();
+ if (n == next) {
+ break;
}
+ n = next;
}
- return array;
+ return elements;
+ }
+
+ @Override
+ public Object[] toArray() {
+ return toList().toArray();
}
@Override
@SuppressWarnings("unchecked")
public <T> T[] toArray(T[] a) {
- final int size = size();
- final T[] array;
- if (a.length >= size) {
- array = a;
- } else {
- array = (T[]) Array.newInstance(a.getClass().getComponentType(), size);
- }
-
- final Iterator<E> it = iterator();
- for (int i = 0; i < array.length; i++) {
- if (it.hasNext()) {
- array[i] = (T) it.next();
- } else {
- if (a == array) {
- array[i] = null;
- return array;
- }
-
- if (a.length < i) {
- return Arrays.copyOf(array, i);
- }
-
- System.arraycopy(array, 0, a, 0, i);
- if (a.length > i) {
- a[i] = null;
- }
- return a;
- }
- }
- return array;
+ return toList(a.length).toArray(a);
}
@Override
| null | test | train | 2015-05-04T04:52:05 | 2015-04-21T01:02:18Z | gsoltis | val |
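The patched `size()` loop above terminates on two conditions a concurrent consumer can create: a node whose value was cleared, and a self-linked node. A standalone sketch of that traversal follows; the names are illustrative and this is not the real `MpscLinkedQueueNode`, just the termination logic that prevents the reported spin.

```java
public class QueueSizeSketch {
    static final class Node {
        Object value;
        Node next;
        Node(Object value) { this.value = value; }
    }

    /** Count nodes, stopping on the states a concurrent consumer can leave behind. */
    static int size(Node head) {
        int count = 0;
        Node n = head;
        for (;;) {
            if (n == null || n.value == null) {
                break;                        // cleared value: node was consumed
            }
            Node next = n.next;
            if (n == next) {
                break;                        // self-linked node: stop instead of spinning
            }
            n = next;
            if (++count == Integer.MAX_VALUE) {
                break;                        // guard against integer overflow
            }
        }
        return count;
    }

    public static void main(String[] args) {
        Node a = new Node("a");
        Node b = new Node("b");
        a.next = b;                           // normal tail: b.next == null
        System.out.println(size(a));          // 2
        b.next = b;                           // simulate a self-linked, consumed tail
        System.out.println(size(a));          // 1, and the call terminates
    }
}
```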
netty/netty/3653_3693 | netty/netty | netty/netty/3653 | netty/netty/3693 | [
"timestamp(timedelta=12.0, similarity=0.9324330647973355)"
] | 891be30a28c6dc5a1edf1cb5a3690644cf4ff66e | bf68dc74f73384a56d21c97ab934cfe43c0d2480 | [
"@ejona86 - Thanks for brining my attention to this!\n\n@nmittler - FYI.\n",
"@nmittler - I think this means we can get away without keeping history for https://github.com/netty/netty/issues/3557. We can just send a connection error in these \"bad\" conditions and it is OK if state is \"invalid\" because we are ... | [
"What is the difference between `ctx.close()` and `ctx.channel().close()`?\n",
"The [AbstractChannelHandlerContext.close(..)](https://github.com/netty/netty/blob/master/transport/src/main/java/io/netty/channel/AbstractChannelHandlerContext.java#L440) does a bit of extra work related to the `invoker` to ensure you... | 2015-04-24T17:37:08Z | [
"defect"
] | HTTP/2 send goaway with error condition must close tcp connection | [section 5.4.1](https://tools.ietf.org/html/draft-ietf-httpbis-http2-17#section-5.4.1):
> After sending the GOAWAY frame for an error condition,
> the endpoint MUST close the TCP connection.
We should update the lifecycle manager's `goAway` documentation such that, upon a successful GOAWAY for an error condition, the channel must be closed. The `Http2ConnectionHandler` should also be updated to implement this.
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2LifecycleManager.java"
] | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2LifecycleManager.java"
] | [
"codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionHandlerTest.java"
] | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java
index 668e4e798b0..0d996be0120 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java
@@ -562,11 +562,17 @@ public ChannelFuture goAway(final ChannelHandlerContext ctx, final int lastStrea
future.addListener(new GenericFutureListener<ChannelFuture>() {
@Override
public void operationComplete(ChannelFuture future) throws Exception {
- if (!future.isSuccess()) {
- String msg = format("Sending GOAWAY failed: lastStreamId '%d', errorCode '%d', " +
- "debugData '%s'.", lastStreamId, errorCode, debugData);
- logger.error(msg, future.cause());
- ctx.channel().close();
+ if (future.isSuccess()) {
+ if (errorCode != NO_ERROR.code()) {
+ ctx.close();
+ }
+ } else {
+ if (logger.isErrorEnabled()) {
+ logger.error(
+ format("Sending GOAWAY failed: lastStreamId '%d', errorCode '%d', debugData '%s'.",
+ lastStreamId, errorCode, debugData), future.cause());
+ }
+ ctx.close();
}
}
});
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2LifecycleManager.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2LifecycleManager.java
index 03904e82516..5efd94a8f03 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2LifecycleManager.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2LifecycleManager.java
@@ -66,11 +66,12 @@ ChannelFuture resetStream(ChannelHandlerContext ctx, int streamId, long errorCod
ChannelPromise promise);
/**
- * Close the connection and prevent the peer from creating streams. After this call the peer
- * is not allowed to create any new streams and the local endpoint will be limited to creating streams with
- * {@code stream identifier <= lastStreamId}. This may result in sending a {@code GO_AWAY} frame (assuming we
- * have not already sent one with {@code Last-Stream-ID <= lastStreamId}), or may just return success if a
- * {@code GO_AWAY} has previously been sent.
+ * Prevents the peer from creating streams and close the connection if {@code errorCode} is not
+ * {@link Http2Error#NO_ERROR}. After this call the peer is not allowed to create any new streams and the local
+ * endpoint will be limited to creating streams with {@code stream identifier <= lastStreamId}. This may result in
+ * sending a {@code GO_AWAY} frame (assuming we have not already sent one with
+ * {@code Last-Stream-ID <= lastStreamId}), or may just return success if a {@code GO_AWAY} has previously been
+ * sent.
* @param ctx The context used for communication and buffer allocation if necessary.
* @param lastStreamId The last stream that the local endpoint is claiming it will accept.
* @param errorCode The rational as to why the connection is being closed. See {@link Http2Error}.
| diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionHandlerTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionHandlerTest.java
index 899af813d4d..832ff66f401 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionHandlerTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionHandlerTest.java
@@ -301,14 +301,27 @@ public Http2Stream answer(InvocationOnMock in) throws Throwable {
verify(ctx, times(1)).close(any(ChannelPromise.class));
}
+ @SuppressWarnings("unchecked")
+ @Test
public void canSendGoAwayFrame() throws Exception {
- handler = newHandler();
ByteBuf data = mock(ByteBuf.class);
long errorCode = Http2Error.INTERNAL_ERROR.code();
+ when(future.isDone()).thenReturn(true);
+ when(future.isSuccess()).thenReturn(true);
+ when(frameWriter.writeGoAway(eq(ctx), eq(STREAM_ID), eq(errorCode), eq(data), eq(promise))).thenReturn(future);
+ doAnswer(new Answer<Void>() {
+ @Override
+ public Void answer(InvocationOnMock invocation) throws Throwable {
+ invocation.getArgumentAt(0, GenericFutureListener.class).operationComplete(future);
+ return null;
+ }
+ }).when(future).addListener(any(GenericFutureListener.class));
+ handler = newHandler();
handler.goAway(ctx, STREAM_ID, errorCode, data, promise);
verify(connection).goAwaySent(eq(STREAM_ID), eq(errorCode), eq(data));
verify(frameWriter).writeGoAway(eq(ctx), eq(STREAM_ID), eq(errorCode), eq(data), eq(promise));
+ verify(ctx).close();
}
@Test
| test | train | 2015-04-29T08:43:06 | 2015-04-17T17:51:46Z | Scottmitch | val |
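The listener added by the patch boils down to a small decision: after writing GOAWAY, keep the connection open only when the write succeeded and carried `NO_ERROR` (a graceful shutdown); in every other case, close. A sketch of that rule follows; the constants and method names are illustrative, not Netty's API.

```java
public class GoAwaySketch {
    static final long NO_ERROR = 0x0;        // HTTP/2 error code 0x0

    /**
     * After attempting to write GOAWAY: close unless the write succeeded
     * and the frame carried NO_ERROR (i.e. a graceful shutdown).
     */
    static boolean shouldCloseConnection(boolean writeSucceeded, long errorCode) {
        if (writeSucceeded) {
            return errorCode != NO_ERROR;    // spec: GOAWAY for an error => MUST close TCP
        }
        return true;                         // failed write: log and close defensively
    }

    public static void main(String[] args) {
        long internalError = 0x2;            // INTERNAL_ERROR per the HTTP/2 draft
        System.out.println(shouldCloseConnection(true, internalError)); // true
        System.out.println(shouldCloseConnection(true, NO_ERROR));      // false
        System.out.println(shouldCloseConnection(false, NO_ERROR));     // true
    }
}
```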
netty/netty/3680_3695 | netty/netty | netty/netty/3680 | netty/netty/3695 | [
"timestamp(timedelta=13.0, similarity=0.8939241132547782)"
] | a958e9b5d5d9d03b5c85a602c061d44ea25d7a52 | e4fa82b2a92aa5d8e74c60670b42cb3feb62218e | [
"@spinscale thanks for reporting. Let me fix this.\n",
"Actually the scope of this needs to be broadened as we need to also ensure property access works in the static initializer. So this will be a bigger change.\n",
"@trustin @Scottmitch FYI\n",
"@normanmaurer - Lets fix this one and any others we can think of... | [] | 2015-04-25T15:58:59Z | [
"defect"
] | Enabled SecurityManager results in ClassNotFoundError during io.netty.util.NetUtil initialization | Hey,
Netty version used: 4.0.27
Setup: an application that has the security manager enabled emits a not particularly helpful `ClassNotFoundError` when it is not allowed to read a file because of the security manager. The cause is the attempt to read the file `/proc/sys/net/core/somaxconn` in https://github.com/netty/netty/blob/4.0/common/src/main/java/io/netty/util/NetUtil.java#L238
Exception looks like this:
```
Caused by: io.netty.channel.ChannelException: Unable to create Channel from class class io.netty.channel.socket.nio.NioServerSocketChannel
at io.netty.bootstrap.AbstractBootstrap$BootstrapChannelFactory.newChannel(AbstractBootstrap.java:455)
at io.netty.bootstrap.AbstractBootstrap.initAndRegister(AbstractBootstrap.java:306)
at io.netty.bootstrap.AbstractBootstrap.doBind(AbstractBootstrap.java:271)
at io.netty.bootstrap.AbstractBootstrap.bind(AbstractBootstrap.java:267)
at org.elasticsearch.http.netty.NettyHttpServerTransport$1.onPortNumber(NettyHttpServerTransport.java:261)
at org.elasticsearch.common.transport.PortsRange.iterate(PortsRange.java:69)
at org.elasticsearch.http.netty.NettyHttpServerTransport.doStart(NettyHttpServerTransport.java:257)
... 45 more
Caused by: java.lang.NoClassDefFoundError: Could not initialize class io.netty.util.NetUtil
at io.netty.channel.socket.DefaultServerSocketChannelConfig.<init>(DefaultServerSocketChannelConfig.java:39)
at io.netty.channel.socket.nio.NioServerSocketChannel$NioServerSocketChannelConfig.<init>(NioServerSocketChannel.java:189)
at io.netty.channel.socket.nio.NioServerSocketChannel$NioServerSocketChannelConfig.<init>(NioServerSocketChannel.java:187)
at io.netty.channel.socket.nio.NioServerSocketChannel.<init>(NioServerSocketChannel.java:85)
at io.netty.channel.socket.nio.NioServerSocketChannel.<init>(NioServerSocketChannel.java:70)
at sun.reflect.GeneratedConstructorAccessor7.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at java.lang.Class.newInstance(Class.java:442)
at io.netty.bootstrap.AbstractBootstrap$BootstrapChannelFactory.newChannel(AbstractBootstrap.java:453)
... 51 more
```
If you need more info, drop me a note; I am happy to help!
| [
"common/src/main/java/io/netty/util/NetUtil.java"
] | [
"common/src/main/java/io/netty/util/NetUtil.java"
] | [] | diff --git a/common/src/main/java/io/netty/util/NetUtil.java b/common/src/main/java/io/netty/util/NetUtil.java
index 00958c45b27..436da329a10 100644
--- a/common/src/main/java/io/netty/util/NetUtil.java
+++ b/common/src/main/java/io/netty/util/NetUtil.java
@@ -28,6 +28,8 @@
import java.net.NetworkInterface;
import java.net.SocketException;
import java.net.UnknownHostException;
+import java.security.AccessController;
+import java.security.PrivilegedAction;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;
@@ -230,38 +232,45 @@ public final class NetUtil {
LOOPBACK_IF = loopbackIface;
LOCALHOST = loopbackAddr;
- // Determine the default somaxconn (server socket backlog) value of the platform.
- // The known defaults:
- // - Windows NT Server 4.0+: 200
- // - Linux and Mac OS X: 128
- int somaxconn = PlatformDependent.isWindows() ? 200 : 128;
- File file = new File("/proc/sys/net/core/somaxconn");
- if (file.exists()) {
- BufferedReader in = null;
- try {
- in = new BufferedReader(new FileReader(file));
- somaxconn = Integer.parseInt(in.readLine());
- if (logger.isDebugEnabled()) {
- logger.debug("{}: {}", file, somaxconn);
- }
- } catch (Exception e) {
- logger.debug("Failed to get SOMAXCONN from: {}", file, e);
- } finally {
- if (in != null) {
+ // As a SecurityManager may prevent reading the somaxconn file we wrap this in a privileged block.
+ //
+ // See https://github.com/netty/netty/issues/3680
+ SOMAXCONN = AccessController.doPrivileged(new PrivilegedAction<Integer>() {
+ @Override
+ public Integer run() {
+ // Determine the default somaxconn (server socket backlog) value of the platform.
+ // The known defaults:
+ // - Windows NT Server 4.0+: 200
+ // - Linux and Mac OS X: 128
+ int somaxconn = PlatformDependent.isWindows() ? 200 : 128;
+ File file = new File("/proc/sys/net/core/somaxconn");
+ if (file.exists()) {
+ BufferedReader in = null;
try {
- in.close();
+ in = new BufferedReader(new FileReader(file));
+ somaxconn = Integer.parseInt(in.readLine());
+ if (logger.isDebugEnabled()) {
+ logger.debug("{}: {}", file, somaxconn);
+ }
} catch (Exception e) {
- // Ignored.
+ logger.debug("Failed to get SOMAXCONN from: {}", file, e);
+ } finally {
+ if (in != null) {
+ try {
+ in.close();
+ } catch (Exception e) {
+ // Ignored.
+ }
+ }
+ }
+ } else {
+ if (logger.isDebugEnabled()) {
+ logger.debug("{}: {} (non-existent)", file, somaxconn);
}
}
+ return somaxconn;
}
- } else {
- if (logger.isDebugEnabled()) {
- logger.debug("{}: {} (non-existent)", file, somaxconn);
- }
- }
-
- SOMAXCONN = somaxconn;
+ });
}
/**
| null | train | train | 2015-04-22T09:11:11 | 2015-04-21T11:32:33Z | spinscale | val |
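The shape of the fix, probing `/proc/sys/net/core/somaxconn` inside a privileged block and falling back to a platform default on any failure, can be sketched as a standalone method. The names below are illustrative rather than Netty's `NetUtil`; the point is that a SecurityManager denial (or a missing file) yields the fallback instead of an exception escaping a static initializer as `NoClassDefFoundError`.

```java
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.security.AccessController;
import java.security.PrivilegedAction;

public class SomaxconnSketch {
    /**
     * Read an integer from a /proc-style file inside a privileged block.
     * Any failure (SecurityManager denial, missing file, unparsable content)
     * yields the fallback instead of propagating an exception.
     */
    static int readIntOrFallback(final String path, final int fallback) {
        return AccessController.doPrivileged(new PrivilegedAction<Integer>() {
            @Override
            public Integer run() {
                try (BufferedReader in = new BufferedReader(new FileReader(new File(path)))) {
                    return Integer.parseInt(in.readLine().trim());
                } catch (Exception e) {
                    return fallback; // denied or unreadable: keep the platform default
                }
            }
        });
    }

    public static void main(String[] args) {
        // A path that does not exist behaves like a denied read: the fallback wins.
        System.out.println(readIntOrFallback("/no/such/path/somaxconn", 128)); // 128
    }
}
```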
netty/netty/3698_3708 | netty/netty | netty/netty/3698 | netty/netty/3708 | [
"timestamp(timedelta=57.0, similarity=0.8817081669286014)"
] | 55fbf007f04fbba7bf50028f3c8b35d6c5ea5947 | 753194b643a8d6643f6b61aca1acd4fb19a59bfa | [
"@benevans Current io.netty.channel.sctp.SctpMessage is hiding some controls and, it could have been done better. Initially it was designed to work with SIGTRAN protocols and I didn't include ordering/completeness flags, which were not used at that time. Please feel free to send your pull request.\n",
"@benevan... | [] | 2015-04-29T13:55:07Z | [
"feature"
] | Cannot use SCTP unordered flag | Some applications, e.g. SIP over SCTP ([RFC4168](https://tools.ietf.org/html/rfc4168#section-5.1)), require the use of SCTP's unordered delivery flag.
Currently in Netty there is no way to set the unordered flag on an outgoing message. I assume the place to do this would be in `io.netty.channel.sctp.SctpMessage`, alongside the payload protocol ID and stream ID.
I'm happy to submit a PR but just wanted to check first in case there was some reason why this wasn't implemented. This applies to Netty 4.0 and later.
| [
"transport-sctp/src/main/java/io/netty/channel/sctp/SctpMessage.java",
"transport-sctp/src/main/java/io/netty/channel/sctp/nio/NioSctpChannel.java",
"transport-sctp/src/main/java/io/netty/channel/sctp/oio/OioSctpChannel.java",
"transport-sctp/src/main/java/io/netty/handler/codec/sctp/SctpMessageCompletionHand... | [
"transport-sctp/src/main/java/io/netty/channel/sctp/SctpMessage.java",
"transport-sctp/src/main/java/io/netty/channel/sctp/nio/NioSctpChannel.java",
"transport-sctp/src/main/java/io/netty/channel/sctp/oio/OioSctpChannel.java",
"transport-sctp/src/main/java/io/netty/handler/codec/sctp/SctpMessageCompletionHand... | [
"testsuite/src/main/java/io/netty/testsuite/transport/sctp/SctpEchoTest.java"
] | diff --git a/transport-sctp/src/main/java/io/netty/channel/sctp/SctpMessage.java b/transport-sctp/src/main/java/io/netty/channel/sctp/SctpMessage.java
index 6938f3bb3dd..f10f8eea56c 100644
--- a/transport-sctp/src/main/java/io/netty/channel/sctp/SctpMessage.java
+++ b/transport-sctp/src/main/java/io/netty/channel/sctp/SctpMessage.java
@@ -26,6 +26,7 @@
public final class SctpMessage extends DefaultByteBufHolder {
private final int streamIdentifier;
private final int protocolIdentifier;
+ private final boolean unordered;
private final MessageInfo msgInfo;
@@ -36,9 +37,21 @@ public final class SctpMessage extends DefaultByteBufHolder {
* @param payloadBuffer channel buffer
*/
public SctpMessage(int protocolIdentifier, int streamIdentifier, ByteBuf payloadBuffer) {
+ this(protocolIdentifier, streamIdentifier, false, payloadBuffer);
+ }
+
+ /**
+ * Essential data that is being carried within SCTP Data Chunk
+ * @param protocolIdentifier of payload
+ * @param streamIdentifier that you want to send the payload
+ * @param unordered if {@literal true}, the SCTP Data Chunk will be sent with the U (unordered) flag set.
+ * @param payloadBuffer channel buffer
+ */
+ public SctpMessage(int protocolIdentifier, int streamIdentifier, boolean unordered, ByteBuf payloadBuffer) {
super(payloadBuffer);
this.protocolIdentifier = protocolIdentifier;
this.streamIdentifier = streamIdentifier;
+ this.unordered = unordered;
msgInfo = null;
}
@@ -55,6 +68,7 @@ public SctpMessage(MessageInfo msgInfo, ByteBuf payloadBuffer) {
this.msgInfo = msgInfo;
streamIdentifier = msgInfo.streamNumber();
protocolIdentifier = msgInfo.payloadProtocolID();
+ unordered = msgInfo.isUnordered();
}
/**
@@ -71,6 +85,13 @@ public int protocolIdentifier() {
return protocolIdentifier;
}
+ /**
+ * return the unordered flag
+ */
+ public boolean isUnordered() {
+ return unordered;
+ }
+
/**
* Return the {@link MessageInfo} for inbound messages or {@code null} for
* outbound messages.
@@ -111,6 +132,10 @@ public boolean equals(Object o) {
return false;
}
+ if (unordered != sctpFrame.unordered) {
+ return false;
+ }
+
if (!content().equals(sctpFrame.content())) {
return false;
}
@@ -129,7 +154,7 @@ public int hashCode() {
@Override
public SctpMessage copy() {
if (msgInfo == null) {
- return new SctpMessage(protocolIdentifier, streamIdentifier, content().copy());
+ return new SctpMessage(protocolIdentifier, streamIdentifier, unordered, content().copy());
} else {
return new SctpMessage(msgInfo, content().copy());
}
@@ -138,7 +163,7 @@ public SctpMessage copy() {
@Override
public SctpMessage duplicate() {
if (msgInfo == null) {
- return new SctpMessage(protocolIdentifier, streamIdentifier, content().duplicate());
+ return new SctpMessage(protocolIdentifier, streamIdentifier, unordered, content().duplicate());
} else {
return new SctpMessage(msgInfo, content().copy());
}
@@ -161,10 +186,12 @@ public String toString() {
if (refCnt() == 0) {
return "SctpFrame{" +
"streamIdentifier=" + streamIdentifier + ", protocolIdentifier=" + protocolIdentifier +
+ ", unordered=" + unordered +
", data=(FREED)}";
}
return "SctpFrame{" +
"streamIdentifier=" + streamIdentifier + ", protocolIdentifier=" + protocolIdentifier +
+ ", unordered=" + unordered +
", data=" + ByteBufUtil.hexDump(content()) + '}';
}
}
diff --git a/transport-sctp/src/main/java/io/netty/channel/sctp/nio/NioSctpChannel.java b/transport-sctp/src/main/java/io/netty/channel/sctp/nio/NioSctpChannel.java
index 8542ce5fbef..8ce914a2b65 100644
--- a/transport-sctp/src/main/java/io/netty/channel/sctp/nio/NioSctpChannel.java
+++ b/transport-sctp/src/main/java/io/netty/channel/sctp/nio/NioSctpChannel.java
@@ -320,6 +320,7 @@ protected boolean doWriteMessage(Object msg, ChannelOutboundBuffer in) throws Ex
final MessageInfo mi = MessageInfo.createOutgoing(association(), null, packet.streamIdentifier());
mi.payloadProtocolID(packet.protocolIdentifier());
mi.streamNumber(packet.streamIdentifier());
+ mi.unordered(packet.isUnordered());
final int writtenBytes = javaChannel().send(nioData, mi);
return writtenBytes > 0;
@@ -334,7 +335,8 @@ protected final Object filterOutboundMessage(Object msg) throws Exception {
return m;
}
- return new SctpMessage(m.protocolIdentifier(), m.streamIdentifier(), newDirectBuffer(m, buf));
+ return new SctpMessage(m.protocolIdentifier(), m.streamIdentifier(), m.isUnordered(),
+ newDirectBuffer(m, buf));
}
throw new UnsupportedOperationException(
diff --git a/transport-sctp/src/main/java/io/netty/channel/sctp/oio/OioSctpChannel.java b/transport-sctp/src/main/java/io/netty/channel/sctp/oio/OioSctpChannel.java
index e12981f7a6e..6416ea4e60e 100755
--- a/transport-sctp/src/main/java/io/netty/channel/sctp/oio/OioSctpChannel.java
+++ b/transport-sctp/src/main/java/io/netty/channel/sctp/oio/OioSctpChannel.java
@@ -263,6 +263,7 @@ protected void doWrite(ChannelOutboundBuffer in) throws Exception {
final MessageInfo mi = MessageInfo.createOutgoing(association(), null, packet.streamIdentifier());
mi.payloadProtocolID(packet.protocolIdentifier());
mi.streamNumber(packet.streamIdentifier());
+ mi.unordered(packet.isUnordered());
ch.send(nioData, mi);
written ++;
diff --git a/transport-sctp/src/main/java/io/netty/handler/codec/sctp/SctpMessageCompletionHandler.java b/transport-sctp/src/main/java/io/netty/handler/codec/sctp/SctpMessageCompletionHandler.java
index 30559e501f2..01f5ae3c4f8 100644
--- a/transport-sctp/src/main/java/io/netty/handler/codec/sctp/SctpMessageCompletionHandler.java
+++ b/transport-sctp/src/main/java/io/netty/handler/codec/sctp/SctpMessageCompletionHandler.java
@@ -41,6 +41,7 @@ protected void decode(ChannelHandlerContext ctx, SctpMessage msg, List<Object> o
final int protocolIdentifier = msg.protocolIdentifier();
final int streamIdentifier = msg.streamIdentifier();
final boolean isComplete = msg.isComplete();
+ final boolean isUnordered = msg.isUnordered();
ByteBuf frag;
if (fragments.containsKey(streamIdentifier)) {
@@ -61,6 +62,7 @@ protected void decode(ChannelHandlerContext ctx, SctpMessage msg, List<Object> o
SctpMessage assembledMsg = new SctpMessage(
protocolIdentifier,
streamIdentifier,
+ isUnordered,
Unpooled.wrappedBuffer(frag, byteBuf));
out.add(assembledMsg);
} else {
diff --git a/transport-sctp/src/main/java/io/netty/handler/codec/sctp/SctpOutboundByteStreamHandler.java b/transport-sctp/src/main/java/io/netty/handler/codec/sctp/SctpOutboundByteStreamHandler.java
index 26cecaca463..1cd098be58c 100644
--- a/transport-sctp/src/main/java/io/netty/handler/codec/sctp/SctpOutboundByteStreamHandler.java
+++ b/transport-sctp/src/main/java/io/netty/handler/codec/sctp/SctpOutboundByteStreamHandler.java
@@ -25,23 +25,34 @@
/**
* A ChannelHandler which transform {@link ByteBuf} to {@link SctpMessage} and send it through a specific stream
* with given protocol identifier.
- *
+ * Unordered delivery of all messages may be requested by passing unordered = true to the constructor.
*/
public class SctpOutboundByteStreamHandler extends MessageToMessageEncoder<ByteBuf> {
private final int streamIdentifier;
private final int protocolIdentifier;
+ private final boolean unordered;
/**
* @param streamIdentifier stream number, this should be >=0 or <= max stream number of the association.
* @param protocolIdentifier supported application protocol id.
*/
public SctpOutboundByteStreamHandler(int streamIdentifier, int protocolIdentifier) {
+ this(streamIdentifier, protocolIdentifier, false);
+ }
+
+ /**
+ * @param streamIdentifier stream number, this should be >=0 or <= max stream number of the association.
+ * @param protocolIdentifier supported application protocol id.
+ * @param unordered if {@literal true}, SCTP Data Chunks will be sent with the U (unordered) flag set.
+ */
+ public SctpOutboundByteStreamHandler(int streamIdentifier, int protocolIdentifier, boolean unordered) {
this.streamIdentifier = streamIdentifier;
this.protocolIdentifier = protocolIdentifier;
+ this.unordered = unordered;
}
@Override
protected void encode(ChannelHandlerContext ctx, ByteBuf msg, List<Object> out) throws Exception {
- out.add(new SctpMessage(streamIdentifier, protocolIdentifier, msg.retain()));
+ out.add(new SctpMessage(streamIdentifier, protocolIdentifier, unordered, msg.retain()));
}
}
| diff --git a/testsuite/src/main/java/io/netty/testsuite/transport/sctp/SctpEchoTest.java b/testsuite/src/main/java/io/netty/testsuite/transport/sctp/SctpEchoTest.java
index 0b126b8a136..067d8e6a671 100644
--- a/testsuite/src/main/java/io/netty/testsuite/transport/sctp/SctpEchoTest.java
+++ b/testsuite/src/main/java/io/netty/testsuite/transport/sctp/SctpEchoTest.java
@@ -53,10 +53,20 @@ public void testSimpleEcho() throws Throwable {
}
public void testSimpleEcho(ServerBootstrap sb, Bootstrap cb) throws Throwable {
- testSimpleEcho0(sb, cb);
+ testSimpleEcho0(sb, cb, false);
}
- private static void testSimpleEcho0(ServerBootstrap sb, Bootstrap cb) throws Throwable {
+ @Test
+ public void testSimpleEchoUnordered() throws Throwable {
+ Assume.assumeTrue(TestUtils.isSctpSupported());
+ run();
+ }
+
+ public void testSimpleEchoUnordered(ServerBootstrap sb, Bootstrap cb) throws Throwable {
+ testSimpleEcho0(sb, cb, true);
+ }
+
+ private static void testSimpleEcho0(ServerBootstrap sb, Bootstrap cb, final boolean unordered) throws Throwable {
final EchoHandler sh = new EchoHandler();
final EchoHandler ch = new EchoHandler();
@@ -66,7 +76,7 @@ public void initChannel(SctpChannel c) throws Exception {
c.pipeline().addLast(
new SctpMessageCompletionHandler(),
new SctpInboundByteStreamHandler(0, 0),
- new SctpOutboundByteStreamHandler(0, 0),
+ new SctpOutboundByteStreamHandler(0, 0, unordered),
sh);
}
});
@@ -76,7 +86,7 @@ public void initChannel(SctpChannel c) throws Exception {
c.pipeline().addLast(
new SctpMessageCompletionHandler(),
new SctpInboundByteStreamHandler(0, 0),
- new SctpOutboundByteStreamHandler(0, 0),
+ new SctpOutboundByteStreamHandler(0, 0, unordered),
ch);
}
});
| val | train | 2015-04-29T08:42:55 | 2015-04-28T10:10:53Z | benevans | val |
netty/netty/3717_3723 | netty/netty | netty/netty/3717 | netty/netty/3723 | [
"timestamp(timedelta=8946.0, similarity=1.0)"
] | 37dc6a41a6cbf5cfadf430a5773bb61bc6e2173b | 0fb3ab6469078b9891115a275aed8c369c367847 | [
"+1\n",
"SGTM. @louiscryan - are you working on a PR?\n",
"Not yet but feel free to assign to me.\n\nOn Fri, May 1, 2015 at 1:27 PM, Scott Mitchell notifications@github.com\nwrote:\n\n> SGTM. @louiscryan https://github.com/louiscryan - are you working on a\n> PR?\n> \n> —\n> Reply to this email directly or view... | [] | 2015-05-04T20:11:35Z | [
"feature"
] | Http2LocalFlowController.consumeBytes should indicate if write occurred | Calls to Http2LocalFlowController.consumeBytes can trigger the generation of a WINDOW_UPDATE to the remote side. Callers of this method need to know if they should flush because an update was sent
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2LocalFlowController.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DelegatingDecompressorFrameListener.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2LocalFlowController.java",
"microbench/src/main/java/io... | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2LocalFlowController.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/DelegatingDecompressorFrameListener.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2LocalFlowController.java",
"microbench/src/main/java/io... | [
"codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2LocalFlowControllerTest.java"
] | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2LocalFlowController.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2LocalFlowController.java
index 6d6369b4927..30233bf0560 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2LocalFlowController.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DefaultHttp2LocalFlowController.java
@@ -140,7 +140,7 @@ public void incrementWindowSize(ChannelHandlerContext ctx, Http2Stream stream, i
}
@Override
- public void consumeBytes(ChannelHandlerContext ctx, Http2Stream stream, int numBytes)
+ public boolean consumeBytes(ChannelHandlerContext ctx, Http2Stream stream, int numBytes)
throws Http2Exception {
// Streams automatically consume all remaining bytes when they are closed, so just ignore
// if already closed.
@@ -152,9 +152,11 @@ public void consumeBytes(ChannelHandlerContext ctx, Http2Stream stream, int numB
throw new IllegalArgumentException("numBytes must be positive");
}
- connectionState().consumeBytes(ctx, numBytes);
- state(stream).consumeBytes(ctx, numBytes);
+ boolean windowUpdateSent = connectionState().consumeBytes(ctx, numBytes);
+ windowUpdateSent |= state(stream).consumeBytes(ctx, numBytes);
+ return windowUpdateSent;
}
+ return false;
}
@Override
@@ -374,10 +376,10 @@ private void returnProcessedBytes(int delta) throws Http2Exception {
}
@Override
- public void consumeBytes(ChannelHandlerContext ctx, int numBytes) throws Http2Exception {
+ public boolean consumeBytes(ChannelHandlerContext ctx, int numBytes) throws Http2Exception {
// Return the bytes processed and update the window.
returnProcessedBytes(numBytes);
- writeWindowUpdateIfNeeded(ctx);
+ return writeWindowUpdateIfNeeded(ctx);
}
@Override
@@ -386,15 +388,17 @@ public int unconsumedBytes() {
}
@Override
- public void writeWindowUpdateIfNeeded(ChannelHandlerContext ctx) throws Http2Exception {
+ public boolean writeWindowUpdateIfNeeded(ChannelHandlerContext ctx) throws Http2Exception {
if (endOfStream || initialStreamWindowSize <= 0) {
- return;
+ return false;
}
int threshold = (int) (initialStreamWindowSize * streamWindowUpdateRatio);
if (processedWindow <= threshold) {
writeWindowUpdate(ctx);
+ return true;
}
+ return false;
}
/**
@@ -444,12 +448,13 @@ public void incrementInitialStreamWindow(int delta) {
}
@Override
- public void writeWindowUpdateIfNeeded(ChannelHandlerContext ctx) throws Http2Exception {
+ public boolean writeWindowUpdateIfNeeded(ChannelHandlerContext ctx) throws Http2Exception {
throw new UnsupportedOperationException();
}
@Override
- public void consumeBytes(ChannelHandlerContext ctx, int numBytes) throws Http2Exception {
+ public boolean consumeBytes(ChannelHandlerContext ctx, int numBytes) throws Http2Exception {
+ return false;
}
@Override
@@ -503,10 +508,21 @@ private interface FlowState {
/**
* Updates the flow control window for this stream if it is appropriate.
+ *
+ * @return true if {@code WINDOW_UPDATE} was written, false otherwise.
*/
- void writeWindowUpdateIfNeeded(ChannelHandlerContext ctx) throws Http2Exception;
+ boolean writeWindowUpdateIfNeeded(ChannelHandlerContext ctx) throws Http2Exception;
- void consumeBytes(ChannelHandlerContext ctx, int numBytes) throws Http2Exception;
+ /**
+ * Indicates that the application has consumed {@code numBytes} from the connection or stream and is
+ * ready to receive more data.
+ *
+ * @param ctx the channel handler context to use when sending a {@code WINDOW_UPDATE} if appropriate
+ * @param numBytes the number of bytes to be returned to the flow control window.
+ * @return true if {@code WINDOW_UPDATE} was written, false otherwise.
+ * @throws Http2Exception
+ */
+ boolean consumeBytes(ChannelHandlerContext ctx, int numBytes) throws Http2Exception;
int unconsumedBytes();
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DelegatingDecompressorFrameListener.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DelegatingDecompressorFrameListener.java
index 2fd5c3f7988..00808137400 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DelegatingDecompressorFrameListener.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DelegatingDecompressorFrameListener.java
@@ -328,7 +328,7 @@ public void receiveFlowControlledFrame(ChannelHandlerContext ctx, Http2Stream st
}
@Override
- public void consumeBytes(ChannelHandlerContext ctx, Http2Stream stream, int numBytes)
+ public boolean consumeBytes(ChannelHandlerContext ctx, Http2Stream stream, int numBytes)
throws Http2Exception {
Http2Decompressor decompressor = decompressor(stream);
Http2Decompressor copy = null;
@@ -339,7 +339,7 @@ public void consumeBytes(ChannelHandlerContext ctx, Http2Stream stream, int numB
// Convert the uncompressed consumed bytes to compressed (on the wire) bytes.
numBytes = decompressor.consumeProcessedBytes(numBytes);
}
- flowController.consumeBytes(ctx, stream, numBytes);
+ return flowController.consumeBytes(ctx, stream, numBytes);
} catch (Http2Exception e) {
if (copy != null) {
stream.setProperty(propertyKey, copy);
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2LocalFlowController.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2LocalFlowController.java
index a06e42be0bd..bdccd119b9e 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2LocalFlowController.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2LocalFlowController.java
@@ -57,10 +57,11 @@ void receiveFlowControlledFrame(ChannelHandlerContext ctx, Http2Stream stream, B
* If {@code stream} is {@code null} or closed (i.e. {@link Http2Stream#state()} method returns {@link
* Http2Stream.State#CLOSED}), calling this method has no effect.
* @param numBytes the number of bytes to be returned to the flow control window.
+ * @return true if a {@code WINDOW_UPDATE} was sent, false otherwise.
* @throws Http2Exception if the number of bytes returned exceeds the {@link #unconsumedBytes(Http2Stream)} for the
* stream.
*/
- void consumeBytes(ChannelHandlerContext ctx, Http2Stream stream, int numBytes) throws Http2Exception;
+ boolean consumeBytes(ChannelHandlerContext ctx, Http2Stream stream, int numBytes) throws Http2Exception;
/**
* The number of bytes for the given stream that have been received but not yet consumed by the
diff --git a/microbench/src/main/java/io/netty/microbench/http2/NoopHttp2LocalFlowController.java b/microbench/src/main/java/io/netty/microbench/http2/NoopHttp2LocalFlowController.java
index 816d4768fb5..045c66f6bbb 100644
--- a/microbench/src/main/java/io/netty/microbench/http2/NoopHttp2LocalFlowController.java
+++ b/microbench/src/main/java/io/netty/microbench/http2/NoopHttp2LocalFlowController.java
@@ -57,7 +57,8 @@ public void receiveFlowControlledFrame(ChannelHandlerContext ctx, Http2Stream st
}
@Override
- public void consumeBytes(ChannelHandlerContext ctx, Http2Stream stream, int numBytes) throws Http2Exception {
+ public boolean consumeBytes(ChannelHandlerContext ctx, Http2Stream stream, int numBytes) throws Http2Exception {
+ return false;
}
@Override
| diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2LocalFlowControllerTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2LocalFlowControllerTest.java
index 5fc4e0df87b..06051a74a81 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2LocalFlowControllerTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/DefaultHttp2LocalFlowControllerTest.java
@@ -18,6 +18,8 @@
import static io.netty.handler.codec.http2.Http2CodecUtil.CONNECTION_STREAM_ID;
import static io.netty.handler.codec.http2.Http2CodecUtil.DEFAULT_WINDOW_SIZE;
import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
import static org.mockito.Matchers.any;
import static org.mockito.Matchers.anyInt;
import static org.mockito.Matchers.eq;
@@ -86,11 +88,11 @@ public void windowUpdateShouldSendOnceBytesReturned() throws Http2Exception {
receiveFlowControlledFrame(STREAM_ID, dataSize, 0, false);
// Return only a few bytes and verify that the WINDOW_UPDATE hasn't been sent.
- consumeBytes(STREAM_ID, 10);
+ assertFalse(consumeBytes(STREAM_ID, 10));
verifyWindowUpdateNotSent(CONNECTION_STREAM_ID);
// Return the rest and verify the WINDOW_UPDATE is sent.
- consumeBytes(STREAM_ID, dataSize - 10);
+ assertTrue(consumeBytes(STREAM_ID, dataSize - 10));
verifyWindowUpdateSent(STREAM_ID, dataSize);
verifyWindowUpdateSent(CONNECTION_STREAM_ID, dataSize);
}
@@ -110,7 +112,7 @@ public void windowUpdateShouldNotBeSentAfterEndOfStream() throws Http2Exception
verifyWindowUpdateNotSent(CONNECTION_STREAM_ID);
verifyWindowUpdateNotSent(STREAM_ID);
- consumeBytes(STREAM_ID, dataSize);
+ assertTrue(consumeBytes(STREAM_ID, dataSize));
verifyWindowUpdateSent(CONNECTION_STREAM_ID, dataSize);
verifyWindowUpdateNotSent(STREAM_ID);
}
@@ -123,7 +125,7 @@ public void halfWindowRemainingShouldUpdateAllWindows() throws Http2Exception {
// Don't set end-of-stream so we'll get a window update for the stream as well.
receiveFlowControlledFrame(STREAM_ID, dataSize, 0, false);
- consumeBytes(STREAM_ID, dataSize);
+ assertTrue(consumeBytes(STREAM_ID, dataSize));
verifyWindowUpdateSent(CONNECTION_STREAM_ID, windowDelta);
verifyWindowUpdateSent(STREAM_ID, windowDelta);
}
@@ -150,7 +152,7 @@ public void initialWindowUpdateShouldAllowMoreFrames() throws Http2Exception {
// Send the next frame and verify that the expected window updates were sent.
receiveFlowControlledFrame(STREAM_ID, initialWindowSize, 0, false);
- consumeBytes(STREAM_ID, initialWindowSize);
+ assertTrue(consumeBytes(STREAM_ID, initialWindowSize));
int delta = newInitialWindowSize - initialWindowSize;
verifyWindowUpdateSent(STREAM_ID, delta);
verifyWindowUpdateSent(CONNECTION_STREAM_ID, delta);
@@ -172,7 +174,7 @@ public void connectionWindowShouldAdjustWithMultipleStreams() throws Http2Except
verifyWindowUpdateNotSent(CONNECTION_STREAM_ID);
assertEquals(DEFAULT_WINDOW_SIZE - data1, window(STREAM_ID));
assertEquals(DEFAULT_WINDOW_SIZE - data1, window(CONNECTION_STREAM_ID));
- consumeBytes(STREAM_ID, data1);
+ assertTrue(consumeBytes(STREAM_ID, data1));
verifyWindowUpdateSent(STREAM_ID, data1);
verifyWindowUpdateSent(CONNECTION_STREAM_ID, data1);
@@ -191,8 +193,8 @@ public void connectionWindowShouldAdjustWithMultipleStreams() throws Http2Except
assertEquals(DEFAULT_WINDOW_SIZE - data1, window(STREAM_ID));
assertEquals(DEFAULT_WINDOW_SIZE - data1, window(newStreamId));
assertEquals(DEFAULT_WINDOW_SIZE - (data1 << 1), window(CONNECTION_STREAM_ID));
- consumeBytes(STREAM_ID, data1);
- consumeBytes(newStreamId, data2);
+ assertFalse(consumeBytes(STREAM_ID, data1));
+ assertTrue(consumeBytes(newStreamId, data2));
verifyWindowUpdateNotSent(STREAM_ID);
verifyWindowUpdateNotSent(newStreamId);
verifyWindowUpdateSent(CONNECTION_STREAM_ID, data1 + data2);
@@ -266,8 +268,8 @@ private void testRatio(float ratio, int newDefaultWindowSize, int newStreamId, b
assertEquals(DEFAULT_WINDOW_SIZE - data2, window(STREAM_ID));
assertEquals(newDefaultWindowSize - data1, window(newStreamId));
assertEquals(newDefaultWindowSize - data2 - data1, window(CONNECTION_STREAM_ID));
- consumeBytes(STREAM_ID, data2);
- consumeBytes(newStreamId, data1);
+ assertFalse(consumeBytes(STREAM_ID, data2));
+ assertTrue(consumeBytes(newStreamId, data1));
verifyWindowUpdateNotSent(STREAM_ID);
verifyWindowUpdateSent(newStreamId, data1);
verifyWindowUpdateSent(CONNECTION_STREAM_ID, data1 + data2);
@@ -305,8 +307,8 @@ private static ByteBuf dummyData(int size) {
return buffer;
}
- private void consumeBytes(int streamId, int numBytes) throws Http2Exception {
- controller.consumeBytes(ctx, stream(streamId), numBytes);
+ private boolean consumeBytes(int streamId, int numBytes) throws Http2Exception {
+ return controller.consumeBytes(ctx, stream(streamId), numBytes);
}
private void verifyWindowUpdateSent(int streamId, int windowSizeIncrement) {
| train | train | 2015-05-04T07:30:30 | 2015-05-01T19:54:17Z | louiscryan | val |
netty/netty/3707_3732 | netty/netty | netty/netty/3707 | netty/netty/3732 | [
"timestamp(timedelta=32.0, similarity=0.8681261088636044)"
] | 59209f16bfd736f3d1dc8e7da68cb50a5d74c24f | c00586edcd965db677fdcfacb094fb410040726e | [
"@doom369 can you show me the code ?\n",
"@normanmaurer, sure.\n\nServer : \n\n```\n private static SslContext initSslContext(String serverCertPath, String serverKeyPath, String serverPass,\n String clientCertPath,\n SslPro... | [
"Use `@RunWith(Parameterized.class)` instead.\n",
"Use `@RunWith(Parameterized.class)` instead.\n",
"`\"keycertChainFile\"`\n",
"`\"keyCertChainFile is not a file: \"`\n",
"Do we need to support `SSL_AIDX_DSA` either?\n",
"Can you show me an example?\n\n> Am 06.05.2015 um 07:56 schrieb Trustin Lee notific... | 2015-05-05T20:29:03Z | [
"feature"
] | Mutual SSL with OpenSSL | Hi guys,
I tried to use the OpenSSL provider for mutual SSL auth and I am getting this error:
```
io.netty.handler.codec.DecoderException: javax.net.ssl.SSLException: error:140890B2:SSL routines:SSL3_GET_CLIENT_CERTIFICATE:no certificate returned
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:346)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:229)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
at io.netty.handler.timeout.ReadTimeoutHandler.channelRead(ReadTimeoutHandler.java:150)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:847)
at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:618)
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:329)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:250)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)
Caused by: javax.net.ssl.SSLException: error:140890B2:SSL routines:SSL3_GET_CLIENT_CERTIFICATE:no certificate returned
at io.netty.handler.ssl.OpenSslEngine.unwrap(OpenSslEngine.java:590)
at io.netty.handler.ssl.OpenSslEngine.unwrap(OpenSslEngine.java:660)
at javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:624)
at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1114)
at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:981)
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:934)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:315)
... 13 more
```
With the JDK provider everything seems fine, and with the OpenSSL provider without mutual SSL everything also works. Does mutual auth with the OpenSSL provider require some additional configuration?
My code is almost the same as in JdkSslEngineTest.class, specifically the mySetupMutualAuth method.
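For reference, the patch that resolved this issue adds an OpenSslClientContext constructor that accepts client key material, so a mutual-auth client context can be built roughly along these lines (a sketch only — file paths are placeholders, Netty is assumed on the classpath, and the call throws SSLException):

```java
File trustCertChainFile = new File("ca.pem");     // CA chain used to verify the server
File keyCertChainFile   = new File("client.pem"); // client certificate chain (public key)
File keyFile            = new File("client.key"); // PKCS#8 private key in PEM format

SslContext clientCtx = new OpenSslClientContext(
        trustCertChainFile,
        null,                               // TrustManagerFactory: derive from trustCertChainFile
        keyCertChainFile,
        keyFile,
        null,                               // key password; null if the key is unencrypted
        null,                               // KeyManagerFactory
        null,                               // ciphers: use defaults
        IdentityCipherSuiteFilter.INSTANCE,
        null, 0, 0);                        // no ALPN config, default cache size/timeout
```

Note that keyCertChainFile and keyFile must both be non-null (or both null); the new constructor rejects a mix.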
| [
"handler/src/main/java/io/netty/handler/ssl/JdkSslContext.java",
"handler/src/main/java/io/netty/handler/ssl/OpenSslClientContext.java",
"handler/src/main/java/io/netty/handler/ssl/OpenSslContext.java",
"handler/src/main/java/io/netty/handler/ssl/OpenSslEngine.java",
"handler/src/main/java/io/netty/handler/... | [
"handler/src/main/java/io/netty/handler/ssl/JdkSslContext.java",
"handler/src/main/java/io/netty/handler/ssl/OpenSslClientContext.java",
"handler/src/main/java/io/netty/handler/ssl/OpenSslContext.java",
"handler/src/main/java/io/netty/handler/ssl/OpenSslEngine.java",
"handler/src/main/java/io/netty/handler/... | [
"handler/src/test/java/io/netty/handler/ssl/JdkSslEngineTest.java",
"handler/src/test/java/io/netty/handler/ssl/OpenSslEngineTest.java",
"handler/src/test/java/io/netty/handler/ssl/SSLEngineTest.java"
] | diff --git a/handler/src/main/java/io/netty/handler/ssl/JdkSslContext.java b/handler/src/main/java/io/netty/handler/ssl/JdkSslContext.java
index e031bd54411..54ee2befbf6 100644
--- a/handler/src/main/java/io/netty/handler/ssl/JdkSslContext.java
+++ b/handler/src/main/java/io/netty/handler/ssl/JdkSslContext.java
@@ -16,9 +16,7 @@
package io.netty.handler.ssl;
-import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;
-import io.netty.buffer.ByteBufInputStream;
import io.netty.util.internal.EmptyArrays;
import io.netty.util.internal.logging.InternalLogger;
import io.netty.util.internal.logging.InternalLoggerFactory;
@@ -28,8 +26,6 @@
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLSessionContext;
-import javax.net.ssl.TrustManagerFactory;
-import javax.security.auth.x500.X500Principal;
import java.io.File;
import java.io.IOException;
import java.security.InvalidAlgorithmParameterException;
@@ -40,8 +36,6 @@
import java.security.Security;
import java.security.UnrecoverableKeyException;
import java.security.cert.CertificateException;
-import java.security.cert.CertificateFactory;
-import java.security.cert.X509Certificate;
import java.security.spec.InvalidKeySpecException;
import java.util.ArrayList;
import java.util.Arrays;
@@ -325,39 +319,4 @@ protected static KeyManagerFactory buildKeyManagerFactory(File certChainFile,
return kmf;
}
-
- /**
- * Build a {@link TrustManagerFactory} from a certificate chain file.
- * @param certChainFile The certificate file to build from.
- * @param trustManagerFactory The existing {@link TrustManagerFactory} that will be used if not {@code null}.
- * @return A {@link TrustManagerFactory} which contains the certificates in {@code certChainFile}
- */
- protected static TrustManagerFactory buildTrustManagerFactory(File certChainFile,
- TrustManagerFactory trustManagerFactory)
- throws NoSuchAlgorithmException, CertificateException, KeyStoreException, IOException {
- KeyStore ks = KeyStore.getInstance("JKS");
- ks.load(null, null);
- CertificateFactory cf = CertificateFactory.getInstance("X.509");
-
- ByteBuf[] certs = PemReader.readCertificates(certChainFile);
- try {
- for (ByteBuf buf: certs) {
- X509Certificate cert = (X509Certificate) cf.generateCertificate(new ByteBufInputStream(buf));
- X500Principal principal = cert.getSubjectX500Principal();
- ks.setCertificateEntry(principal.getName("RFC2253"), cert);
- }
- } finally {
- for (ByteBuf buf: certs) {
- buf.release();
- }
- }
-
- // Set up trust manager factory to use our key store.
- if (trustManagerFactory == null) {
- trustManagerFactory = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
- }
- trustManagerFactory.init(ks);
-
- return trustManagerFactory;
- }
}
diff --git a/handler/src/main/java/io/netty/handler/ssl/OpenSslClientContext.java b/handler/src/main/java/io/netty/handler/ssl/OpenSslClientContext.java
index d88e289b3e4..85f15d103b3 100644
--- a/handler/src/main/java/io/netty/handler/ssl/OpenSslClientContext.java
+++ b/handler/src/main/java/io/netty/handler/ssl/OpenSslClientContext.java
@@ -20,6 +20,7 @@
import org.apache.tomcat.jni.SSL;
import org.apache.tomcat.jni.SSLContext;
+import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLException;
import javax.net.ssl.TrustManager;
import javax.net.ssl.TrustManagerFactory;
@@ -39,13 +40,12 @@
*/
public final class OpenSslClientContext extends OpenSslContext {
private final OpenSslSessionContext sessionContext;
- private final OpenSslEngineMap engineMap;
/**
* Creates a new instance.
*/
public OpenSslClientContext() throws SSLException {
- this(null, null, null, IdentityCipherSuiteFilter.INSTANCE, null, 0, 0);
+ this(null, null, null, null, null, null, null, IdentityCipherSuiteFilter.INSTANCE, null, 0, 0);
}
/**
@@ -79,7 +79,8 @@ public OpenSslClientContext(TrustManagerFactory trustManagerFactory) throws SSLE
* {@code null} to use the default.
*/
public OpenSslClientContext(File certChainFile, TrustManagerFactory trustManagerFactory) throws SSLException {
- this(certChainFile, trustManagerFactory, null, IdentityCipherSuiteFilter.INSTANCE, null, 0, 0);
+ this(certChainFile, trustManagerFactory, null, null, null, null, null,
+ IdentityCipherSuiteFilter.INSTANCE, null, 0, 0);
}
/**
@@ -104,11 +105,14 @@ public OpenSslClientContext(File certChainFile, TrustManagerFactory trustManager
public OpenSslClientContext(File certChainFile, TrustManagerFactory trustManagerFactory, Iterable<String> ciphers,
ApplicationProtocolConfig apn, long sessionCacheSize, long sessionTimeout)
throws SSLException {
- this(certChainFile, trustManagerFactory, ciphers, IdentityCipherSuiteFilter.INSTANCE,
+ this(certChainFile, trustManagerFactory, null, null, null, null, ciphers, IdentityCipherSuiteFilter.INSTANCE,
apn, sessionCacheSize, sessionTimeout);
}
/**
+ * @deprecated use {@link #OpenSslClientContext(File, TrustManagerFactory, File, File, String,
+ * KeyManagerFactory, Iterable, CipherSuiteFilter, ApplicationProtocolConfig,long, long)}
+ *
* Creates a new instance.
*
* @param certChainFile an X.509 certificate chain file in PEM format
@@ -124,29 +128,98 @@ public OpenSslClientContext(File certChainFile, TrustManagerFactory trustManager
* @param sessionTimeout the timeout for the cached SSL session objects, in seconds.
* {@code 0} to use the default value.
*/
+ @Deprecated
public OpenSslClientContext(File certChainFile, TrustManagerFactory trustManagerFactory, Iterable<String> ciphers,
+ CipherSuiteFilter cipherFilter, ApplicationProtocolConfig apn,
+ long sessionCacheSize, long sessionTimeout) throws SSLException {
+ this(certChainFile, trustManagerFactory, null, null, null, null,
+ ciphers, cipherFilter, apn, sessionCacheSize, sessionTimeout);
+ }
+
+ /**
+ * Creates a new instance.
+ * @param trustCertChainFile an X.509 certificate chain file in PEM format.
+ * {@code null} to use the system default
+ * @param trustManagerFactory the {@link TrustManagerFactory} that provides the {@link TrustManager}s
+ * that verifies the certificates sent from servers.
+ * {@code null} to use the default or the results of parsing {@code trustCertChainFile}
+ * @param keyCertChainFile an X.509 certificate chain file in PEM format.
+ * This provides the public key for mutual authentication.
+ * {@code null} to use the system default
+ * @param keyFile a PKCS#8 private key file in PEM format.
+ * This provides the private key for mutual authentication.
+ * {@code null} for no mutual authentication.
+ * @param keyPassword the password of the {@code keyFile}.
+ * {@code null} if it's not password-protected.
+ * Ignored if {@code keyFile} is {@code null}.
+ * @param keyManagerFactory the {@link KeyManagerFactory} that provides the {@link javax.net.ssl.KeyManager}s
+ * that is used to encrypt data being sent to servers.
+ * {@code null} to use the default or the results of parsing
+ * {@code keyCertChainFile} and {@code keyFile}.
+ * @param ciphers the cipher suites to enable, in the order of preference.
+ * {@code null} to use the default cipher suites.
+ * @param cipherFilter a filter to apply over the supplied list of ciphers
+ * @param apn Application Protocol Negotiator object.
+ * @param sessionCacheSize the size of the cache used for storing SSL session objects.
+ * {@code 0} to use the default value.
+ * @param sessionTimeout the timeout for the cached SSL session objects, in seconds.
+ * {@code 0} to use the default value.
+ */
+ public OpenSslClientContext(File trustCertChainFile, TrustManagerFactory trustManagerFactory,
+ File keyCertChainFile, File keyFile, String keyPassword,
+ KeyManagerFactory keyManagerFactory, Iterable<String> ciphers,
CipherSuiteFilter cipherFilter, ApplicationProtocolConfig apn,
long sessionCacheSize, long sessionTimeout)
throws SSLException {
super(ciphers, cipherFilter, apn, sessionCacheSize, sessionTimeout, SSL.SSL_MODE_CLIENT);
boolean success = false;
try {
- if (certChainFile != null && !certChainFile.isFile()) {
- throw new IllegalArgumentException("certChainFile is not a file: " + certChainFile);
+ if (trustCertChainFile != null && !trustCertChainFile.isFile()) {
+ throw new IllegalArgumentException("trustCertChainFile is not a file: " + trustCertChainFile);
+ }
+
+ if (keyCertChainFile != null && !keyCertChainFile.isFile()) {
+ throw new IllegalArgumentException("keyCertChainFile is not a file: " + keyCertChainFile);
}
+ if (keyFile != null && !keyFile.isFile()) {
+ throw new IllegalArgumentException("keyFile is not a file: " + keyFile);
+ }
+ if (keyFile == null && keyCertChainFile != null || keyFile != null && keyCertChainFile == null) {
+ throw new IllegalArgumentException(
+ "Either both keyCertChainFile and keyFile needs to be null or none of them");
+ }
synchronized (OpenSslContext.class) {
- if (certChainFile != null) {
+ if (trustCertChainFile != null) {
/* Load the certificate chain. We must skip the first cert when server mode */
- if (!SSLContext.setCertificateChainFile(ctx, certChainFile.getPath(), true)) {
+ if (!SSLContext.setCertificateChainFile(ctx, trustCertChainFile.getPath(), true)) {
long error = SSL.getLastErrorNumber();
if (OpenSsl.isError(error)) {
throw new SSLException(
"failed to set certificate chain: "
- + certChainFile + " (" + SSL.getErrorString(error) + ')');
+ + trustCertChainFile + " (" + SSL.getErrorString(error) + ')');
+ }
+ }
+ }
+ if (keyCertChainFile != null && keyFile != null) {
+ /* Load the certificate file and private key. */
+ try {
+ if (!SSLContext.setCertificate(
+ ctx, keyCertChainFile.getPath(), keyFile.getPath(), keyPassword, SSL.SSL_AIDX_RSA)) {
+ long error = SSL.getLastErrorNumber();
+ if (OpenSsl.isError(error)) {
+ throw new SSLException("failed to set certificate: " +
+ keyCertChainFile + " and " + keyFile +
+ " (" + SSL.getErrorString(error) + ')');
+ }
}
+ } catch (SSLException e) {
+ throw e;
+ } catch (Exception e) {
+ throw new SSLException("failed to set certificate: " + keyCertChainFile + " and " + keyFile, e);
}
}
+
SSLContext.setVerify(ctx, SSL.SSL_VERIFY_NONE, VERIFY_DEPTH);
try {
@@ -155,25 +228,24 @@ public OpenSslClientContext(File certChainFile, TrustManagerFactory trustManager
trustManagerFactory = TrustManagerFactory.getInstance(
TrustManagerFactory.getDefaultAlgorithm());
}
- initTrustManagerFactory(certChainFile, trustManagerFactory);
+ initTrustManagerFactory(trustCertChainFile, trustManagerFactory);
final X509TrustManager manager = chooseTrustManager(trustManagerFactory.getTrustManagers());
- engineMap = newEngineMap(manager);
-
// Use this to prevent an error when running on java < 7
if (useExtendedTrustManager(manager)) {
final X509ExtendedTrustManager extendedManager = (X509ExtendedTrustManager) manager;
SSLContext.setCertVerifyCallback(ctx, new AbstractCertificateVerifier() {
@Override
- void verify(long ssl, X509Certificate[] peerCerts, String auth) throws Exception {
- OpenSslEngine engine = engineMap.remove(ssl);
+ void verify(OpenSslEngine engine, X509Certificate[] peerCerts, String auth)
+ throws Exception {
extendedManager.checkServerTrusted(peerCerts, auth, engine);
}
});
} else {
SSLContext.setCertVerifyCallback(ctx, new AbstractCertificateVerifier() {
@Override
- void verify(long ssl, X509Certificate[] peerCerts, String auth) throws Exception {
+ void verify(OpenSslEngine engine, X509Certificate[] peerCerts, String auth)
+ throws Exception {
manager.checkServerTrusted(peerCerts, auth);
}
});
@@ -218,11 +290,6 @@ public OpenSslSessionContext sessionContext() {
return sessionContext;
}
- @Override
- OpenSslEngineMap engineMap() {
- return engineMap;
- }
-
// No cache is currently supported for client side mode.
private static final class OpenSslClientSessionContext extends OpenSslSessionContext {
private OpenSslClientSessionContext(long context) {
diff --git a/handler/src/main/java/io/netty/handler/ssl/OpenSslContext.java b/handler/src/main/java/io/netty/handler/ssl/OpenSslContext.java
index 06f9d510ccf..d642e223867 100644
--- a/handler/src/main/java/io/netty/handler/ssl/OpenSslContext.java
+++ b/handler/src/main/java/io/netty/handler/ssl/OpenSslContext.java
@@ -26,6 +26,7 @@
import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLException;
+import javax.net.ssl.SSLHandshakeException;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509ExtendedTrustManager;
import javax.net.ssl.X509TrustManager;
@@ -56,6 +57,8 @@ public abstract class OpenSslContext extends SslContext {
private final List<String> unmodifiableCiphers;
private final long sessionCacheSize;
private final long sessionTimeout;
+ private final OpenSslEngineMap engineMap = new DefaultOpenSslEngineMap();
+
private final OpenSslApplicationProtocolNegotiator apn;
/** The OpenSSL SSL_CTX object */
protected final long ctx;
@@ -277,14 +280,11 @@ public final SSLEngine newEngine(ByteBufAllocator alloc, String peerHost, int pe
*/
@Override
public final SSLEngine newEngine(ByteBufAllocator alloc) {
- OpenSslEngineMap engineMap = engineMap();
final OpenSslEngine engine = new OpenSslEngine(ctx, alloc, isClient(), sessionContext(), apn, engineMap);
engineMap.add(engine);
return engine;
}
- abstract OpenSslEngineMap engineMap();
-
/**
* Returns the {@code SSL_CTX} object of this context.
*/
@@ -392,31 +392,28 @@ static OpenSslApplicationProtocolNegotiator toNegotiator(ApplicationProtocolConf
}
}
- static OpenSslEngineMap newEngineMap(X509TrustManager trustManager) {
- if (useExtendedTrustManager(trustManager)) {
- return new DefaultOpenSslEngineMap();
- }
- return OpenSslEngineMap.EMPTY;
- }
-
static boolean useExtendedTrustManager(X509TrustManager trustManager) {
return PlatformDependent.javaVersion() >= 7 && trustManager instanceof X509ExtendedTrustManager;
}
- abstract static class AbstractCertificateVerifier implements CertificateVerifier {
+ abstract class AbstractCertificateVerifier implements CertificateVerifier {
@Override
public final boolean verify(long ssl, byte[][] chain, String auth) {
X509Certificate[] peerCerts = certificates(chain);
+ final OpenSslEngine engine = engineMap.remove(ssl);
try {
- verify(ssl, peerCerts, auth);
+ verify(engine, peerCerts, auth);
return true;
- } catch (Exception e) {
- logger.debug("verification of certificate failed", e);
+ } catch (Throwable cause) {
+ logger.debug("verification of certificate failed", cause);
+ SSLHandshakeException e = new SSLHandshakeException("General OpenSslEngine problem");
+ e.initCause(cause);
+ engine.handshakeException = e;
}
return false;
}
- abstract void verify(long ssl, X509Certificate[] peerCerts, String auth) throws Exception;
+ abstract void verify(OpenSslEngine engine, X509Certificate[] peerCerts, String auth) throws Exception;
}
private static final class DefaultOpenSslEngineMap implements OpenSslEngineMap {
diff --git a/handler/src/main/java/io/netty/handler/ssl/OpenSslEngine.java b/handler/src/main/java/io/netty/handler/ssl/OpenSslEngine.java
index ded15e93862..94dc55a9a16 100644
--- a/handler/src/main/java/io/netty/handler/ssl/OpenSslEngine.java
+++ b/handler/src/main/java/io/netty/handler/ssl/OpenSslEngine.java
@@ -29,6 +29,7 @@
import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLEngineResult;
import javax.net.ssl.SSLException;
+import javax.net.ssl.SSLHandshakeException;
import javax.net.ssl.SSLPeerUnverifiedException;
import javax.net.ssl.SSLSession;
import javax.net.ssl.SSLSessionBindingEvent;
@@ -160,6 +161,10 @@ enum ClientAuthMode {
private final OpenSslApplicationProtocolNegotiator apn;
private final SSLSession session = new OpenSslSession();
+ // This is package-private as we set it from OpenSslContext if an exception is thrown during
+ // the verification step.
+ SSLHandshakeException handshakeException;
+
/**
* Creates a new instance
*
@@ -489,6 +494,22 @@ public synchronized SSLEngineResult wrap(
return new SSLEngineResult(getEngineStatus(), handshakeStatus0(), bytesConsumed, bytesProduced);
}
+ private SSLException newSSLException(String msg) {
+ if (!handshakeFinished) {
+ return new SSLHandshakeException(msg);
+ }
+ return new SSLException(msg);
+ }
+
+ private void checkPendingHandshakeException() throws SSLHandshakeException {
+ if (handshakeException != null) {
+ SSLHandshakeException exception = handshakeException;
+ handshakeException = null;
+ shutdown();
+ throw exception;
+ }
+ }
+
public synchronized SSLEngineResult unwrap(
final ByteBuffer[] srcs, int srcsOffset, final int srcsLength,
final ByteBuffer[] dsts, final int dstsOffset, final int dstsLength) throws SSLException {
@@ -608,7 +629,9 @@ public synchronized SSLEngineResult unwrap(
// There was an internal error -- shutdown
shutdown();
- throw new SSLException(err);
+ throw newSSLException(err);
+ } else {
+ checkPendingHandshakeException();
}
}
} else {
@@ -954,8 +977,9 @@ private void handshake() throws SSLException {
// There was an internal error -- shutdown
shutdown();
- throw new SSLException(err);
+ throw newSSLException(err);
}
+ checkPendingHandshakeException();
} else {
// if SSL_do_handshake returns > 0 it means the handshake was finished. This means we can update
// handshakeFinished directly and so eliminate uncessary calls to SSL.isInInit(...)
@@ -1037,6 +1061,7 @@ private SSLEngineResult.HandshakeStatus handshakeStatus0() throws SSLException {
if (status == FINISHED) {
handshakeFinished();
}
+ checkPendingHandshakeException();
return status;
}
diff --git a/handler/src/main/java/io/netty/handler/ssl/OpenSslServerContext.java b/handler/src/main/java/io/netty/handler/ssl/OpenSslServerContext.java
index 207c0f0355b..83ee505eaa9 100644
--- a/handler/src/main/java/io/netty/handler/ssl/OpenSslServerContext.java
+++ b/handler/src/main/java/io/netty/handler/ssl/OpenSslServerContext.java
@@ -19,7 +19,10 @@
import org.apache.tomcat.jni.SSL;
import org.apache.tomcat.jni.SSLContext;
+import javax.net.ssl.KeyManager;
+import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLException;
+import javax.net.ssl.TrustManager;
import javax.net.ssl.TrustManagerFactory;
import javax.net.ssl.X509ExtendedTrustManager;
import javax.net.ssl.X509TrustManager;
@@ -34,7 +37,6 @@
*/
public final class OpenSslServerContext extends OpenSslContext {
private final OpenSslServerSessionContext sessionContext;
- private final OpenSslEngineMap engineMap;
/**
* Creates a new instance.
@@ -55,8 +57,8 @@ public OpenSslServerContext(File certChainFile, File keyFile) throws SSLExceptio
* {@code null} if it's not password-protected.
*/
public OpenSslServerContext(File certChainFile, File keyFile, String keyPassword) throws SSLException {
- this(certChainFile, keyFile, keyPassword, null, null, IdentityCipherSuiteFilter.INSTANCE,
- NONE_PROTOCOL_NEGOTIATOR, 0, 0);
+ this(certChainFile, keyFile, keyPassword, null, IdentityCipherSuiteFilter.INSTANCE,
+ ApplicationProtocolConfig.DISABLED, 0, 0);
}
/**
@@ -82,8 +84,8 @@ public OpenSslServerContext(
File certChainFile, File keyFile, String keyPassword,
Iterable<String> ciphers, ApplicationProtocolConfig apn,
long sessionCacheSize, long sessionTimeout) throws SSLException {
- this(certChainFile, keyFile, keyPassword, null, ciphers,
- toNegotiator(apn), sessionCacheSize, sessionTimeout);
+ this(certChainFile, keyFile, keyPassword, ciphers, IdentityCipherSuiteFilter.INSTANCE,
+ apn, sessionCacheSize, sessionTimeout);
}
/**
@@ -128,9 +130,8 @@ public OpenSslServerContext(
* {@code 0} to use the default value.
* @param sessionTimeout the timeout for the cached SSL session objects, in seconds.
* {@code 0} to use the default value.
- * @deprecated use {@link #OpenSslServerContext(
- * File, File, String, TrustManagerFactory, Iterable,
- * CipherSuiteFilter, ApplicationProtocolConfig, long, long)}
+ * @deprecated use {@link #OpenSslServerContext(File, TrustManagerFactory, File, File, String, KeyManagerFactory,
+ * Iterable, CipherSuiteFilter, ApplicationProtocolConfig, long, long)}
*/
@Deprecated
public OpenSslServerContext(
@@ -155,17 +156,16 @@ public OpenSslServerContext(
* {@code 0} to use the default value.
* @param sessionTimeout the timeout for the cached SSL session objects, in seconds.
* {@code 0} to use the default value.
- * @deprecated use {@link #OpenSslServerContext(
- * File, File, String, TrustManagerFactory, Iterable,
- * CipherSuiteFilter, OpenSslApplicationProtocolNegotiator, long, long)}
+ * @deprecated use {@link #OpenSslServerContext(File, TrustManagerFactory, File, File, String, KeyManagerFactory,
+ * Iterable, CipherSuiteFilter, ApplicationProtocolConfig, long, long)}
*/
@Deprecated
public OpenSslServerContext(
File certChainFile, File keyFile, String keyPassword, TrustManagerFactory trustManagerFactory,
Iterable<String> ciphers, OpenSslApplicationProtocolNegotiator apn,
long sessionCacheSize, long sessionTimeout) throws SSLException {
- this(certChainFile, keyFile, keyPassword, trustManagerFactory, ciphers,
- IdentityCipherSuiteFilter.INSTANCE, apn, sessionCacheSize, sessionTimeout);
+ this(null, trustManagerFactory, certChainFile, keyFile, keyPassword, null,
+ ciphers, null, apn, sessionCacheSize, sessionTimeout);
}
/**
@@ -188,32 +188,69 @@ public OpenSslServerContext(
File certChainFile, File keyFile, String keyPassword,
Iterable<String> ciphers, CipherSuiteFilter cipherFilter, ApplicationProtocolConfig apn,
long sessionCacheSize, long sessionTimeout) throws SSLException {
- this(certChainFile, keyFile, keyPassword, null, ciphers, cipherFilter,
- toNegotiator(apn), sessionCacheSize, sessionTimeout);
+ this(null, null, certChainFile, keyFile, keyPassword, null,
+ ciphers, cipherFilter, apn, sessionCacheSize, sessionTimeout);
}
/**
* Creates a new instance.
*
- * @param certChainFile an X.509 certificate chain file in PEM format
+ * @param trustCertChainFile an X.509 certificate chain file in PEM format.
+ * This provides the certificate chains used for mutual authentication.
+ * {@code null} to use the system default
+ * @param trustManagerFactory the {@link TrustManagerFactory} that provides the {@link TrustManager}s
+ * that verifies the certificates sent from clients.
+ * {@code null} to use the default or the results of parsing {@code trustCertChainFile}.
+ * @param keyCertChainFile an X.509 certificate chain file in PEM format
* @param keyFile a PKCS#8 private key file in PEM format
* @param keyPassword the password of the {@code keyFile}.
* {@code null} if it's not password-protected.
+ * @param keyManagerFactory the {@link KeyManagerFactory} that provides the {@link KeyManager}s
+ * that is used to encrypt data being sent to clients.
+ * {@code null} to use the default or the results of parsing
+ * {@code keyCertChainFile} and {@code keyFile}.
* @param ciphers the cipher suites to enable, in the order of preference.
* {@code null} to use the default cipher suites.
* @param cipherFilter a filter to apply over the supplied list of ciphers
- * @param config Application protocol config.
+ * Only required if {@code provider} is {@link SslProvider#JDK}
+ * @param config Provides a means to configure parameters related to application protocol negotiation.
* @param sessionCacheSize the size of the cache used for storing SSL session objects.
* {@code 0} to use the default value.
* @param sessionTimeout the timeout for the cached SSL session objects, in seconds.
* {@code 0} to use the default value.
*/
public OpenSslServerContext(
- File certChainFile, File keyFile, String keyPassword, TrustManagerFactory trustManagerFactory,
+ File trustCertChainFile, TrustManagerFactory trustManagerFactory,
+ File keyCertChainFile, File keyFile, String keyPassword, KeyManagerFactory keyManagerFactory,
Iterable<String> ciphers, CipherSuiteFilter cipherFilter, ApplicationProtocolConfig config,
long sessionCacheSize, long sessionTimeout) throws SSLException {
- this(certChainFile, keyFile, keyPassword, trustManagerFactory, ciphers, cipherFilter,
- toNegotiator(config), sessionCacheSize, sessionTimeout);
+ this(trustCertChainFile, trustManagerFactory, keyCertChainFile, keyFile, keyPassword, keyManagerFactory,
+ ciphers, cipherFilter, toNegotiator(config), sessionCacheSize, sessionTimeout);
+ }
+
+ /**
+ * Creates a new instance.
+ *
+ * @param certChainFile an X.509 certificate chain file in PEM format
+ * @param keyFile a PKCS#8 private key file in PEM format
+ * @param keyPassword the password of the {@code keyFile}.
+ * {@code null} if it's not password-protected.
+ * @param ciphers the cipher suites to enable, in the order of preference.
+ * {@code null} to use the default cipher suites.
+ * @param cipherFilter a filter to apply over the supplied list of ciphers
+ * @param config Application protocol config.
+ * @param sessionCacheSize the size of the cache used for storing SSL session objects.
+ * {@code 0} to use the default value.
+ * @param sessionTimeout the timeout for the cached SSL session objects, in seconds.
+ * {@code 0} to use the default value.
+ */
+ @Deprecated
+ public OpenSslServerContext(File certChainFile, File keyFile, String keyPassword,
+ TrustManagerFactory trustManagerFactory, Iterable<String> ciphers,
+ CipherSuiteFilter cipherFilter, ApplicationProtocolConfig config,
+ long sessionCacheSize, long sessionTimeout) throws SSLException {
+ this(null, trustManagerFactory, certChainFile, keyFile, keyPassword, null, ciphers, cipherFilter,
+ toNegotiator(config), sessionCacheSize, sessionTimeout);
}
/**
@@ -231,21 +268,61 @@ public OpenSslServerContext(
* {@code 0} to use the default value.
* @param sessionTimeout the timeout for the cached SSL session objects, in seconds.
* {@code 0} to use the default value.
+ * @deprecated use {@link #OpenSslServerContext(File, TrustManagerFactory, File, File, String, KeyManagerFactory,
+ * Iterable, CipherSuiteFilter, OpenSslApplicationProtocolNegotiator, long, long)}
*/
+ @Deprecated
public OpenSslServerContext(
File certChainFile, File keyFile, String keyPassword, TrustManagerFactory trustManagerFactory,
Iterable<String> ciphers, CipherSuiteFilter cipherFilter, OpenSslApplicationProtocolNegotiator apn,
long sessionCacheSize, long sessionTimeout) throws SSLException {
+ this(null, trustManagerFactory, certChainFile, keyFile, keyPassword, null, ciphers, cipherFilter,
+ apn, sessionCacheSize, sessionTimeout);
+ }
+
+ /**
+ * Creates a new instance.
+ *
+ *
+ * @param trustCertChainFile an X.509 certificate chain file in PEM format.
+ * This provides the certificate chains used for mutual authentication.
+ * {@code null} to use the system default
+ * @param trustManagerFactory the {@link TrustManagerFactory} that provides the {@link TrustManager}s
+ * that verifies the certificates sent from clients.
+ * {@code null} to use the default or the results of parsing {@code trustCertChainFile}.
+ * @param keyCertChainFile an X.509 certificate chain file in PEM format
+ * @param keyFile a PKCS#8 private key file in PEM format
+ * @param keyPassword the password of the {@code keyFile}.
+ * {@code null} if it's not password-protected.
+ * @param keyManagerFactory the {@link KeyManagerFactory} that provides the {@link KeyManager}s
+ * that is used to encrypt data being sent to clients.
+ * {@code null} to use the default or the results of parsing
+ * {@code keyCertChainFile} and {@code keyFile}.
+ * @param ciphers the cipher suites to enable, in the order of preference.
+ * {@code null} to use the default cipher suites.
+ * @param cipherFilter a filter to apply over the supplied list of ciphers
+ * Only required if {@code provider} is {@link SslProvider#JDK}
+ * @param apn Application Protocol Negotiator object
+ * @param sessionCacheSize the size of the cache used for storing SSL session objects.
+ * {@code 0} to use the default value.
+ * @param sessionTimeout the timeout for the cached SSL session objects, in seconds.
+ * {@code 0} to use the default value.
+ */
+ public OpenSslServerContext(
+ File trustCertChainFile, TrustManagerFactory trustManagerFactory,
+ File keyCertChainFile, File keyFile, String keyPassword, KeyManagerFactory keyManagerFactory,
+ Iterable<String> ciphers, CipherSuiteFilter cipherFilter, OpenSslApplicationProtocolNegotiator apn,
+ long sessionCacheSize, long sessionTimeout) throws SSLException {
super(ciphers, cipherFilter, apn, sessionCacheSize, sessionTimeout, SSL.SSL_MODE_SERVER);
OpenSsl.ensureAvailability();
- checkNotNull(certChainFile, "certChainFile");
- if (!certChainFile.isFile()) {
- throw new IllegalArgumentException("certChainFile is not a file: " + certChainFile);
+ checkNotNull(keyCertChainFile, "keyCertChainFile");
+ if (!keyCertChainFile.isFile()) {
+ throw new IllegalArgumentException("keyCertChainFile is not a file: " + keyCertChainFile);
}
checkNotNull(keyFile, "keyFile");
if (!keyFile.isFile()) {
- throw new IllegalArgumentException("keyPath is not a file: " + keyFile);
+ throw new IllegalArgumentException("keyFile is not a file: " + keyFile);
}
if (keyPassword == null) {
keyPassword = "";
@@ -259,59 +336,64 @@ public OpenSslServerContext(
SSLContext.setVerify(ctx, SSL.SSL_CVERIFY_NONE, VERIFY_DEPTH);
/* Load the certificate chain. We must skip the first cert when server mode */
- if (!SSLContext.setCertificateChainFile(ctx, certChainFile.getPath(), true)) {
+ if (!SSLContext.setCertificateChainFile(ctx, keyCertChainFile.getPath(), true)) {
long error = SSL.getLastErrorNumber();
if (OpenSsl.isError(error)) {
String err = SSL.getErrorString(error);
throw new SSLException(
- "failed to set certificate chain: " + certChainFile + " (" + err + ')');
+ "failed to set certificate chain: " + keyCertChainFile + " (" + err + ')');
}
}
/* Load the certificate file and private key. */
try {
if (!SSLContext.setCertificate(
- ctx, certChainFile.getPath(), keyFile.getPath(), keyPassword, SSL.SSL_AIDX_RSA)) {
+ ctx, keyCertChainFile.getPath(), keyFile.getPath(), keyPassword, SSL.SSL_AIDX_RSA)) {
long error = SSL.getLastErrorNumber();
if (OpenSsl.isError(error)) {
String err = SSL.getErrorString(error);
throw new SSLException("failed to set certificate: " +
- certChainFile + " and " + keyFile + " (" + err + ')');
+ keyCertChainFile + " and " + keyFile + " (" + err + ')');
}
}
} catch (SSLException e) {
throw e;
} catch (Exception e) {
- throw new SSLException("failed to set certificate: " + certChainFile + " and " + keyFile, e);
+ throw new SSLException("failed to set certificate: " + keyCertChainFile + " and " + keyFile, e);
}
try {
- char[] keyPasswordChars = keyPassword == null ? EmptyArrays.EMPTY_CHARS : keyPassword.toCharArray();
-
- KeyStore ks = buildKeyStore(certChainFile, keyFile, keyPasswordChars);
if (trustManagerFactory == null) {
// Mimic the way SSLContext.getInstance(KeyManager[], null, null) works
trustManagerFactory = TrustManagerFactory.getInstance(
TrustManagerFactory.getDefaultAlgorithm());
}
- trustManagerFactory.init(ks);
+ if (trustCertChainFile != null) {
+ trustManagerFactory = buildTrustManagerFactory(trustCertChainFile, trustManagerFactory);
+ } else {
+ char[] keyPasswordChars =
+ keyPassword == null ? EmptyArrays.EMPTY_CHARS : keyPassword.toCharArray();
+
+ KeyStore ks = buildKeyStore(keyCertChainFile, keyFile, keyPasswordChars);
+ trustManagerFactory.init(ks);
+ }
final X509TrustManager manager = chooseTrustManager(trustManagerFactory.getTrustManagers());
- engineMap = newEngineMap(manager);
// Use this to prevent an error when running on java < 7
if (useExtendedTrustManager(manager)) {
final X509ExtendedTrustManager extendedManager = (X509ExtendedTrustManager) manager;
SSLContext.setCertVerifyCallback(ctx, new AbstractCertificateVerifier() {
@Override
- void verify(long ssl, X509Certificate[] peerCerts, String auth) throws Exception {
- OpenSslEngine engine = engineMap.remove(ssl);
+ void verify(OpenSslEngine engine, X509Certificate[] peerCerts, String auth)
+ throws Exception {
extendedManager.checkClientTrusted(peerCerts, auth, engine);
}
});
} else {
SSLContext.setCertVerifyCallback(ctx, new AbstractCertificateVerifier() {
@Override
- void verify(long ssl, X509Certificate[] peerCerts, String auth) throws Exception {
+ void verify(OpenSslEngine engine, X509Certificate[] peerCerts, String auth)
+ throws Exception {
manager.checkClientTrusted(peerCerts, auth);
}
});
@@ -333,9 +415,4 @@ void verify(long ssl, X509Certificate[] peerCerts, String auth) throws Exception
public OpenSslServerSessionContext sessionContext() {
return sessionContext;
}
-
- @Override
- OpenSslEngineMap engineMap() {
- return engineMap;
- }
}
diff --git a/handler/src/main/java/io/netty/handler/ssl/SslContext.java b/handler/src/main/java/io/netty/handler/ssl/SslContext.java
index bcf428694df..890b36242bf 100644
--- a/handler/src/main/java/io/netty/handler/ssl/SslContext.java
+++ b/handler/src/main/java/io/netty/handler/ssl/SslContext.java
@@ -39,6 +39,7 @@
import javax.net.ssl.SSLSessionContext;
import javax.net.ssl.TrustManager;
import javax.net.ssl.TrustManagerFactory;
+import javax.security.auth.x500.X500Principal;
import java.io.File;
import java.io.IOException;
import java.security.InvalidAlgorithmParameterException;
@@ -52,6 +53,7 @@
import java.security.cert.Certificate;
import java.security.cert.CertificateException;
import java.security.cert.CertificateFactory;
+import java.security.cert.X509Certificate;
import java.security.spec.InvalidKeySpecException;
import java.security.spec.PKCS8EncodedKeySpec;
import java.util.ArrayList;
@@ -399,8 +401,8 @@ static SslContext newServerContextInternal(
keyManagerFactory, ciphers, cipherFilter, apn, sessionCacheSize, sessionTimeout);
case OPENSSL:
return new OpenSslServerContext(
- keyCertChainFile, keyFile, keyPassword, trustManagerFactory,
- ciphers, cipherFilter, apn, sessionCacheSize, sessionTimeout);
+ trustCertChainFile, trustManagerFactory, keyCertChainFile, keyFile, keyPassword,
+ keyManagerFactory, ciphers, cipherFilter, apn, sessionCacheSize, sessionTimeout);
default:
throw new Error(provider.toString());
}
@@ -729,8 +731,8 @@ static SslContext newClientContextInternal(
keyManagerFactory, ciphers, cipherFilter, apn, sessionCacheSize, sessionTimeout);
case OPENSSL:
return new OpenSslClientContext(
- trustCertChainFile, trustManagerFactory, ciphers, cipherFilter, apn,
- sessionCacheSize, sessionTimeout);
+ trustCertChainFile, trustManagerFactory, keyCertChainFile, keyFile, keyPassword,
+ keyManagerFactory, ciphers, cipherFilter, apn, sessionCacheSize, sessionTimeout);
}
// Should never happen!!
throw new Error();
@@ -925,4 +927,39 @@ static KeyStore buildKeyStore(File certChainFile, File keyFile, char[] keyPasswo
ks.setKeyEntry("key", key, keyPasswordChars, certChain.toArray(new Certificate[certChain.size()]));
return ks;
}
+
+ /**
+ * Build a {@link TrustManagerFactory} from a certificate chain file.
+ * @param certChainFile The certificate file to build from.
+ * @param trustManagerFactory The existing {@link TrustManagerFactory} that will be used if not {@code null}.
+ * @return A {@link TrustManagerFactory} which contains the certificates in {@code certChainFile}
+ */
+ protected static TrustManagerFactory buildTrustManagerFactory(File certChainFile,
+ TrustManagerFactory trustManagerFactory)
+ throws NoSuchAlgorithmException, CertificateException, KeyStoreException, IOException {
+ KeyStore ks = KeyStore.getInstance("JKS");
+ ks.load(null, null);
+ CertificateFactory cf = CertificateFactory.getInstance("X.509");
+
+ ByteBuf[] certs = PemReader.readCertificates(certChainFile);
+ try {
+ for (ByteBuf buf: certs) {
+ X509Certificate cert = (X509Certificate) cf.generateCertificate(new ByteBufInputStream(buf));
+ X500Principal principal = cert.getSubjectX500Principal();
+ ks.setCertificateEntry(principal.getName("RFC2253"), cert);
+ }
+ } finally {
+ for (ByteBuf buf: certs) {
+ buf.release();
+ }
+ }
+
+ // Set up trust manager factory to use our key store.
+ if (trustManagerFactory == null) {
+ trustManagerFactory = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
+ }
+ trustManagerFactory.init(ks);
+
+ return trustManagerFactory;
+ }
}
diff --git a/handler/src/test/java/io/netty/handler/ssl/JdkSslEngineTest.java b/handler/src/test/java/io/netty/handler/ssl/JdkSslEngineTest.java
index 4df0fc00a8d..9482f2b4450 100644
--- a/handler/src/test/java/io/netty/handler/ssl/JdkSslEngineTest.java
+++ b/handler/src/test/java/io/netty/handler/ssl/JdkSslEngineTest.java
@@ -15,23 +15,17 @@
*/
package io.netty.handler.ssl;
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertNull;
import static org.junit.Assert.assertTrue;
import static org.junit.Assume.assumeNoException;
-import static org.mockito.Mockito.verify;
import io.netty.bootstrap.Bootstrap;
import io.netty.bootstrap.ServerBootstrap;
-import io.netty.buffer.ByteBuf;
-import io.netty.buffer.Unpooled;
import io.netty.channel.Channel;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelHandlerAdapter;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
-import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;
@@ -40,94 +34,24 @@
import io.netty.handler.ssl.util.InsecureTrustManagerFactory;
import io.netty.handler.ssl.util.SelfSignedCertificate;
import io.netty.util.NetUtil;
-import io.netty.util.concurrent.Future;
-import java.io.File;
import java.net.InetSocketAddress;
import java.security.cert.CertificateException;
import java.util.List;
import java.util.Set;
-import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLException;
import javax.net.ssl.SSLHandshakeException;
-import org.junit.After;
-import org.junit.Before;
import org.junit.Test;
-import org.mockito.ArgumentCaptor;
-import org.mockito.Mock;
-import org.mockito.MockitoAnnotations;
-public class JdkSslEngineTest {
+public class JdkSslEngineTest extends SSLEngineTest {
private static final String PREFERRED_APPLICATION_LEVEL_PROTOCOL = "my-protocol-http2";
private static final String FALLBACK_APPLICATION_LEVEL_PROTOCOL = "my-protocol-http1_1";
private static final String APPLICATION_LEVEL_PROTOCOL_NOT_COMPATIBLE = "my-protocol-FOO";
- @Mock
- private MessageReciever serverReceiver;
- @Mock
- private MessageReciever clientReceiver;
-
- private Throwable serverException;
- private Throwable clientException;
- private SslContext serverSslCtx;
- private SslContext clientSslCtx;
- private ServerBootstrap sb;
- private Bootstrap cb;
- private Channel serverChannel;
- private Channel serverConnectedChannel;
- private Channel clientChannel;
- private CountDownLatch serverLatch;
- private CountDownLatch clientLatch;
-
- private interface MessageReciever {
- void messageReceived(ByteBuf msg);
- }
-
- private final class MessageDelegatorChannelHandler extends SimpleChannelInboundHandler<ByteBuf> {
- private final MessageReciever receiver;
- private final CountDownLatch latch;
-
- public MessageDelegatorChannelHandler(MessageReciever receiver, CountDownLatch latch) {
- super(false);
- this.receiver = receiver;
- this.latch = latch;
- }
-
- @Override
- protected void channelRead0(ChannelHandlerContext ctx, ByteBuf msg) throws Exception {
- receiver.messageReceived(msg);
- latch.countDown();
- }
- }
-
- @Before
- public void setup() {
- MockitoAnnotations.initMocks(this);
- serverLatch = new CountDownLatch(1);
- clientLatch = new CountDownLatch(1);
- }
-
- @After
- public void tearDown() throws InterruptedException {
- if (serverChannel != null) {
- serverChannel.close().sync();
- Future<?> serverGroup = sb.group().shutdownGracefully(0, 0, TimeUnit.MILLISECONDS);
- Future<?> serverChildGroup = sb.childGroup().shutdownGracefully(0, 0, TimeUnit.MILLISECONDS);
- Future<?> clientGroup = cb.group().shutdownGracefully(0, 0, TimeUnit.MILLISECONDS);
- serverGroup.sync();
- serverChildGroup.sync();
- clientGroup.sync();
- }
- clientChannel = null;
- serverChannel = null;
- serverConnectedChannel = null;
- serverException = null;
- }
-
@Test
public void testNpn() throws Exception {
try {
@@ -344,57 +268,6 @@ public String select(List<String> protocols) {
}
}
- @Test
- public void testMutualAuthSameCerts() throws Exception {
- mySetupMutualAuth(new File(getClass().getResource("test_unencrypted.pem").getFile()),
- new File(getClass().getResource("test.crt").getFile()),
- null);
- runTest(null);
- }
-
- @Test
- public void testMutualAuthDiffCerts() throws Exception {
- File serverKeyFile = new File(getClass().getResource("test_encrypted.pem").getFile());
- File serverCrtFile = new File(getClass().getResource("test.crt").getFile());
- String serverKeyPassword = "12345";
- File clientKeyFile = new File(getClass().getResource("test2_encrypted.pem").getFile());
- File clientCrtFile = new File(getClass().getResource("test2.crt").getFile());
- String clientKeyPassword = "12345";
- mySetupMutualAuth(clientCrtFile, serverKeyFile, serverCrtFile, serverKeyPassword,
- serverCrtFile, clientKeyFile, clientCrtFile, clientKeyPassword);
- runTest(null);
- }
-
- @Test
- public void testMutualAuthDiffCertsServerFailure() throws Exception {
- File serverKeyFile = new File(getClass().getResource("test_encrypted.pem").getFile());
- File serverCrtFile = new File(getClass().getResource("test.crt").getFile());
- String serverKeyPassword = "12345";
- File clientKeyFile = new File(getClass().getResource("test2_encrypted.pem").getFile());
- File clientCrtFile = new File(getClass().getResource("test2.crt").getFile());
- String clientKeyPassword = "12345";
- // Client trusts server but server only trusts itself
- mySetupMutualAuth(serverCrtFile, serverKeyFile, serverCrtFile, serverKeyPassword,
- serverCrtFile, clientKeyFile, clientCrtFile, clientKeyPassword);
- assertTrue(serverLatch.await(2, TimeUnit.SECONDS));
- assertTrue(serverException instanceof SSLHandshakeException);
- }
-
- @Test
- public void testMutualAuthDiffCertsClientFailure() throws Exception {
- File serverKeyFile = new File(getClass().getResource("test_unencrypted.pem").getFile());
- File serverCrtFile = new File(getClass().getResource("test.crt").getFile());
- String serverKeyPassword = null;
- File clientKeyFile = new File(getClass().getResource("test2_unencrypted.pem").getFile());
- File clientCrtFile = new File(getClass().getResource("test2.crt").getFile());
- String clientKeyPassword = null;
- // Server trusts client but client only trusts itself
- mySetupMutualAuth(clientCrtFile, serverKeyFile, serverCrtFile, serverKeyPassword,
- clientCrtFile, clientKeyFile, clientCrtFile, clientKeyPassword);
- assertTrue(clientLatch.await(2, TimeUnit.SECONDS));
- assertTrue(clientException instanceof SSLHandshakeException);
- }
-
private void mySetup(JdkApplicationProtocolNegotiator apn) throws InterruptedException, SSLException,
CertificateException {
mySetup(apn, apn);
@@ -465,132 +338,12 @@ public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws E
clientChannel = ccf.channel();
}
- private void mySetupMutualAuth(File keyFile, File crtFile, String keyPassword)
- throws SSLException, CertificateException, InterruptedException {
- mySetupMutualAuth(crtFile, keyFile, crtFile, keyPassword, crtFile, keyFile, crtFile, keyPassword);
- }
-
- private void mySetupMutualAuth(
- File servertTrustCrtFile, File serverKeyFile, File serverCrtFile, String serverKeyPassword,
- File clientTrustCrtFile, File clientKeyFile, File clientCrtFile, String clientKeyPassword)
- throws InterruptedException, SSLException, CertificateException {
- serverSslCtx = new JdkSslServerContext(servertTrustCrtFile, null,
- serverCrtFile, serverKeyFile, serverKeyPassword, null,
- null, IdentityCipherSuiteFilter.INSTANCE, (ApplicationProtocolConfig) null, 0, 0);
- clientSslCtx = new JdkSslClientContext(clientTrustCrtFile, null,
- clientCrtFile, clientKeyFile, clientKeyPassword, null,
- null, IdentityCipherSuiteFilter.INSTANCE, (ApplicationProtocolConfig) null, 0, 0);
-
- serverConnectedChannel = null;
- sb = new ServerBootstrap();
- cb = new Bootstrap();
-
- sb.group(new NioEventLoopGroup(), new NioEventLoopGroup());
- sb.channel(NioServerSocketChannel.class);
- sb.childHandler(new ChannelInitializer<Channel>() {
- @Override
- protected void initChannel(Channel ch) throws Exception {
- ChannelPipeline p = ch.pipeline();
- SSLEngine engine = serverSslCtx.newEngine(ch.alloc());
- engine.setUseClientMode(false);
- engine.setNeedClientAuth(true);
- p.addLast(new SslHandler(engine));
- p.addLast(new MessageDelegatorChannelHandler(serverReceiver, serverLatch));
- p.addLast(new ChannelHandlerAdapter() {
- @Override
- public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
- if (cause.getCause() instanceof SSLHandshakeException) {
- serverException = cause.getCause();
- serverLatch.countDown();
- } else {
- ctx.fireExceptionCaught(cause);
- }
- }
- });
- serverConnectedChannel = ch;
- }
- });
-
- cb.group(new NioEventLoopGroup());
- cb.channel(NioSocketChannel.class);
- cb.handler(new ChannelInitializer<Channel>() {
- @Override
- protected void initChannel(Channel ch) throws Exception {
- ChannelPipeline p = ch.pipeline();
- p.addLast(clientSslCtx.newHandler(ch.alloc()));
- p.addLast(new MessageDelegatorChannelHandler(clientReceiver, clientLatch));
- p.addLast(new ChannelHandlerAdapter() {
- @Override
- public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
- if (cause.getCause() instanceof SSLHandshakeException) {
- clientException = cause.getCause();
- clientLatch.countDown();
- } else {
- ctx.fireExceptionCaught(cause);
- }
- }
- });
- }
- });
-
- serverChannel = sb.bind(new InetSocketAddress(0)).sync().channel();
- int port = ((InetSocketAddress) serverChannel.localAddress()).getPort();
-
- ChannelFuture ccf = cb.connect(new InetSocketAddress(NetUtil.LOCALHOST, port));
- assertTrue(ccf.awaitUninterruptibly().isSuccess());
- clientChannel = ccf.channel();
- }
-
private void runTest() throws Exception {
runTest(PREFERRED_APPLICATION_LEVEL_PROTOCOL);
}
- private void runTest(String expectedApplicationProtocol) throws Exception {
- final ByteBuf clientMessage = Unpooled.copiedBuffer("I am a client".getBytes());
- final ByteBuf serverMessage = Unpooled.copiedBuffer("I am a server".getBytes());
- try {
- writeAndVerifyReceived(clientMessage.retain(), clientChannel, serverLatch, serverReceiver);
- writeAndVerifyReceived(serverMessage.retain(), serverConnectedChannel, clientLatch, clientReceiver);
- if (expectedApplicationProtocol != null) {
- verifyApplicationLevelProtocol(clientChannel, expectedApplicationProtocol);
- verifyApplicationLevelProtocol(serverConnectedChannel, expectedApplicationProtocol);
- }
- } finally {
- clientMessage.release();
- serverMessage.release();
- }
- }
-
- private void verifyApplicationLevelProtocol(Channel channel, String expectedApplicationProtocol) {
- SslHandler handler = channel.pipeline().get(SslHandler.class);
- assertNotNull(handler);
- String[] protocol = handler.engine().getSession().getProtocol().split(":");
- assertNotNull(protocol);
- if (expectedApplicationProtocol != null && !expectedApplicationProtocol.isEmpty()) {
- assertTrue("protocol.length must be greater than 1 but is " + protocol.length, protocol.length > 1);
- assertEquals(expectedApplicationProtocol, protocol[1]);
- } else {
- assertEquals(1, protocol.length);
- }
- }
-
- private static void writeAndVerifyReceived(ByteBuf message, Channel sendChannel, CountDownLatch receiverLatch,
- MessageReciever receiver) throws Exception {
- List<ByteBuf> dataCapture = null;
- try {
- sendChannel.writeAndFlush(message);
- receiverLatch.await(5, TimeUnit.SECONDS);
- message.resetReaderIndex();
- ArgumentCaptor<ByteBuf> captor = ArgumentCaptor.forClass(ByteBuf.class);
- verify(receiver).messageReceived(captor.capture());
- dataCapture = captor.getAllValues();
- assertEquals(message, dataCapture.get(0));
- } finally {
- if (dataCapture != null) {
- for (ByteBuf data : dataCapture) {
- data.release();
- }
- }
- }
+ @Override
+ protected SslProvider sslProvider() {
+ return SslProvider.JDK;
}
}
diff --git a/handler/src/test/java/io/netty/handler/ssl/OpenSslEngineTest.java b/handler/src/test/java/io/netty/handler/ssl/OpenSslEngineTest.java
new file mode 100644
index 00000000000..0d6a76ca598
--- /dev/null
+++ b/handler/src/test/java/io/netty/handler/ssl/OpenSslEngineTest.java
@@ -0,0 +1,23 @@
+/*
+ * Copyright 2015 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License,
+ * version 2.0 (the "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+package io.netty.handler.ssl;
+
+public class OpenSslEngineTest extends SSLEngineTest {
+ @Override
+ protected SslProvider sslProvider() {
+ return SslProvider.OPENSSL;
+ }
+}
diff --git a/handler/src/test/java/io/netty/handler/ssl/SSLEngineTest.java b/handler/src/test/java/io/netty/handler/ssl/SSLEngineTest.java
new file mode 100644
index 00000000000..490f3db0c0d
--- /dev/null
+++ b/handler/src/test/java/io/netty/handler/ssl/SSLEngineTest.java
@@ -0,0 +1,296 @@
+/*
+ * Copyright 2015 The Netty Project
+ *
+ * The Netty Project licenses this file to you under the Apache License,
+ * version 2.0 (the "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at:
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+package io.netty.handler.ssl;
+
+import io.netty.bootstrap.Bootstrap;
+import io.netty.bootstrap.ServerBootstrap;
+import io.netty.buffer.ByteBuf;
+import io.netty.buffer.Unpooled;
+import io.netty.channel.Channel;
+import io.netty.channel.ChannelFuture;
+import io.netty.channel.ChannelHandlerAdapter;
+import io.netty.channel.ChannelHandlerContext;
+import io.netty.channel.ChannelInitializer;
+import io.netty.channel.ChannelPipeline;
+import io.netty.channel.SimpleChannelInboundHandler;
+import io.netty.channel.nio.NioEventLoopGroup;
+import io.netty.channel.socket.nio.NioServerSocketChannel;
+import io.netty.channel.socket.nio.NioSocketChannel;
+import io.netty.util.NetUtil;
+import io.netty.util.concurrent.Future;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+import org.mockito.ArgumentCaptor;
+import org.mockito.Mock;
+import org.mockito.MockitoAnnotations;
+
+import javax.net.ssl.SSLEngine;
+import javax.net.ssl.SSLException;
+import javax.net.ssl.SSLHandshakeException;
+import java.io.File;
+import java.net.InetSocketAddress;
+import java.util.List;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.TimeUnit;
+
+import static org.junit.Assert.*;
+import static org.mockito.Mockito.verify;
+
+public abstract class SSLEngineTest {
+
+ @Mock
+ protected MessageReciever serverReceiver;
+ @Mock
+ protected MessageReciever clientReceiver;
+
+ protected Throwable serverException;
+ protected Throwable clientException;
+ protected SslContext serverSslCtx;
+ protected SslContext clientSslCtx;
+ protected ServerBootstrap sb;
+ protected Bootstrap cb;
+ protected Channel serverChannel;
+ protected Channel serverConnectedChannel;
+ protected Channel clientChannel;
+ protected CountDownLatch serverLatch;
+ protected CountDownLatch clientLatch;
+
+ interface MessageReciever {
+ void messageReceived(ByteBuf msg);
+ }
+
+ protected static final class MessageDelegatorChannelHandler extends SimpleChannelInboundHandler<ByteBuf> {
+ private final MessageReciever receiver;
+ private final CountDownLatch latch;
+
+ public MessageDelegatorChannelHandler(MessageReciever receiver, CountDownLatch latch) {
+ super(false);
+ this.receiver = receiver;
+ this.latch = latch;
+ }
+
+ @Override
+ protected void channelRead0(ChannelHandlerContext ctx, ByteBuf msg) throws Exception {
+ receiver.messageReceived(msg);
+ latch.countDown();
+ }
+ }
+
+ @Before
+ public void setup() {
+ MockitoAnnotations.initMocks(this);
+ serverLatch = new CountDownLatch(1);
+ clientLatch = new CountDownLatch(1);
+ }
+
+ @After
+ public void tearDown() throws InterruptedException {
+ if (serverChannel != null) {
+ serverChannel.close().sync();
+ Future<?> serverGroup = sb.group().shutdownGracefully(0, 0, TimeUnit.MILLISECONDS);
+ Future<?> serverChildGroup = sb.childGroup().shutdownGracefully(0, 0, TimeUnit.MILLISECONDS);
+ Future<?> clientGroup = cb.group().shutdownGracefully(0, 0, TimeUnit.MILLISECONDS);
+ serverGroup.sync();
+ serverChildGroup.sync();
+ clientGroup.sync();
+ }
+ clientChannel = null;
+ serverChannel = null;
+ serverConnectedChannel = null;
+ serverException = null;
+ }
+
+ @Test
+ public void testMutualAuthSameCerts() throws Exception {
+ mySetupMutualAuth(new File(getClass().getResource("test_unencrypted.pem").getFile()),
+ new File(getClass().getResource("test.crt").getFile()),
+ null);
+ runTest(null);
+ }
+
+ @Test
+ public void testMutualAuthDiffCerts() throws Exception {
+ File serverKeyFile = new File(getClass().getResource("test_encrypted.pem").getFile());
+ File serverCrtFile = new File(getClass().getResource("test.crt").getFile());
+ String serverKeyPassword = "12345";
+ File clientKeyFile = new File(getClass().getResource("test2_encrypted.pem").getFile());
+ File clientCrtFile = new File(getClass().getResource("test2.crt").getFile());
+ String clientKeyPassword = "12345";
+ mySetupMutualAuth(clientCrtFile, serverKeyFile, serverCrtFile, serverKeyPassword,
+ serverCrtFile, clientKeyFile, clientCrtFile, clientKeyPassword);
+ runTest(null);
+ }
+
+ @Test
+ public void testMutualAuthDiffCertsServerFailure() throws Exception {
+ File serverKeyFile = new File(getClass().getResource("test_encrypted.pem").getFile());
+ File serverCrtFile = new File(getClass().getResource("test.crt").getFile());
+ String serverKeyPassword = "12345";
+ File clientKeyFile = new File(getClass().getResource("test2_encrypted.pem").getFile());
+ File clientCrtFile = new File(getClass().getResource("test2.crt").getFile());
+ String clientKeyPassword = "12345";
+ // Client trusts server but server only trusts itself
+ mySetupMutualAuth(serverCrtFile, serverKeyFile, serverCrtFile, serverKeyPassword,
+ serverCrtFile, clientKeyFile, clientCrtFile, clientKeyPassword);
+ assertTrue(serverLatch.await(2, TimeUnit.SECONDS));
+ assertTrue(serverException instanceof SSLHandshakeException);
+ }
+
+ @Test
+ public void testMutualAuthDiffCertsClientFailure() throws Exception {
+ File serverKeyFile = new File(getClass().getResource("test_unencrypted.pem").getFile());
+ File serverCrtFile = new File(getClass().getResource("test.crt").getFile());
+ String serverKeyPassword = null;
+ File clientKeyFile = new File(getClass().getResource("test2_unencrypted.pem").getFile());
+ File clientCrtFile = new File(getClass().getResource("test2.crt").getFile());
+ String clientKeyPassword = null;
+ // Server trusts client but client only trusts itself
+ mySetupMutualAuth(clientCrtFile, serverKeyFile, serverCrtFile, serverKeyPassword,
+ clientCrtFile, clientKeyFile, clientCrtFile, clientKeyPassword);
+ assertTrue(clientLatch.await(2, TimeUnit.SECONDS));
+ assertTrue(clientException instanceof SSLHandshakeException);
+ }
+
+ private void mySetupMutualAuth(File keyFile, File crtFile, String keyPassword)
+ throws SSLException, InterruptedException {
+ mySetupMutualAuth(crtFile, keyFile, crtFile, keyPassword, crtFile, keyFile, crtFile, keyPassword);
+ }
+
+ private void mySetupMutualAuth(
+ File servertTrustCrtFile, File serverKeyFile, File serverCrtFile, String serverKeyPassword,
+ File clientTrustCrtFile, File clientKeyFile, File clientCrtFile, String clientKeyPassword)
+ throws InterruptedException, SSLException {
+ serverSslCtx = SslContext.newServerContext(sslProvider(), servertTrustCrtFile, null,
+ serverCrtFile, serverKeyFile, serverKeyPassword, null,
+ null, IdentityCipherSuiteFilter.INSTANCE, null, 0, 0);
+ clientSslCtx = SslContext.newClientContext(sslProvider(), clientTrustCrtFile, null,
+ clientCrtFile, clientKeyFile, clientKeyPassword, null,
+ null, IdentityCipherSuiteFilter.INSTANCE,
+ null, 0, 0);
+
+ serverConnectedChannel = null;
+ sb = new ServerBootstrap();
+ cb = new Bootstrap();
+
+ sb.group(new NioEventLoopGroup(), new NioEventLoopGroup());
+ sb.channel(NioServerSocketChannel.class);
+ sb.childHandler(new ChannelInitializer<Channel>() {
+ @Override
+ protected void initChannel(Channel ch) throws Exception {
+ ChannelPipeline p = ch.pipeline();
+ SSLEngine engine = serverSslCtx.newEngine(ch.alloc());
+ engine.setUseClientMode(false);
+ engine.setNeedClientAuth(true);
+ p.addLast(new SslHandler(engine));
+ p.addLast(new MessageDelegatorChannelHandler(serverReceiver, serverLatch));
+ p.addLast(new ChannelHandlerAdapter() {
+ @Override
+ public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
+ if (cause.getCause() instanceof SSLHandshakeException) {
+ serverException = cause.getCause();
+ serverLatch.countDown();
+ } else {
+ ctx.fireExceptionCaught(cause);
+ }
+ }
+ });
+ serverConnectedChannel = ch;
+ }
+ });
+
+ cb.group(new NioEventLoopGroup());
+ cb.channel(NioSocketChannel.class);
+ cb.handler(new ChannelInitializer<Channel>() {
+ @Override
+ protected void initChannel(Channel ch) throws Exception {
+ ChannelPipeline p = ch.pipeline();
+ p.addLast(clientSslCtx.newHandler(ch.alloc()));
+ p.addLast(new MessageDelegatorChannelHandler(clientReceiver, clientLatch));
+ p.addLast(new ChannelHandlerAdapter() {
+ @Override
+ public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
+ cause.printStackTrace();
+ if (cause.getCause() instanceof SSLHandshakeException) {
+ clientException = cause.getCause();
+ clientLatch.countDown();
+ } else {
+ ctx.fireExceptionCaught(cause);
+ }
+ }
+ });
+ }
+ });
+
+ serverChannel = sb.bind(new InetSocketAddress(0)).sync().channel();
+ int port = ((InetSocketAddress) serverChannel.localAddress()).getPort();
+
+ ChannelFuture ccf = cb.connect(new InetSocketAddress(NetUtil.LOCALHOST, port));
+ assertTrue(ccf.awaitUninterruptibly().isSuccess());
+ clientChannel = ccf.channel();
+ }
+
+ protected void runTest(String expectedApplicationProtocol) throws Exception {
+ final ByteBuf clientMessage = Unpooled.copiedBuffer("I am a client".getBytes());
+ final ByteBuf serverMessage = Unpooled.copiedBuffer("I am a server".getBytes());
+ try {
+ writeAndVerifyReceived(clientMessage.retain(), clientChannel, serverLatch, serverReceiver);
+ writeAndVerifyReceived(serverMessage.retain(), serverConnectedChannel, clientLatch, clientReceiver);
+ if (expectedApplicationProtocol != null) {
+ verifyApplicationLevelProtocol(clientChannel, expectedApplicationProtocol);
+ verifyApplicationLevelProtocol(serverConnectedChannel, expectedApplicationProtocol);
+ }
+ } finally {
+ clientMessage.release();
+ serverMessage.release();
+ }
+ }
+
+ private static void verifyApplicationLevelProtocol(Channel channel, String expectedApplicationProtocol) {
+ SslHandler handler = channel.pipeline().get(SslHandler.class);
+ assertNotNull(handler);
+ String[] protocol = handler.engine().getSession().getProtocol().split(":");
+ assertNotNull(protocol);
+ if (expectedApplicationProtocol != null && !expectedApplicationProtocol.isEmpty()) {
+ assertTrue("protocol.length must be greater than 1 but is " + protocol.length, protocol.length > 1);
+ assertEquals(expectedApplicationProtocol, protocol[1]);
+ } else {
+ assertEquals(1, protocol.length);
+ }
+ }
+
+ private static void writeAndVerifyReceived(ByteBuf message, Channel sendChannel, CountDownLatch receiverLatch,
+ MessageReciever receiver) throws Exception {
+ List<ByteBuf> dataCapture = null;
+ try {
+ sendChannel.writeAndFlush(message);
+ receiverLatch.await(5, TimeUnit.SECONDS);
+ message.resetReaderIndex();
+ ArgumentCaptor<ByteBuf> captor = ArgumentCaptor.forClass(ByteBuf.class);
+ verify(receiver).messageReceived(captor.capture());
+ dataCapture = captor.getAllValues();
+ assertEquals(message, dataCapture.get(0));
+ } finally {
+ if (dataCapture != null) {
+ for (ByteBuf data : dataCapture) {
+ data.release();
+ }
+ }
+ }
+ }
+
+ protected abstract SslProvider sslProvider();
+}
| val | train | 2015-05-06T08:44:56 | 2015-04-29T09:59:02Z | doom369 | val |
netty/netty/3738_3741 | netty/netty | netty/netty/3738 | netty/netty/3741 | [
"timestamp(timedelta=2831.0, similarity=0.9376268990082014)"
] | 2ab9d659c77a4500123a30efce76a1fd7d669d26 | bbdb7395633b60a5ec0492400b34a296e84cbbb2 | [
"We love PR's ;)\n\n> On 06.05.2015 at 03:40, Thomas notifications@github.com wrote:\n> \n> in line 28,it writes '\\* A simple list which is reyclable.' ,i guess here should be recyclable. :p\n> \n> —\n> Reply to this email directly or view it on GitHub.\n",
"thanks @normanmaurer i just pulled it.:)\n"
] | [] | 2015-05-06T05:54:36Z | [] | [typo]it seems to be a typo 'reyclable' in RecyclableArrayList | in line 28,it writes '\* A simple list which is reyclable.' ,i guess here should be recyclable. :p
| [
"common/src/main/java/io/netty/util/internal/RecyclableArrayList.java"
] | [
"common/src/main/java/io/netty/util/internal/RecyclableArrayList.java"
] | [] | diff --git a/common/src/main/java/io/netty/util/internal/RecyclableArrayList.java b/common/src/main/java/io/netty/util/internal/RecyclableArrayList.java
index 9a4214f9a6a..38847bd2790 100644
--- a/common/src/main/java/io/netty/util/internal/RecyclableArrayList.java
+++ b/common/src/main/java/io/netty/util/internal/RecyclableArrayList.java
@@ -25,7 +25,7 @@
import java.util.RandomAccess;
/**
- * A simple list which is reyclable. This implementation does not allow {@code null} elements to be added.
+ * A simple list which is recyclable. This implementation does not allow {@code null} elements to be added.
*/
public final class RecyclableArrayList extends ArrayList<Object> {
| null | val | train | 2015-05-06T06:53:56 | 2015-05-06T01:40:43Z | ThomasLau | val |
netty/netty/3775_3776 | netty/netty | netty/netty/3775 | netty/netty/3776 | [
"timestamp(timedelta=18.0, similarity=0.8593864041163389)"
] | d0c81604b6a3ed78961cda73ddb5ccb253202846 | a14f78a919680517a4c9c1a28cd0c181bfa2933a | [
"@ejona86 can you check ?\n",
"There should be a `SslContextBuilder.forServer(KeyManagerFactory)` but it is missing. Workaround is to provide any `File` (doesn't need to exist) to `forServer()` and then change the configuration.\n\n```\nSslContextBuilder.forServer(new File(\"foo\"), new File(\"foo\"))\n .keyMa... | [] | 2015-05-11T17:38:28Z | [
"defect"
] | SslContextBuilder has conflicting conditions in keyManager(...) | The `keyManager(File, ...)` and `keyManager(KeyManagerFactory)` methods have conflicting conditions for the server SslContextBuilder.
Or SslContextBuilder is missing an argument-less `forServer()` method.
```
SslContextBuilder.forServer(null, null)
.keyManager(myKeyManagerFactory)
.build();
```
```
java.lang.NullPointerException: keyCertChainFile required for servers
at io.netty.util.internal.ObjectUtil.checkNotNull(ObjectUtil.java:31)
at io.netty.handler.ssl.SslContextBuilder.keyManager(SslContextBuilder.java:129)
at io.netty.handler.ssl.SslContextBuilder.keyManager(SslContextBuilder.java:115)
at io.netty.handler.ssl.SslContextBuilder.forServer(SslContextBuilder.java:44)
```
| [
"handler/src/main/java/io/netty/handler/ssl/SslContext.java",
"handler/src/main/java/io/netty/handler/ssl/SslContextBuilder.java"
] | [
"handler/src/main/java/io/netty/handler/ssl/SslContext.java",
"handler/src/main/java/io/netty/handler/ssl/SslContextBuilder.java"
] | [] | diff --git a/handler/src/main/java/io/netty/handler/ssl/SslContext.java b/handler/src/main/java/io/netty/handler/ssl/SslContext.java
index 890b36242bf..0865890f405 100644
--- a/handler/src/main/java/io/netty/handler/ssl/SslContext.java
+++ b/handler/src/main/java/io/netty/handler/ssl/SslContext.java
@@ -76,7 +76,7 @@
* <pre>
* // In your {@link ChannelInitializer}:
* {@link ChannelPipeline} p = channel.pipeline();
- * {@link SslContext} sslCtx = {@link #newBuilderForClient() SslContext.newBuilderForClient()}.build();
+ * {@link SslContext} sslCtx = {@link SslContextBuilder#forClient() SslContextBuilder.forClient()}.build();
* p.addLast("ssl", {@link #newEngine(ByteBufAllocator, String, int) sslCtx.newEngine(channel.alloc(), host, port)});
* ...
* </pre>
diff --git a/handler/src/main/java/io/netty/handler/ssl/SslContextBuilder.java b/handler/src/main/java/io/netty/handler/ssl/SslContextBuilder.java
index fcc840e6ca8..a0d3caf45d0 100644
--- a/handler/src/main/java/io/netty/handler/ssl/SslContextBuilder.java
+++ b/handler/src/main/java/io/netty/handler/ssl/SslContextBuilder.java
@@ -39,6 +39,7 @@ public static SslContextBuilder forClient() {
*
* @param keyCertChainFile an X.509 certificate chain file in PEM format
* @param keyFile a PKCS#8 private key file in PEM format
+ * @see #keyManager(File, File)
*/
public static SslContextBuilder forServer(File keyCertChainFile, File keyFile) {
return new SslContextBuilder(true).keyManager(keyCertChainFile, keyFile);
@@ -51,12 +52,23 @@ public static SslContextBuilder forServer(File keyCertChainFile, File keyFile) {
* @param keyFile a PKCS#8 private key file in PEM format
* @param keyPassword the password of the {@code keyFile}, or {@code null} if it's not
* password-protected
+ * @see #keyManager(File, File, String)
*/
public static SslContextBuilder forServer(
File keyCertChainFile, File keyFile, String keyPassword) {
return new SslContextBuilder(true).keyManager(keyCertChainFile, keyFile, keyPassword);
}
+ /**
+ * Creates a builder for new server-side {@link SslContext}.
+ *
+ * @param keyManagerFactory non-{@code null} factory for server's private key
+ * @see #keyManager(KeyManagerFactory)
+ */
+ public static SslContextBuilder forServer(KeyManagerFactory keyManagerFactory) {
+ return new SslContextBuilder(true).keyManager(keyManagerFactory);
+ }
+
private final boolean forServer;
private SslProvider provider;
private File trustCertChainFile;
| null | train | train | 2015-05-11T06:16:40 | 2015-05-11T15:14:51Z | rkapsi | val |
netty/netty/3765_3793 | netty/netty | netty/netty/3765 | netty/netty/3793 | [
"timestamp(timedelta=16.0, similarity=0.8551911949346422)"
] | 833b92a5aaca287f0f9ebdc74d15c6f0a5baef74 | 5966411b1a81db9576f1d3bbc7cd0b3119726e2f | [
"@bobymicroby - Thanks for reaching out!\n\nWhat flags are you starting chrome with? Are you starting the example with the `-Dssl` flag? What version of netty are you using?\n",
"@Scottmitch it's @bobymicroby ;)\n",
"In order for the example to work with chrome you will have to change Netty's http2 ALPN token... | [] | 2015-05-15T18:18:09Z | [
"defect"
] | HTTP/2 examples doesn't work with Chrome 42.0.2311.135 (64-bit) on OSX 10.10.3 , Java 1.8.0_45-b14 , Netty 5.0.0.Alpha3-SNAPSHOT | The request stays as "pending" forever

The debug output:
May 08, 2015 6:45:27 PM io.netty.handler.logging.LoggingHandler channelRead
INFO: [id: 0x4d9163c8, /0:0:0:0:0:0:0:0:8080] RECEIVED: [id: 0x2e91accb, /127.0.0.1:64231 => /127.0.0.1:8080]
---
But the http2 client that is provided in the examples works flawlessly.
Hello World
---Stream id: 3 received---
Finished HTTP/2 request(s)
| [
"example/src/main/java/io/netty/example/http2/server/HelloWorldHttp1Handler.java",
"example/src/main/java/io/netty/example/http2/server/HelloWorldHttp2Handler.java",
"example/src/main/java/io/netty/example/http2/server/Http2OrHttpHandler.java",
"example/src/main/java/io/netty/example/http2/server/Http2ServerI... | [
"example/src/main/java/io/netty/example/http2/server/HelloWorldHttp1Handler.java",
"example/src/main/java/io/netty/example/http2/server/HelloWorldHttp2Handler.java",
"example/src/main/java/io/netty/example/http2/server/Http2OrHttpHandler.java",
"example/src/main/java/io/netty/example/http2/server/Http2ServerI... | [] | diff --git a/example/src/main/java/io/netty/example/http2/server/HelloWorldHttp1Handler.java b/example/src/main/java/io/netty/example/http2/server/HelloWorldHttp1Handler.java
index 009e8fa2131..aa8f1d29494 100644
--- a/example/src/main/java/io/netty/example/http2/server/HelloWorldHttp1Handler.java
+++ b/example/src/main/java/io/netty/example/http2/server/HelloWorldHttp1Handler.java
@@ -15,7 +15,9 @@
*/
package io.netty.example.http2.server;
+import static io.netty.util.internal.ObjectUtil.checkNotNull;
import io.netty.buffer.ByteBuf;
+import io.netty.buffer.ByteBufUtil;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
@@ -24,7 +26,6 @@
import io.netty.handler.codec.http.HttpHeaderUtil;
import io.netty.handler.codec.http.HttpHeaderValues;
import io.netty.handler.codec.http.HttpRequest;
-
import static io.netty.handler.codec.http.HttpHeaderNames.*;
import static io.netty.handler.codec.http.HttpResponseStatus.*;
import static io.netty.handler.codec.http.HttpVersion.*;
@@ -33,6 +34,11 @@
* HTTP handler that responds with a "Hello World"
*/
public class HelloWorldHttp1Handler extends SimpleChannelInboundHandler<HttpRequest> {
+ private final String establishApproach;
+
+ public HelloWorldHttp1Handler(String establishApproach) {
+ this.establishApproach = checkNotNull(establishApproach, "establishApproach");
+ }
@Override
public void messageReceived(ChannelHandlerContext ctx, HttpRequest req) throws Exception {
@@ -43,6 +49,7 @@ public void messageReceived(ChannelHandlerContext ctx, HttpRequest req) throws E
ByteBuf content = ctx.alloc().buffer();
content.writeBytes(HelloWorldHttp2Handler.RESPONSE_BYTES.duplicate());
+ ByteBufUtil.writeAscii(content, " - via " + req.protocolVersion() + " (" + establishApproach + ")");
FullHttpResponse response = new DefaultFullHttpResponse(HTTP_1_1, OK, content);
response.headers().set(CONTENT_TYPE, "text/plain; charset=UTF-8");
diff --git a/example/src/main/java/io/netty/example/http2/server/HelloWorldHttp2Handler.java b/example/src/main/java/io/netty/example/http2/server/HelloWorldHttp2Handler.java
index 61fff9e4b91..f0ae73fd6e1 100644
--- a/example/src/main/java/io/netty/example/http2/server/HelloWorldHttp2Handler.java
+++ b/example/src/main/java/io/netty/example/http2/server/HelloWorldHttp2Handler.java
@@ -21,6 +21,7 @@
import static io.netty.handler.codec.http.HttpResponseStatus.OK;
import static io.netty.handler.logging.LogLevel.INFO;
import io.netty.buffer.ByteBuf;
+import io.netty.buffer.ByteBufUtil;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.http.HttpServerUpgradeHandler;
import io.netty.handler.codec.http2.DefaultHttp2Connection;
@@ -112,7 +113,10 @@ public void onHeadersRead(ChannelHandlerContext ctx, int streamId,
Http2Headers headers, int streamDependency, short weight,
boolean exclusive, int padding, boolean endStream) throws Http2Exception {
if (endStream) {
- sendResponse(ctx, streamId, RESPONSE_BYTES.duplicate());
+ ByteBuf content = ctx.alloc().buffer();
+ content.writeBytes(HelloWorldHttp2Handler.RESPONSE_BYTES.duplicate());
+ ByteBufUtil.writeAscii(content, " - via HTTP/2");
+ sendResponse(ctx, streamId, content);
}
}
diff --git a/example/src/main/java/io/netty/example/http2/server/Http2OrHttpHandler.java b/example/src/main/java/io/netty/example/http2/server/Http2OrHttpHandler.java
index bb8cae84f86..e07dec24868 100644
--- a/example/src/main/java/io/netty/example/http2/server/Http2OrHttpHandler.java
+++ b/example/src/main/java/io/netty/example/http2/server/Http2OrHttpHandler.java
@@ -48,7 +48,7 @@ protected SelectedProtocol getProtocol(SSLEngine engine) {
@Override
protected ChannelHandler createHttp1RequestHandler() {
- return new HelloWorldHttp1Handler();
+ return new HelloWorldHttp1Handler("ALPN Negotiation");
}
@Override
diff --git a/example/src/main/java/io/netty/example/http2/server/Http2ServerInitializer.java b/example/src/main/java/io/netty/example/http2/server/Http2ServerInitializer.java
index ac36a9819b4..cc5a7f31ada 100644
--- a/example/src/main/java/io/netty/example/http2/server/Http2ServerInitializer.java
+++ b/example/src/main/java/io/netty/example/http2/server/Http2ServerInitializer.java
@@ -19,7 +19,9 @@
import io.netty.channel.ChannelHandlerAdapter;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInitializer;
+import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.channel.socket.SocketChannel;
+import io.netty.handler.codec.http.HttpMessage;
import io.netty.handler.codec.http.HttpServerCodec;
import io.netty.handler.codec.http.HttpServerUpgradeHandler;
import io.netty.handler.codec.http2.Http2ServerUpgradeCodec;
@@ -66,6 +68,16 @@ private static void configureClearText(SocketChannel ch) {
ch.pipeline().addLast(sourceCodec);
ch.pipeline().addLast(upgradeHandler);
+ ch.pipeline().addLast(new SimpleChannelInboundHandler<HttpMessage>() {
+ @Override
+ protected void messageReceived(ChannelHandlerContext ctx, HttpMessage msg) throws Exception {
+ // If this handler is hit then no upgrade has been attempted and the client is just talking HTTP.
+ System.err.println("Directly talking: " + msg.protocolVersion() + " (no upgrade was attempted)");
+ ctx.pipeline().replace(this, "http-hello-world",
+ new HelloWorldHttp1Handler("Direct. No Upgrade Attempted."));
+ ctx.fireChannelRead(msg);
+ }
+ });
ch.pipeline().addLast(new UserEventLogger());
}
| null | train | train | 2015-05-18T10:23:54 | 2015-05-08T15:48:32Z | bobymicroby | val |
netty/netty/3724_3793 | netty/netty | netty/netty/3724 | netty/netty/3793 | [
"timestamp(timedelta=104978.0, similarity=0.8452430653395315)"
] | 833b92a5aaca287f0f9ebdc74d15c6f0a5baef74 | 5966411b1a81db9576f1d3bbc7cd0b3119726e2f | [
"@nmittler @buchgr - I have a pending PR for this.\n",
"Fixed by https://github.com/netty/netty/pull/3733\n"
] | [] | 2015-05-15T18:18:09Z | [
"defect"
] | HTTP/2 headers with END_STREAM writes a RST_STREAM | If a headers frame is written with the END_STREAM flag set, we send a RST_STREAM frame. The remote flow controller should not generate a RST_STREAM in this case.
| [
"example/src/main/java/io/netty/example/http2/server/HelloWorldHttp1Handler.java",
"example/src/main/java/io/netty/example/http2/server/HelloWorldHttp2Handler.java",
"example/src/main/java/io/netty/example/http2/server/Http2OrHttpHandler.java",
"example/src/main/java/io/netty/example/http2/server/Http2ServerI... | [
"example/src/main/java/io/netty/example/http2/server/HelloWorldHttp1Handler.java",
"example/src/main/java/io/netty/example/http2/server/HelloWorldHttp2Handler.java",
"example/src/main/java/io/netty/example/http2/server/Http2OrHttpHandler.java",
"example/src/main/java/io/netty/example/http2/server/Http2ServerI... | [] | diff --git a/example/src/main/java/io/netty/example/http2/server/HelloWorldHttp1Handler.java b/example/src/main/java/io/netty/example/http2/server/HelloWorldHttp1Handler.java
index 009e8fa2131..aa8f1d29494 100644
--- a/example/src/main/java/io/netty/example/http2/server/HelloWorldHttp1Handler.java
+++ b/example/src/main/java/io/netty/example/http2/server/HelloWorldHttp1Handler.java
@@ -15,7 +15,9 @@
*/
package io.netty.example.http2.server;
+import static io.netty.util.internal.ObjectUtil.checkNotNull;
import io.netty.buffer.ByteBuf;
+import io.netty.buffer.ByteBufUtil;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
@@ -24,7 +26,6 @@
import io.netty.handler.codec.http.HttpHeaderUtil;
import io.netty.handler.codec.http.HttpHeaderValues;
import io.netty.handler.codec.http.HttpRequest;
-
import static io.netty.handler.codec.http.HttpHeaderNames.*;
import static io.netty.handler.codec.http.HttpResponseStatus.*;
import static io.netty.handler.codec.http.HttpVersion.*;
@@ -33,6 +34,11 @@
* HTTP handler that responds with a "Hello World"
*/
public class HelloWorldHttp1Handler extends SimpleChannelInboundHandler<HttpRequest> {
+ private final String establishApproach;
+
+ public HelloWorldHttp1Handler(String establishApproach) {
+ this.establishApproach = checkNotNull(establishApproach, "establishApproach");
+ }
@Override
public void messageReceived(ChannelHandlerContext ctx, HttpRequest req) throws Exception {
@@ -43,6 +49,7 @@ public void messageReceived(ChannelHandlerContext ctx, HttpRequest req) throws E
ByteBuf content = ctx.alloc().buffer();
content.writeBytes(HelloWorldHttp2Handler.RESPONSE_BYTES.duplicate());
+ ByteBufUtil.writeAscii(content, " - via " + req.protocolVersion() + " (" + establishApproach + ")");
FullHttpResponse response = new DefaultFullHttpResponse(HTTP_1_1, OK, content);
response.headers().set(CONTENT_TYPE, "text/plain; charset=UTF-8");
diff --git a/example/src/main/java/io/netty/example/http2/server/HelloWorldHttp2Handler.java b/example/src/main/java/io/netty/example/http2/server/HelloWorldHttp2Handler.java
index 61fff9e4b91..f0ae73fd6e1 100644
--- a/example/src/main/java/io/netty/example/http2/server/HelloWorldHttp2Handler.java
+++ b/example/src/main/java/io/netty/example/http2/server/HelloWorldHttp2Handler.java
@@ -21,6 +21,7 @@
import static io.netty.handler.codec.http.HttpResponseStatus.OK;
import static io.netty.handler.logging.LogLevel.INFO;
import io.netty.buffer.ByteBuf;
+import io.netty.buffer.ByteBufUtil;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.http.HttpServerUpgradeHandler;
import io.netty.handler.codec.http2.DefaultHttp2Connection;
@@ -112,7 +113,10 @@ public void onHeadersRead(ChannelHandlerContext ctx, int streamId,
Http2Headers headers, int streamDependency, short weight,
boolean exclusive, int padding, boolean endStream) throws Http2Exception {
if (endStream) {
- sendResponse(ctx, streamId, RESPONSE_BYTES.duplicate());
+ ByteBuf content = ctx.alloc().buffer();
+ content.writeBytes(HelloWorldHttp2Handler.RESPONSE_BYTES.duplicate());
+ ByteBufUtil.writeAscii(content, " - via HTTP/2");
+ sendResponse(ctx, streamId, content);
}
}
diff --git a/example/src/main/java/io/netty/example/http2/server/Http2OrHttpHandler.java b/example/src/main/java/io/netty/example/http2/server/Http2OrHttpHandler.java
index bb8cae84f86..e07dec24868 100644
--- a/example/src/main/java/io/netty/example/http2/server/Http2OrHttpHandler.java
+++ b/example/src/main/java/io/netty/example/http2/server/Http2OrHttpHandler.java
@@ -48,7 +48,7 @@ protected SelectedProtocol getProtocol(SSLEngine engine) {
@Override
protected ChannelHandler createHttp1RequestHandler() {
- return new HelloWorldHttp1Handler();
+ return new HelloWorldHttp1Handler("ALPN Negotiation");
}
@Override
diff --git a/example/src/main/java/io/netty/example/http2/server/Http2ServerInitializer.java b/example/src/main/java/io/netty/example/http2/server/Http2ServerInitializer.java
index ac36a9819b4..cc5a7f31ada 100644
--- a/example/src/main/java/io/netty/example/http2/server/Http2ServerInitializer.java
+++ b/example/src/main/java/io/netty/example/http2/server/Http2ServerInitializer.java
@@ -19,7 +19,9 @@
import io.netty.channel.ChannelHandlerAdapter;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInitializer;
+import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.channel.socket.SocketChannel;
+import io.netty.handler.codec.http.HttpMessage;
import io.netty.handler.codec.http.HttpServerCodec;
import io.netty.handler.codec.http.HttpServerUpgradeHandler;
import io.netty.handler.codec.http2.Http2ServerUpgradeCodec;
@@ -66,6 +68,16 @@ private static void configureClearText(SocketChannel ch) {
ch.pipeline().addLast(sourceCodec);
ch.pipeline().addLast(upgradeHandler);
+ ch.pipeline().addLast(new SimpleChannelInboundHandler<HttpMessage>() {
+ @Override
+ protected void messageReceived(ChannelHandlerContext ctx, HttpMessage msg) throws Exception {
+ // If this handler is hit then no upgrade has been attempted and the client is just talking HTTP.
+ System.err.println("Directly talking: " + msg.protocolVersion() + " (no upgrade was attempted)");
+ ctx.pipeline().replace(this, "http-hello-world",
+ new HelloWorldHttp1Handler("Direct. No Upgrade Attempted."));
+ ctx.fireChannelRead(msg);
+ }
+ });
ch.pipeline().addLast(new UserEventLogger());
}
| null | train | train | 2015-05-18T10:23:54 | 2015-05-04T22:21:01Z | Scottmitch | val |
netty/netty/3815_3873 | netty/netty | netty/netty/3815 | netty/netty/3873 | [
"timestamp(timedelta=41.0, similarity=0.9999999999999998)"
] | e72d04509f7d04d2a0a0e96cc5555728ac03aa29 | 88b5d162226c1c87de563a2790c922d86855d20b | [
"/cc @nmittler @Scottmitch \n",
"@trustin makes sense.\n",
"@trustin SGTM\n",
"+1\n",
"@trustin - Are you taking the assignment on this one?\n",
"I'm on a short travel right now, but will take care of the 3\nHTTP/2-related issues I filed once I'm back on Monday.\n",
"Thanks @trustin\n\n> Am 23.05.2015 ... | [
"nit: consider just inlining: `ch.pipeline().addLast(...)`;\n"
] | 2015-06-09T06:10:34Z | [
"improvement"
] | Do not use hard-coded handler names in HTTP/2 | Our HTTP/2 implementation sometimes uses hard-coded handler names when adding/removing a handler to/from a pipeline. It's not really a good idea because it can easily result in name clashes.
Unless there is a good reason, we need to use the reference to the handlers:
``` java
pipeline.addLast(handler1); // No name specified. Netty will generate one from its name cache for you.
pipeline.remove(handler1);
...
```
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ClientUpgradeCodec.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ServerUpgradeCodec.java",
"example/src/main/java/io/netty/example/http2/helloworld/client/Http2ClientInitializer.java",
"example/src/main/java/io/netty/example/ht... | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ClientUpgradeCodec.java",
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ServerUpgradeCodec.java",
"example/src/main/java/io/netty/example/http2/helloworld/client/Http2ClientInitializer.java",
"example/src/main/java/io/netty/example/ht... | [
"codec-http2/src/test/java/io/netty/handler/codec/http2/Http2FrameRoundtripTest.java",
"codec-http2/src/test/java/io/netty/handler/codec/http2/InboundHttp2ToHttpAdapterTest.java"
] | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ClientUpgradeCodec.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ClientUpgradeCodec.java
index 9476e59161b..0cdb65b9416 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ClientUpgradeCodec.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ClientUpgradeCodec.java
@@ -14,16 +14,6 @@
*/
package io.netty.handler.codec.http2;
-import static io.netty.handler.codec.base64.Base64Dialect.URL_SAFE;
-import static io.netty.handler.codec.http2.Http2CodecUtil.HTTP_UPGRADE_PROTOCOL_NAME;
-import static io.netty.handler.codec.http2.Http2CodecUtil.HTTP_UPGRADE_SETTINGS_HEADER;
-import static io.netty.handler.codec.http2.Http2CodecUtil.SETTING_ENTRY_LENGTH;
-import static io.netty.handler.codec.http2.Http2CodecUtil.writeUnsignedInt;
-import static io.netty.handler.codec.http2.Http2CodecUtil.writeUnsignedShort;
-import static io.netty.util.CharsetUtil.UTF_8;
-import static io.netty.util.ReferenceCountUtil.release;
-import static io.netty.util.internal.ObjectUtil.checkNotNull;
-
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.base64.Base64;
@@ -36,6 +26,16 @@
import java.util.Collections;
import java.util.List;
+import static io.netty.handler.codec.base64.Base64Dialect.URL_SAFE;
+import static io.netty.handler.codec.http2.Http2CodecUtil.HTTP_UPGRADE_PROTOCOL_NAME;
+import static io.netty.handler.codec.http2.Http2CodecUtil.HTTP_UPGRADE_SETTINGS_HEADER;
+import static io.netty.handler.codec.http2.Http2CodecUtil.SETTING_ENTRY_LENGTH;
+import static io.netty.handler.codec.http2.Http2CodecUtil.writeUnsignedInt;
+import static io.netty.handler.codec.http2.Http2CodecUtil.writeUnsignedShort;
+import static io.netty.util.CharsetUtil.UTF_8;
+import static io.netty.util.ReferenceCountUtil.release;
+import static io.netty.util.internal.ObjectUtil.checkNotNull;
+
/**
* Client-side cleartext upgrade codec from HTTP to HTTP/2.
*/
@@ -50,21 +50,21 @@ public class Http2ClientUpgradeCodec implements HttpClientUpgradeHandler.Upgrade
* Creates the codec using a default name for the connection handler when adding to the
* pipeline.
*
- * @param connectionHandler the HTTP/2 connection handler.
+ * @param connectionHandler the HTTP/2 connection handler
*/
public Http2ClientUpgradeCodec(Http2ConnectionHandler connectionHandler) {
- this("http2ConnectionHandler", connectionHandler);
+ this(null, connectionHandler);
}
/**
* Creates the codec providing an upgrade to the given handler for HTTP/2.
*
- * @param handlerName the name of the HTTP/2 connection handler to be used in the pipeline.
- * @param connectionHandler the HTTP/2 connection handler.
+ * @param handlerName the name of the HTTP/2 connection handler to be used in the pipeline,
+ * or {@code null} to auto-generate the name
+ * @param connectionHandler the HTTP/2 connection handler
*/
- public Http2ClientUpgradeCodec(String handlerName,
- Http2ConnectionHandler connectionHandler) {
- this.handlerName = checkNotNull(handlerName, "handlerName");
+ public Http2ClientUpgradeCodec(String handlerName, Http2ConnectionHandler connectionHandler) {
+ this.handlerName = handlerName;
this.connectionHandler = checkNotNull(connectionHandler, "connectionHandler");
}
diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ServerUpgradeCodec.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ServerUpgradeCodec.java
index aaac9e83d47..0c04e9b9930 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ServerUpgradeCodec.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ServerUpgradeCodec.java
@@ -14,14 +14,6 @@
*/
package io.netty.handler.codec.http2;
-import static io.netty.handler.codec.base64.Base64Dialect.URL_SAFE;
-import static io.netty.handler.codec.http.HttpResponseStatus.BAD_REQUEST;
-import static io.netty.handler.codec.http2.Http2CodecUtil.FRAME_HEADER_LENGTH;
-import static io.netty.handler.codec.http2.Http2CodecUtil.HTTP_UPGRADE_PROTOCOL_NAME;
-import static io.netty.handler.codec.http2.Http2CodecUtil.HTTP_UPGRADE_SETTINGS_HEADER;
-import static io.netty.handler.codec.http2.Http2CodecUtil.writeFrameHeader;
-import static io.netty.handler.codec.http2.Http2FrameTypes.SETTINGS;
-import static io.netty.util.internal.ObjectUtil.checkNotNull;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufUtil;
import io.netty.channel.ChannelHandlerContext;
@@ -36,6 +28,15 @@
import java.util.Collections;
import java.util.List;
+import static io.netty.handler.codec.base64.Base64Dialect.URL_SAFE;
+import static io.netty.handler.codec.http.HttpResponseStatus.BAD_REQUEST;
+import static io.netty.handler.codec.http2.Http2CodecUtil.FRAME_HEADER_LENGTH;
+import static io.netty.handler.codec.http2.Http2CodecUtil.HTTP_UPGRADE_PROTOCOL_NAME;
+import static io.netty.handler.codec.http2.Http2CodecUtil.HTTP_UPGRADE_SETTINGS_HEADER;
+import static io.netty.handler.codec.http2.Http2CodecUtil.writeFrameHeader;
+import static io.netty.handler.codec.http2.Http2FrameTypes.SETTINGS;
+import static io.netty.util.internal.ObjectUtil.checkNotNull;
+
/**
* Server-side codec for performing a cleartext upgrade from HTTP/1.x to HTTP/2.
*/
@@ -52,20 +53,21 @@ public class Http2ServerUpgradeCodec implements HttpServerUpgradeHandler.Upgrade
* Creates the codec using a default name for the connection handler when adding to the
* pipeline.
*
- * @param connectionHandler the HTTP/2 connection handler.
+ * @param connectionHandler the HTTP/2 connection handler
*/
public Http2ServerUpgradeCodec(Http2ConnectionHandler connectionHandler) {
- this("http2ConnectionHandler", connectionHandler);
+ this(null, connectionHandler);
}
/**
* Creates the codec providing an upgrade to the given handler for HTTP/2.
*
- * @param handlerName the name of the HTTP/2 connection handler to be used in the pipeline.
- * @param connectionHandler the HTTP/2 connection handler.
+ * @param handlerName the name of the HTTP/2 connection handler to be used in the pipeline,
+ * or {@code null} to auto-generate the name
+ * @param connectionHandler the HTTP/2 connection handler
*/
public Http2ServerUpgradeCodec(String handlerName, Http2ConnectionHandler connectionHandler) {
- this.handlerName = checkNotNull(handlerName, "handlerName");
+ this.handlerName = handlerName;
this.connectionHandler = checkNotNull(connectionHandler, "connectionHandler");
frameReader = new DefaultHttp2FrameReader();
}
diff --git a/example/src/main/java/io/netty/example/http2/helloworld/client/Http2ClientInitializer.java b/example/src/main/java/io/netty/example/http2/helloworld/client/Http2ClientInitializer.java
index 2504966f103..ab58c58bae8 100644
--- a/example/src/main/java/io/netty/example/http2/helloworld/client/Http2ClientInitializer.java
+++ b/example/src/main/java/io/netty/example/http2/helloworld/client/Http2ClientInitializer.java
@@ -14,8 +14,6 @@
*/
package io.netty.example.http2.helloworld.client;
-import static io.netty.handler.logging.LogLevel.INFO;
-
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
@@ -41,6 +39,8 @@
import io.netty.handler.codec.http2.InboundHttp2ToHttpAdapter;
import io.netty.handler.ssl.SslContext;
+import static io.netty.handler.logging.LogLevel.INFO;
+
/**
* Configures the client pipeline to support HTTP/2 frames.
*/
@@ -88,8 +88,7 @@ public Http2SettingsHandler settingsHandler() {
}
protected void configureEndOfPipeline(ChannelPipeline pipeline) {
- pipeline.addLast("Http2SettingsHandler", settingsHandler);
- pipeline.addLast("HttpResponseHandler", responseHandler);
+ pipeline.addLast(settingsHandler, responseHandler);
}
/**
@@ -97,8 +96,8 @@ protected void configureEndOfPipeline(ChannelPipeline pipeline) {
*/
private void configureSsl(SocketChannel ch) {
ChannelPipeline pipeline = ch.pipeline();
- pipeline.addLast("SslHandler", sslCtx.newHandler(ch.alloc()));
- pipeline.addLast("Http2Handler", connectionHandler);
+ pipeline.addLast(sslCtx.newHandler(ch.alloc()),
+ connectionHandler);
configureEndOfPipeline(pipeline);
}
@@ -110,10 +109,10 @@ private void configureClearText(SocketChannel ch) {
Http2ClientUpgradeCodec upgradeCodec = new Http2ClientUpgradeCodec(connectionHandler);
HttpClientUpgradeHandler upgradeHandler = new HttpClientUpgradeHandler(sourceCodec, upgradeCodec, 65536);
- ch.pipeline().addLast("Http2SourceCodec", sourceCodec);
- ch.pipeline().addLast("Http2UpgradeHandler", upgradeHandler);
- ch.pipeline().addLast("Http2UpgradeRequestHandler", new UpgradeRequestHandler());
- ch.pipeline().addLast("Logger", new UserEventLogger());
+ ch.pipeline().addLast(sourceCodec,
+ upgradeHandler,
+ new UpgradeRequestHandler(),
+ new UserEventLogger());
}
/**
@@ -131,7 +130,7 @@ public void channelActive(ChannelHandlerContext ctx) throws Exception {
// Done with this handler, remove it from the pipeline.
ctx.pipeline().remove(this);
- Http2ClientInitializer.this.configureEndOfPipeline(ctx.pipeline());
+ configureEndOfPipeline(ctx.pipeline());
}
}
diff --git a/example/src/main/java/io/netty/example/http2/tiles/HttpServer.java b/example/src/main/java/io/netty/example/http2/tiles/HttpServer.java
index a961b4eded9..8b639cdc429 100644
--- a/example/src/main/java/io/netty/example/http2/tiles/HttpServer.java
+++ b/example/src/main/java/io/netty/example/http2/tiles/HttpServer.java
@@ -54,11 +54,10 @@ public ChannelFuture start() throws Exception {
.childHandler(new ChannelInitializer<SocketChannel>() {
@Override
protected void initChannel(SocketChannel ch) throws Exception {
- ChannelPipeline pipeline = ch.pipeline();
- pipeline.addLast("httpRequestDecoder", new HttpRequestDecoder());
- pipeline.addLast("httpResponseEncoder", new HttpResponseEncoder());
- pipeline.addLast("httpChunkAggregator", new HttpObjectAggregator(MAX_CONTENT_LENGTH));
- pipeline.addLast("httpRequestHandler", new Http1RequestHandler());
+ ch.pipeline().addLast(new HttpRequestDecoder(),
+ new HttpResponseEncoder(),
+ new HttpObjectAggregator(MAX_CONTENT_LENGTH),
+ new Http1RequestHandler());
}
});
| diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2FrameRoundtripTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2FrameRoundtripTest.java
index ba7abd961db..077d48ae4d5 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2FrameRoundtripTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2FrameRoundtripTest.java
@@ -15,19 +15,6 @@
package io.netty.handler.codec.http2;
-import static io.netty.handler.codec.http2.Http2TestUtil.as;
-import static io.netty.handler.codec.http2.Http2TestUtil.randomString;
-import static io.netty.handler.codec.http2.Http2TestUtil.runInChannel;
-import static io.netty.util.CharsetUtil.UTF_8;
-import static java.util.concurrent.TimeUnit.SECONDS;
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertTrue;
-import static org.mockito.Matchers.any;
-import static org.mockito.Matchers.anyInt;
-import static org.mockito.Matchers.eq;
-import static org.mockito.Mockito.doAnswer;
-import static org.mockito.Mockito.times;
-import static org.mockito.Mockito.verify;
import io.netty.bootstrap.Bootstrap;
import io.netty.bootstrap.ServerBootstrap;
import io.netty.buffer.ByteBuf;
@@ -44,6 +31,13 @@
import io.netty.handler.codec.http2.Http2TestUtil.Http2Runnable;
import io.netty.util.NetUtil;
import io.netty.util.concurrent.Future;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+import org.mockito.Mock;
+import org.mockito.MockitoAnnotations;
+import org.mockito.invocation.InvocationOnMock;
+import org.mockito.stubbing.Answer;
import java.net.InetSocketAddress;
import java.util.ArrayList;
@@ -52,13 +46,19 @@
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
-import org.junit.After;
-import org.junit.Before;
-import org.junit.Test;
-import org.mockito.Mock;
-import org.mockito.MockitoAnnotations;
-import org.mockito.invocation.InvocationOnMock;
-import org.mockito.stubbing.Answer;
+import static io.netty.handler.codec.http2.Http2TestUtil.as;
+import static io.netty.handler.codec.http2.Http2TestUtil.randomString;
+import static io.netty.handler.codec.http2.Http2TestUtil.runInChannel;
+import static io.netty.util.CharsetUtil.UTF_8;
+import static java.util.concurrent.TimeUnit.SECONDS;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+import static org.mockito.Matchers.any;
+import static org.mockito.Matchers.anyInt;
+import static org.mockito.Matchers.eq;
+import static org.mockito.Mockito.doAnswer;
+import static org.mockito.Mockito.times;
+import static org.mockito.Mockito.verify;
/**
* Tests encoding/decoding each HTTP2 frame type.
@@ -362,7 +362,7 @@ private void bootstrapEnv(int requestCountDown) throws Exception {
protected void initChannel(Channel ch) throws Exception {
ChannelPipeline p = ch.pipeline();
serverAdapter = new Http2TestUtil.FrameAdapter(serverListener, requestLatch);
- p.addLast("reader", serverAdapter);
+ p.addLast(serverAdapter);
}
});
@@ -372,7 +372,7 @@ protected void initChannel(Channel ch) throws Exception {
@Override
protected void initChannel(Channel ch) throws Exception {
ChannelPipeline p = ch.pipeline();
- p.addLast("reader", new Http2TestUtil.FrameAdapter(null, null));
+ p.addLast(new Http2TestUtil.FrameAdapter(null, null));
}
});
diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/InboundHttp2ToHttpAdapterTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/InboundHttp2ToHttpAdapterTest.java
index 807721da6f8..10deac6d0cd 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/InboundHttp2ToHttpAdapterTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/InboundHttp2ToHttpAdapterTest.java
@@ -14,17 +14,6 @@
*/
package io.netty.handler.codec.http2;
-import static io.netty.handler.codec.http2.Http2Exception.isStreamError;
-import static io.netty.handler.codec.http2.Http2CodecUtil.getEmbeddedHttp2Exception;
-import static io.netty.handler.codec.http2.Http2TestUtil.as;
-import static io.netty.handler.codec.http2.Http2TestUtil.runInChannel;
-import static java.util.concurrent.TimeUnit.MILLISECONDS;
-import static java.util.concurrent.TimeUnit.SECONDS;
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertTrue;
-import static org.mockito.Mockito.reset;
-import static org.mockito.Mockito.times;
-import static org.mockito.Mockito.verify;
import io.netty.bootstrap.Bootstrap;
import io.netty.bootstrap.ServerBootstrap;
import io.netty.buffer.ByteBuf;
@@ -57,11 +46,6 @@
import io.netty.util.CharsetUtil;
import io.netty.util.NetUtil;
import io.netty.util.concurrent.Future;
-
-import java.net.InetSocketAddress;
-import java.util.List;
-import java.util.concurrent.CountDownLatch;
-
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
@@ -69,6 +53,22 @@
import org.mockito.Mock;
import org.mockito.MockitoAnnotations;
+import java.net.InetSocketAddress;
+import java.util.List;
+import java.util.concurrent.CountDownLatch;
+
+import static io.netty.handler.codec.http2.Http2CodecUtil.getEmbeddedHttp2Exception;
+import static io.netty.handler.codec.http2.Http2Exception.isStreamError;
+import static io.netty.handler.codec.http2.Http2TestUtil.as;
+import static io.netty.handler.codec.http2.Http2TestUtil.runInChannel;
+import static java.util.concurrent.TimeUnit.MILLISECONDS;
+import static java.util.concurrent.TimeUnit.SECONDS;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+import static org.mockito.Mockito.reset;
+import static org.mockito.Mockito.times;
+import static org.mockito.Mockito.verify;
+
/**
* Testing the {@link InboundHttp2ToHttpPriorityAdapter} and base class {@link InboundHttp2ToHttpAdapter} for HTTP/2
* frames into {@link HttpObject}s
@@ -124,15 +124,16 @@ public void setup() throws Exception {
protected void initChannel(Channel ch) throws Exception {
ChannelPipeline p = ch.pipeline();
Http2Connection connection = new DefaultHttp2Connection(true);
- p.addLast(
- "reader",
- new HttpAdapterFrameAdapter(connection,
- new InboundHttp2ToHttpPriorityAdapter.Builder(connection)
- .maxContentLength(maxContentLength)
- .validateHttpHeaders(true)
- .propagateSettings(true)
- .build(),
- new CountDownLatch(10)));
+
+ p.addLast(new HttpAdapterFrameAdapter(
+ connection,
+ new InboundHttp2ToHttpPriorityAdapter.Builder(connection)
+ .maxContentLength(maxContentLength)
+ .validateHttpHeaders(true)
+ .propagateSettings(true)
+ .build(),
+ new CountDownLatch(10)));
+
serverDelegator = new HttpResponseDelegator(serverListener, serverLatch);
p.addLast(serverDelegator);
serverConnectedChannel = ch;
@@ -160,13 +161,14 @@ public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws E
protected void initChannel(Channel ch) throws Exception {
ChannelPipeline p = ch.pipeline();
Http2Connection connection = new DefaultHttp2Connection(false);
- p.addLast(
- "reader",
- new HttpAdapterFrameAdapter(connection,
- new InboundHttp2ToHttpPriorityAdapter.Builder(connection)
+
+ p.addLast(new HttpAdapterFrameAdapter(
+ connection,
+ new InboundHttp2ToHttpPriorityAdapter.Builder(connection)
.maxContentLength(maxContentLength)
.build(),
- new CountDownLatch(10)));
+ new CountDownLatch(10)));
+
clientDelegator = new HttpResponseDelegator(clientListener, clientLatch);
p.addLast(clientDelegator);
}
| val | train | 2015-06-09T07:27:02 | 2015-05-21T08:13:53Z | trustin | val |
netty/netty/3883_3889 | netty/netty | netty/netty/3883 | netty/netty/3889 | [
"timestamp(timedelta=15.0, similarity=0.9305900770455292)"
] | f2796bae29ae729cb3f77e0a3e2711d6c1a184f4 | 60874e337df0d1ef6ab36e9a76d8f54bab950898 | [
"@edwinchoi thanks for the report... Will fix it\n",
"@edwinchoi can you verify the fix:\n\nhttps://github.com/netty/netty/pull/3889\n\nThanks!\n",
"@normanmaurer works as expected now, thanks!\n",
"Fixed\n"
] | [] | 2015-06-16T18:05:38Z | [
"defect"
] | OpenSSL SSLSession returns incorrect peer principal | According to the docs for `javax.net.ssl.SSLSession` (http://docs.oracle.com/javase/7/docs/api/javax/net/ssl/SSLSession.html#getPeerPrincipal()), `getPeerPrincipal` should be returning the identity of the peer.
The OpenSSL based implementation for Netty, however, is returning the identity of the issuer:
https://github.com/netty/netty/blob/netty-4.0.28.Final/handler/src/main/java/io/netty/handler/ssl/OpenSslEngine.java#L1406
`getPeerPrincipal()` calls `principal(certs)`, which casts to `X509Certificate` and calls `getIssuerX500Principal()` when it should be calling `getSubjectX500Principal()`.
The JDK based implementation returns the correct principal.
| [
"handler/src/main/java/io/netty/handler/ssl/OpenSslEngine.java"
] | [
"handler/src/main/java/io/netty/handler/ssl/OpenSslEngine.java"
] | [] | diff --git a/handler/src/main/java/io/netty/handler/ssl/OpenSslEngine.java b/handler/src/main/java/io/netty/handler/ssl/OpenSslEngine.java
index 53ef4334465..756c11fb163 100644
--- a/handler/src/main/java/io/netty/handler/ssl/OpenSslEngine.java
+++ b/handler/src/main/java/io/netty/handler/ssl/OpenSslEngine.java
@@ -1446,7 +1446,7 @@ public Principal getPeerPrincipal() throws SSLPeerUnverifiedException {
if (peer == null || peer.length == 0) {
return null;
}
- return principal(peer);
+ return ((java.security.cert.X509Certificate) peer[0]).getSubjectX500Principal();
}
@Override
@@ -1455,11 +1455,7 @@ public Principal getLocalPrincipal() {
if (local == null || local.length == 0) {
return null;
}
- return principal(local);
- }
-
- private Principal principal(Certificate[] certs) {
- return ((java.security.cert.X509Certificate) certs[0]).getIssuerX500Principal();
+ return ((java.security.cert.X509Certificate) local[0]).getIssuerX500Principal();
}
@Override
| null | train | train | 2015-06-12T21:42:44 | 2015-06-12T00:22:22Z | edwinchoi | val |
netty/netty/3881_3890 | netty/netty | netty/netty/3881 | netty/netty/3890 | [
"timestamp(timedelta=15.0, similarity=0.9466935819050754)"
] | bb17071ea0cb532ce01d2352b4cd001ecc356e96 | 476bcf57211b932aa89868e9c14fe40b46200e51 | [
"@fratboy - Thanks for reporting!\n\n@normanmaurer - Seems legit. WDYT?\n",
"@Scottmitch yep, will fix\n",
"Fixed\n",
":+1: \n"
] | [] | 2015-06-16T18:11:22Z | [
"defect"
] | FixedChannelPool creates 1 more channel than maxConnections | Netty version: 4.0.10.Beta5
Context:
1 more channel was created, when I tried to acquire channel via `FixedChannelPool.acquire`
Steps to reproduct:
1. Create a `FixedChannelPool` with `N` `maxConnections`
2. Call `FixedChannelPool.acquire` from enough many thread
3. `N+1` channels is created.
Opinion:
It looks like
https://github.com/netty/netty/blob/master/transport/src/main/java/io/netty/channel/pool/FixedChannelPool.java#L267
``` java
while (acquiredChannelCount <= maxConnections) {
```
should be
``` java
while (acquiredChannelCount < maxConnections) {
```
| [
"transport/src/main/java/io/netty/channel/pool/FixedChannelPool.java"
] | [
"transport/src/main/java/io/netty/channel/pool/FixedChannelPool.java"
] | [] | diff --git a/transport/src/main/java/io/netty/channel/pool/FixedChannelPool.java b/transport/src/main/java/io/netty/channel/pool/FixedChannelPool.java
index 1066358dd60..ddafebb8d3a 100644
--- a/transport/src/main/java/io/netty/channel/pool/FixedChannelPool.java
+++ b/transport/src/main/java/io/netty/channel/pool/FixedChannelPool.java
@@ -264,7 +264,7 @@ private void decrementAndRunTaskQueue() {
}
private void runTaskQueue() {
- while (acquiredChannelCount <= maxConnections) {
+ while (acquiredChannelCount < maxConnections) {
AcquireTask task = pendingAcquireQueue.poll();
if (task == null) {
break;
| null | train | train | 2015-06-16T20:10:08 | 2015-06-11T03:40:43Z | alexpark7712 | val |
netty/netty/3888_3891 | netty/netty | netty/netty/3888 | netty/netty/3891 | [
"timestamp(timedelta=117016.0, similarity=0.9135473161642038)"
] | 303cb535239a6f07cbe24a033ef965e2f55758eb | ddd7bdfb8f9d66923eb17e3a4bcdf9bd49ee45f8 | [
"@purplefox FYI\n",
"Fixed\n"
] | [] | 2015-06-16T18:28:09Z | [
"improvement"
] | Number of PoolArena's should be 2 * cores | We use 2 \* cores as the default number of EventLoop's for NIO and EPOLL, so we should do the same for the number of PoolArena's. This will reduce contention.
jemalloc itself even use 4 \* cpus:
http://www.canonware.com/download/jemalloc/jemalloc-latest/doc/jemalloc.html
Related to:
https://groups.google.com/d/msg/netty/J78updTXPTU/T6-i_O6Hxu4J
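The default in the resulting patch can be sketched as a pure function: the arena count is `min(2 * cores, maxMemory / chunkSize / 2 / 3)`, so that with roughly three chunks per arena the pool stays under about 50% of max memory. The 16 MiB chunk size below assumes Netty's default `pageSize` (8192) and `maxOrder` (11); treat the concrete numbers as illustrative.

```java
public class ArenaDefaults {
    // Mirrors the shape of the default computation; not the actual Netty code.
    static long defaultNumArenas(int cores, long maxMemory, int chunkSize) {
        return Math.max(0, Math.min(cores * 2L, maxMemory / chunkSize / 2 / 3));
    }

    public static void main(String[] args) {
        int chunkSize = 8192 << 11; // 16 MiB with default pageSize/maxOrder
        // Plenty of memory: bounded by 2 * cores.
        System.out.println(defaultNumArenas(8, 4L << 30, chunkSize)); // 16
        // Tiny heap: bounded by the memory budget instead.
        System.out.println(defaultNumArenas(8, 128L << 20, chunkSize)); // 1
    }
}
```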
| [
"buffer/src/main/java/io/netty/buffer/PooledByteBufAllocator.java"
] | [
"buffer/src/main/java/io/netty/buffer/PooledByteBufAllocator.java"
] | [] | diff --git a/buffer/src/main/java/io/netty/buffer/PooledByteBufAllocator.java b/buffer/src/main/java/io/netty/buffer/PooledByteBufAllocator.java
index 1ea167883ff..148e6ec0d22 100644
--- a/buffer/src/main/java/io/netty/buffer/PooledByteBufAllocator.java
+++ b/buffer/src/main/java/io/netty/buffer/PooledByteBufAllocator.java
@@ -69,18 +69,24 @@ public class PooledByteBufAllocator extends AbstractByteBufAllocator {
// Determine reasonable default for nHeapArena and nDirectArena.
// Assuming each arena has 3 chunks, the pool should not consume more than 50% of max memory.
final Runtime runtime = Runtime.getRuntime();
+
+ // Use 2 * cores by default to reduce condition as we use 2 * cores for the number of EventLoops
+ // in NIO and EPOLL as well. If we choose a smaller number we will run into hotspots as allocation and
+ // deallocation needs to be synchronized on the PoolArena.
+ // See https://github.com/netty/netty/issues/3888
+ final int defaultMinNumArena = runtime.availableProcessors() * 2;
final int defaultChunkSize = DEFAULT_PAGE_SIZE << DEFAULT_MAX_ORDER;
DEFAULT_NUM_HEAP_ARENA = Math.max(0,
SystemPropertyUtil.getInt(
"io.netty.allocator.numHeapArenas",
(int) Math.min(
- runtime.availableProcessors(),
- Runtime.getRuntime().maxMemory() / defaultChunkSize / 2 / 3)));
+ defaultMinNumArena,
+ runtime.maxMemory() / defaultChunkSize / 2 / 3)));
DEFAULT_NUM_DIRECT_ARENA = Math.max(0,
SystemPropertyUtil.getInt(
"io.netty.allocator.numDirectArenas",
(int) Math.min(
- runtime.availableProcessors(),
+ defaultMinNumArena,
PlatformDependent.maxDirectMemory() / defaultChunkSize / 2 / 3)));
// cache sizes
@@ -202,7 +208,7 @@ public PooledByteBufAllocator(boolean preferDirect, int nHeapArena, int nDirectA
int tinyCacheSize, int smallCacheSize, int normalCacheSize,
long cacheThreadAliveCheckInterval) {
this(preferDirect, nHeapArena, nDirectArena, pageSize, maxOrder,
- tinyCacheSize, smallCacheSize, normalCacheSize);
+ tinyCacheSize, smallCacheSize, normalCacheSize);
}
@SuppressWarnings("unchecked")
| null | val | train | 2015-06-17T06:35:58 | 2015-06-16T12:10:23Z | normanmaurer | val |
netty/netty/3880_3894 | netty/netty | netty/netty/3880 | netty/netty/3894 | [
"timestamp(timedelta=71.0, similarity=0.9854607127688096)"
] | 1644d733ecd388b0b3ee6403118f0f84e1102fe9 | 416cf3b35f2c8cf3726bff4090a767e38a033142 | [] | [
"Can we move this logic into `PrefaceDecoder`? One of the reasons why `FrameDecoder` was introduced was to avoid having a conditional statement evaluate on every `decode` which will only be relevant for the first (or first couple) decode(s).\n",
"I think my previous comment still applies https://github.com/netty... | 2015-06-17T20:20:08Z | [
"improvement"
] | Need better error when first HTTP/2 frame is not SETTINGS | Right now if the protocol upgrade to HTTP/2 fails, the server may send an HTTP 1.1 response, leading to a frame parsing error at the client (due to frame length). It would be helpful, on receipt of the first frame, to verify that the frame type is SETTINGS and, if not, fail with a better error.
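A minimal sketch of the kind of check the issue asks for: an HTTP/2 frame header starts with a 3-byte length followed by a 1-byte type, and SETTINGS is type 0x4. Names here are illustrative; the actual fix (in the gold patch below) lives in Http2ConnectionHandler and throws a PROTOCOL_ERROR connection error instead of returning false:

```java
public class SettingsCheckSketch {
    static final byte SETTINGS = 0x4; // HTTP/2 SETTINGS frame type

    // Peek at byte 3 of the first frame header: the frame type.
    // Returns false when fewer than 4 bytes are available or the type is wrong;
    // a real implementation distinguishes "need more data" from "protocol error".
    static boolean firstFrameIsSettings(byte[] header) {
        if (header.length < 4) {
            return false; // need more data
        }
        return header[3] == SETTINGS;
    }

    public static void main(String[] args) {
        System.out.println(firstFrameIsSettings(new byte[]{0, 0, 0, 0x4}));       // true
        // An HTTP/1.1 response ("HTTP...") read as a frame header fails fast.
        System.out.println(firstFrameIsSettings(new byte[]{'H', 'T', 'T', 'P'})); // false
    }
}
```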
| [
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java"
] | [
"codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java"
] | [
"codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionHandlerTest.java"
] | diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java
index 4ce96fd0daa..042912eb6d2 100644
--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java
+++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/Http2ConnectionHandler.java
@@ -14,6 +14,7 @@
*/
package io.netty.handler.codec.http2;
+import static io.netty.buffer.ByteBufUtil.hexDump;
import static io.netty.handler.codec.http2.Http2CodecUtil.HTTP_UPGRADE_STREAM_ID;
import static io.netty.handler.codec.http2.Http2CodecUtil.connectionPrefaceBuf;
import static io.netty.handler.codec.http2.Http2CodecUtil.getEmbeddedHttp2Exception;
@@ -22,8 +23,10 @@
import static io.netty.handler.codec.http2.Http2Error.PROTOCOL_ERROR;
import static io.netty.handler.codec.http2.Http2Exception.connectionError;
import static io.netty.handler.codec.http2.Http2Exception.isStreamError;
+import static io.netty.handler.codec.http2.Http2FrameTypes.SETTINGS;
import static io.netty.util.CharsetUtil.UTF_8;
import static io.netty.util.internal.ObjectUtil.checkNotNull;
+import static java.lang.Math.min;
import static java.lang.String.format;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufUtil;
@@ -216,10 +219,10 @@ public boolean prefaceSent() {
@Override
public void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) throws Exception {
try {
- if (readClientPrefaceString(in)) {
+ if (readClientPrefaceString(in) && verifyFirstFrameIsSettings(in)) {
// After the preface is read, it is time to hand over control to the post initialized decoder.
- Http2ConnectionHandler.this.byteDecoder = new FrameDecoder();
- Http2ConnectionHandler.this.byteDecoder.decode(ctx, in, out);
+ byteDecoder = new FrameDecoder();
+ byteDecoder.decode(ctx, in, out);
}
} catch (Throwable e) {
onException(ctx, e);
@@ -268,12 +271,15 @@ private boolean readClientPrefaceString(ByteBuf in) throws Http2Exception {
}
int prefaceRemaining = clientPrefaceString.readableBytes();
- int bytesRead = Math.min(in.readableBytes(), prefaceRemaining);
+ int bytesRead = min(in.readableBytes(), prefaceRemaining);
// If the input so far doesn't match the preface, break the connection.
if (bytesRead == 0 || !ByteBufUtil.equals(in, in.readerIndex(),
clientPrefaceString, clientPrefaceString.readerIndex(), bytesRead)) {
- throw connectionError(PROTOCOL_ERROR, "HTTP/2 client preface string missing or corrupt.");
+ String receivedBytes = hexDump(in, in.readerIndex(),
+ min(in.readableBytes(), clientPrefaceString.readableBytes()));
+ throw connectionError(PROTOCOL_ERROR, "HTTP/2 client preface string missing or corrupt. " +
+ "Hex dump for received bytes: %s", receivedBytes);
}
in.skipBytes(bytesRead);
clientPrefaceString.skipBytes(bytesRead);
@@ -287,6 +293,28 @@ private boolean readClientPrefaceString(ByteBuf in) throws Http2Exception {
return false;
}
+ /**
+ * Peeks at that the next frame in the buffer and verifies that it is a {@code SETTINGS} frame.
+ *
+ * @param in the inbound buffer.
+ * @return {@code} true if the next frame is a {@code SETTINGS} frame, {@code false} if more
+ * data is required before we can determine the next frame type.
+ * @throws Http2Exception thrown if the next frame is NOT a {@code SETTINGS} frame.
+ */
+ private boolean verifyFirstFrameIsSettings(ByteBuf in) throws Http2Exception {
+ if (in.readableBytes() < 4) {
+ // Need more data before we can see the frame type for the first frame.
+ return false;
+ }
+
+ byte frameType = in.getByte(in.readerIndex() + 3);
+ if (frameType != SETTINGS) {
+ throw connectionError(PROTOCOL_ERROR, "First received frame was not SETTINGS. " +
+ "Hex dump for first 4 bytes: %s", hexDump(in, in.readerIndex(), 4));
+ }
+ return true;
+ }
+
/**
* Sends the HTTP/2 connection preface upon establishment of the connection, if not already sent.
*/
| diff --git a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionHandlerTest.java b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionHandlerTest.java
index 7f44126aadb..4dc3feb9781 100644
--- a/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionHandlerTest.java
+++ b/codec-http2/src/test/java/io/netty/handler/codec/http2/Http2ConnectionHandlerTest.java
@@ -30,12 +30,14 @@
import static org.mockito.Matchers.anyInt;
import static org.mockito.Matchers.anyLong;
import static org.mockito.Matchers.eq;
+import static org.mockito.Mockito.atLeastOnce;
import static org.mockito.Mockito.doAnswer;
import static org.mockito.Mockito.never;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.verifyNoMoreInteractions;
import static org.mockito.Mockito.when;
+
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.buffer.UnpooledByteBufAllocator;
@@ -47,9 +49,6 @@
import io.netty.channel.DefaultChannelPromise;
import io.netty.util.CharsetUtil;
import io.netty.util.concurrent.GenericFutureListener;
-
-import java.util.List;
-
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
@@ -60,6 +59,8 @@
import org.mockito.invocation.InvocationOnMock;
import org.mockito.stubbing.Answer;
+import java.util.List;
+
/**
* Tests for {@link Http2ConnectionHandler}
*/
@@ -202,22 +203,27 @@ public void serverReceivingInvalidClientPrefaceStringShouldHandleException() thr
}
@Test
- public void serverReceivingValidClientPrefaceStringShouldContinueReadingFrames() throws Exception {
+ public void serverReceivingClientPrefaceStringFollowedByNonSettingsShouldHandleException()
+ throws Exception {
when(connection.isServer()).thenReturn(true);
handler = newHandler();
- ByteBuf preface = connectionPrefaceBuf();
- ByteBuf prefacePlusSome = Unpooled.wrappedBuffer(new byte[preface.readableBytes() + 1]);
- prefacePlusSome.resetWriterIndex().writeBytes(preface).writeByte(0);
- handler.channelRead(ctx, prefacePlusSome);
- verify(decoder, times(2)).decodeFrame(eq(ctx), any(ByteBuf.class), Matchers.<List<Object>>any());
+
+ // Create a connection preface followed by a bunch of zeros (i.e. not a settings frame).
+ ByteBuf buf = Unpooled.buffer().writeBytes(connectionPrefaceBuf()).writeZero(10);
+ handler.channelRead(ctx, buf);
+ ArgumentCaptor<ByteBuf> captor = ArgumentCaptor.forClass(ByteBuf.class);
+ verify(frameWriter, atLeastOnce()).writeGoAway(eq(ctx), eq(0), eq(PROTOCOL_ERROR.code()),
+ captor.capture(), eq(promise));
+ assertEquals(0, captor.getValue().refCnt());
}
@Test
- public void serverReceivingValidClientPrefaceStringShouldOnlyReadWholeFrame() throws Exception {
+ public void serverReceivingValidClientPrefaceStringShouldContinueReadingFrames() throws Exception {
when(connection.isServer()).thenReturn(true);
handler = newHandler();
- handler.channelRead(ctx, connectionPrefaceBuf());
- verify(decoder).decodeFrame(any(ChannelHandlerContext.class),
+ ByteBuf prefacePlusSome = addSettingsHeader(Unpooled.buffer().writeBytes(connectionPrefaceBuf()));
+ handler.channelRead(ctx, prefacePlusSome);
+ verify(decoder, atLeastOnce()).decodeFrame(any(ChannelHandlerContext.class),
any(ByteBuf.class), Matchers.<List<Object>>any());
}
@@ -228,6 +234,7 @@ public void verifyChannelHandlerCanBeReusedInPipeline() throws Exception {
// Only read the connection preface...after preface is read internal state of Http2ConnectionHandler
// is expected to change relative to the pipeline.
ByteBuf preface = connectionPrefaceBuf();
+ handler.channelRead(ctx, preface);
verify(decoder, never()).decodeFrame(any(ChannelHandlerContext.class),
any(ByteBuf.class), Matchers.<List<Object>>any());
@@ -236,10 +243,9 @@ public void verifyChannelHandlerCanBeReusedInPipeline() throws Exception {
handler.handlerAdded(ctx);
// Now verify we can continue as normal, reading connection preface plus more.
- ByteBuf prefacePlusSome = Unpooled.wrappedBuffer(new byte[preface.readableBytes() + 1]);
- prefacePlusSome.resetWriterIndex().writeBytes(preface).writeByte(0);
+ ByteBuf prefacePlusSome = addSettingsHeader(Unpooled.buffer().writeBytes(connectionPrefaceBuf()));
handler.channelRead(ctx, prefacePlusSome);
- verify(decoder, times(2)).decodeFrame(eq(ctx), any(ByteBuf.class), Matchers.<List<Object>>any());
+ verify(decoder, atLeastOnce()).decodeFrame(eq(ctx), any(ByteBuf.class), Matchers.<List<Object>>any());
}
@Test
@@ -276,7 +282,8 @@ public void writeRstOnNonExistantStreamShouldSucceed() throws Exception {
handler = newHandler();
handler.resetStream(ctx, NON_EXISTANT_STREAM_ID, STREAM_CLOSED.code(), promise);
verify(frameWriter, never())
- .writeRstStream(any(ChannelHandlerContext.class), anyInt(), anyLong(), any(ChannelPromise.class));
+ .writeRstStream(any(ChannelHandlerContext.class), anyInt(), anyLong(),
+ any(ChannelPromise.class));
assertTrue(promise.isDone());
assertTrue(promise.isSuccess());
assertNull(promise.cause());
@@ -291,7 +298,8 @@ public void writeRstOnClosedStreamShouldSucceed() throws Exception {
// The stream is "closed" but is still known about by the connection (connection().stream(..)
// will return the stream). We should still write a RST_STREAM frame in this scenario.
handler.resetStream(ctx, STREAM_ID, STREAM_CLOSED.code(), promise);
- verify(frameWriter).writeRstStream(eq(ctx), eq(STREAM_ID), anyLong(), any(ChannelPromise.class));
+ verify(frameWriter).writeRstStream(eq(ctx), eq(STREAM_ID), anyLong(),
+ any(ChannelPromise.class));
}
@SuppressWarnings("unchecked")
@@ -346,7 +354,8 @@ public Void answer(InvocationOnMock invocation) throws Throwable {
handler.goAway(ctx, STREAM_ID, errorCode, data, promise);
verify(connection).goAwaySent(eq(STREAM_ID), eq(errorCode), eq(data));
- verify(frameWriter).writeGoAway(eq(ctx), eq(STREAM_ID), eq(errorCode), eq(data), eq(promise));
+ verify(frameWriter).writeGoAway(eq(ctx), eq(STREAM_ID), eq(errorCode), eq(data),
+ eq(promise));
verify(ctx).close();
assertEquals(0, data.refCnt());
}
@@ -358,7 +367,8 @@ public void canSendGoAwayFramesWithDecreasingLastStreamIds() throws Exception {
long errorCode = Http2Error.INTERNAL_ERROR.code();
handler.goAway(ctx, STREAM_ID + 2, errorCode, data.retain(), promise);
- verify(frameWriter).writeGoAway(eq(ctx), eq(STREAM_ID + 2), eq(errorCode), eq(data), eq(promise));
+ verify(frameWriter).writeGoAway(eq(ctx), eq(STREAM_ID + 2), eq(errorCode), eq(data),
+ eq(promise));
verify(connection).goAwaySent(eq(STREAM_ID + 2), eq(errorCode), eq(data));
promise = new DefaultChannelPromise(channel);
handler.goAway(ctx, STREAM_ID, errorCode, data, promise);
@@ -398,4 +408,12 @@ public void channelReadCompleteTriggersFlush() throws Exception {
private ByteBuf dummyData() {
return Unpooled.buffer().writeBytes("abcdefgh".getBytes(CharsetUtil.UTF_8));
}
+
+ private ByteBuf addSettingsHeader(ByteBuf buf) {
+ buf.writeMedium(Http2CodecUtil.SETTING_ENTRY_LENGTH);
+ buf.writeByte(Http2FrameTypes.SETTINGS);
+ buf.writeByte(0);
+ buf.writeInt(0);
+ return buf;
+ }
}
| val | train | 2015-06-17T06:36:28 | 2015-06-10T17:52:55Z | nmittler | val |
netty/netty/3915_3917 | netty/netty | netty/netty/3915 | netty/netty/3917 | [
"timestamp(timedelta=628.0, similarity=0.9322608455836241)"
] | b5337abe2aee8e20153900fb7024f989555cf142 | 29b79393ec7a19ba687d49691cab80161d40200d | [
"@louiscryan sounds like a bug... would you mind creating a PR with fix + unit test?\n",
"already in progress... :)\n",
"@louiscryan yay :) Ping me once ready for review\n",
"Cherry-picked, thanks!\n"
] | [] | 2015-06-26T21:45:47Z | [
"defect"
] | FixedCompositeByteBuf is broken copying bytes to direct buffers | @normanmaurer @trustin
In methods like `public ByteBuf getBytes(int index, ByteBuf dst, int dstIndex, int length)` it uses `int localLength = Math.min(length, s.capacity() - (index - adjustment));` instead of `int localLength = Math.min(length, s.readableBytes() - (index - adjustment));`.
This looks like a rough copy and paste from CompositeByteBuf, but unlike CompositeByteBuf it does not slice() buffers before they are added — and slicing is what makes capacity() == readableBytes(), so here the two can differ.
This issue did not appear when copying bytes to an Unpooled buffer, as that path used `public ByteBuf getBytes(int index, byte[] dst, int dstIndex, int length)`, which correctly used readableBytes.
Test to demonstrate....
class FixedCompositeByteBufTest {
....
@Test
public void testCopyingToOtherBuffer() {
ByteBuf buf1 = Unpooled.directBuffer(10);
ByteBuf buf2 = Unpooled.buffer(10);
ByteBuf buf3 = Unpooled.directBuffer(10);
buf1.writeBytes("a".getBytes(Charset.defaultCharset()));
buf2.writeBytes("b".getBytes(Charset.defaultCharset()));
buf3.writeBytes("c".getBytes(Charset.defaultCharset()));
ByteBuf composite = unmodifiableBuffer(buf1, buf2, buf3);
ByteBuf copy = Unpooled.directBuffer(3);
ByteBuf copy2 = Unpooled.buffer(3);
copy.setBytes(0, composite, 0, 3);
copy2.setBytes(0, composite, 0, 3);
copy.writerIndex(3);
copy2.writerIndex(3);
assertEquals(0, ByteBufUtil.compare(copy, composite));
assertEquals(0, ByteBufUtil.compare(copy2, composite));
assertEquals(0, ByteBufUtil.compare(copy, copy2));
}
}
Looking at the code it seems like this issue would occur for other methods too: `public ByteBuf getBytes(int index, OutputStream out, int length)` and `public ByteBuffer[] nioBuffers(int index, int length)`.
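The capacity-versus-readable-bytes mistake can be illustrated with a plain java.nio.ByteBuffer, where remaining() plays the role of Netty's readableBytes(). This is only an analogy, not Netty code:

```java
import java.nio.ByteBuffer;

public class LengthBoundSketch {
    public static void main(String[] args) {
        // A 10-byte component that only holds 1 readable byte, like the buffers in the test above.
        ByteBuffer component = ByteBuffer.allocate(10);
        component.put((byte) 'a');
        component.flip(); // position = 0, limit = 1

        int length = 3; // caller wants 3 bytes from the composite
        int buggy = Math.min(length, component.capacity());  // 3: reads past the written data
        int fixed = Math.min(length, component.remaining()); // 1: bounded by readable bytes
        System.out.println(buggy + " vs " + fixed); // 3 vs 1
    }
}
```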
| [
"buffer/src/main/java/io/netty/buffer/FixedCompositeByteBuf.java"
] | [
"buffer/src/main/java/io/netty/buffer/FixedCompositeByteBuf.java"
] | [
"buffer/src/test/java/io/netty/buffer/FixedCompositeByteBufTest.java"
] | diff --git a/buffer/src/main/java/io/netty/buffer/FixedCompositeByteBuf.java b/buffer/src/main/java/io/netty/buffer/FixedCompositeByteBuf.java
index 816c0b41de9..af28d682d8d 100644
--- a/buffer/src/main/java/io/netty/buffer/FixedCompositeByteBuf.java
+++ b/buffer/src/main/java/io/netty/buffer/FixedCompositeByteBuf.java
@@ -343,7 +343,7 @@ public ByteBuf getBytes(int index, ByteBuffer dst) {
int adjustment = c.offset;
ByteBuf s = c.buf;
for (;;) {
- int localLength = Math.min(length, s.capacity() - (index - adjustment));
+ int localLength = Math.min(length, s.readableBytes() - (index - adjustment));
dst.limit(dst.position() + localLength);
s.getBytes(index - adjustment, dst);
index += localLength;
@@ -372,7 +372,7 @@ public ByteBuf getBytes(int index, ByteBuf dst, int dstIndex, int length) {
int adjustment = c.offset;
ByteBuf s = c.buf;
for (;;) {
- int localLength = Math.min(length, s.capacity() - (index - adjustment));
+ int localLength = Math.min(length, s.readableBytes() - (index - adjustment));
s.getBytes(index - adjustment, dst, dstIndex, localLength);
index += localLength;
dstIndex += localLength;
@@ -414,7 +414,7 @@ public ByteBuf getBytes(int index, OutputStream out, int length) throws IOExcept
int adjustment = c.offset;
ByteBuf s = c.buf;
for (;;) {
- int localLength = Math.min(length, s.capacity() - (index - adjustment));
+ int localLength = Math.min(length, s.readableBytes() - (index - adjustment));
s.getBytes(index - adjustment, out, localLength);
index += localLength;
length -= localLength;
@@ -491,7 +491,7 @@ public ByteBuffer[] nioBuffers(int index, int length) {
int adjustment = c.offset;
ByteBuf s = c.buf;
for (;;) {
- int localLength = Math.min(length, s.capacity() - (index - adjustment));
+ int localLength = Math.min(length, s.readableBytes() - (index - adjustment));
switch (s.nioBufferCount()) {
case 0:
throw new UnsupportedOperationException();
| diff --git a/buffer/src/test/java/io/netty/buffer/FixedCompositeByteBufTest.java b/buffer/src/test/java/io/netty/buffer/FixedCompositeByteBufTest.java
index 7dea5ebf567..812e140b25c 100644
--- a/buffer/src/test/java/io/netty/buffer/FixedCompositeByteBufTest.java
+++ b/buffer/src/test/java/io/netty/buffer/FixedCompositeByteBufTest.java
@@ -23,6 +23,7 @@
import java.nio.ByteBuffer;
import java.nio.ReadOnlyBufferException;
import java.nio.channels.ScatteringByteChannel;
+import java.nio.charset.Charset;
import static io.netty.buffer.Unpooled.*;
import static io.netty.util.ReferenceCountUtil.*;
@@ -252,4 +253,58 @@ private static void testGatheringWritesSingleBuf(ByteBuf buf1) throws Exception
buf.release();
}
+
+ @Test
+ public void testCopyingToOtherBuffer() {
+ ByteBuf buf1 = Unpooled.directBuffer(10);
+ ByteBuf buf2 = Unpooled.buffer(10);
+ ByteBuf buf3 = Unpooled.directBuffer(10);
+ buf1.writeBytes("a".getBytes(Charset.defaultCharset()));
+ buf2.writeBytes("b".getBytes(Charset.defaultCharset()));
+ buf3.writeBytes("c".getBytes(Charset.defaultCharset()));
+ ByteBuf composite = unmodifiableBuffer(buf1, buf2, buf3);
+ ByteBuf copy = Unpooled.directBuffer(3);
+ ByteBuf copy2 = Unpooled.buffer(3);
+ copy.setBytes(0, composite, 0, 3);
+ copy2.setBytes(0, composite, 0, 3);
+ copy.writerIndex(3);
+ copy2.writerIndex(3);
+ assertEquals(0, ByteBufUtil.compare(copy, composite));
+ assertEquals(0, ByteBufUtil.compare(copy2, composite));
+ assertEquals(0, ByteBufUtil.compare(copy, copy2));
+ }
+
+ @Test
+ public void testCopyingToOutputStream() throws IOException {
+ ByteBuf buf1 = Unpooled.directBuffer(10);
+ ByteBuf buf2 = Unpooled.buffer(10);
+ ByteBuf buf3 = Unpooled.directBuffer(10);
+ buf1.writeBytes("a".getBytes(Charset.defaultCharset()));
+ buf2.writeBytes("b".getBytes(Charset.defaultCharset()));
+ buf3.writeBytes("c".getBytes(Charset.defaultCharset()));
+ ByteBuf composite = unmodifiableBuffer(buf1, buf2, buf3);
+ ByteBuf copy = Unpooled.directBuffer(3);
+ ByteBuf copy2 = Unpooled.buffer(3);
+ composite.getBytes(0, new ByteBufOutputStream(copy), 3);
+ composite.getBytes(0, new ByteBufOutputStream(copy2), 3);
+ assertEquals(0, ByteBufUtil.compare(copy, composite));
+ assertEquals(0, ByteBufUtil.compare(copy2, composite));
+ assertEquals(0, ByteBufUtil.compare(copy, copy2));
+ }
+
+ @Test
+ public void testExtractNioBuffers() {
+ ByteBuf buf1 = Unpooled.directBuffer(10);
+ ByteBuf buf2 = Unpooled.buffer(10);
+ ByteBuf buf3 = Unpooled.directBuffer(10);
+ buf1.writeBytes("a".getBytes(Charset.defaultCharset()));
+ buf2.writeBytes("b".getBytes(Charset.defaultCharset()));
+ buf3.writeBytes("c".getBytes(Charset.defaultCharset()));
+ ByteBuf composite = unmodifiableBuffer(buf1, buf2, buf3);
+ ByteBuffer[] byteBuffers = composite.nioBuffers(0, 3);
+ assertEquals(3, byteBuffers.length);
+ assertEquals(1, byteBuffers[0].limit());
+ assertEquals(1, byteBuffers[1].limit());
+ assertEquals(1, byteBuffers[2].limit());
+ }
}
| val | train | 2015-06-24T21:09:39 | 2015-06-26T19:37:21Z | louiscryan | val |
netty/netty/3922_3923 | netty/netty | netty/netty/3922 | netty/netty/3923 | [
"timestamp(timedelta=16218.0, similarity=0.8750788523995916)"
] | 46b2eff75701f6ffe3a12b926ccef6764b77d2cc | 8ac43e52e08c0c5e22448c606deb30ca70ba0efc | [
"@0mok - Should this be closed?\n",
"Yes.\n"
] | [] | 2015-06-29T14:24:38Z | [] | `OpenSslServerContext.setTicketKeys` does not works in 3.x | There are invalid check arguments in 'OpenSslServerContext.setTicketKeys'.
```
public void setTicketKeys(byte[] keys) {
if (keys != null) {
throw new NullPointerException("keys");
}
```
| [
"src/main/java/org/jboss/netty/handler/ssl/OpenSslServerContext.java"
] | [
"src/main/java/org/jboss/netty/handler/ssl/OpenSslServerContext.java"
] | [] | diff --git a/src/main/java/org/jboss/netty/handler/ssl/OpenSslServerContext.java b/src/main/java/org/jboss/netty/handler/ssl/OpenSslServerContext.java
index 15a15ac9656..f6e7b3ef7ab 100644
--- a/src/main/java/org/jboss/netty/handler/ssl/OpenSslServerContext.java
+++ b/src/main/java/org/jboss/netty/handler/ssl/OpenSslServerContext.java
@@ -331,7 +331,7 @@ public SSLEngine newEngine(String peerHost, int peerPort) {
* Sets the SSL session ticket keys of this context.
*/
public void setTicketKeys(byte[] keys) {
- if (keys != null) {
+ if (keys == null) {
throw new NullPointerException("keys");
}
SSLContext.setSessionTicketKeys(ctx, keys);
| null | train | train | 2015-05-12T11:23:56 | 2015-06-29T14:19:35Z | 0mok | val |
netty/netty/3896_3930 | netty/netty | netty/netty/3896 | netty/netty/3930 | [
"timestamp(timedelta=48.0, similarity=0.8425013773895875)"
] | bb0b86ce50073a7de7f8f5918774b7c76bf4f99f | 4151eaa5b30fd96deaff7d2770f78e3451c58ee5 | [
"@sefler1987 I think you are right... We should call duplicate on the buffer first. Let me fix it\n",
"Thank you so much~\n",
"Fixed by https://github.com/netty/netty/pull/3930\n"
] | [] | 2015-07-03T12:14:17Z | [
"defect"
] | "ByteBuf copiedBuffer(ByteBuffer buffer)" in Netty 4.0.24 is NOT thread safe | I'm not sure why this static method is not thread safe, but it bothers me a lot during the development. As the following code shows, the method firstly copy the position out and then set it back after copying. If there are two threads doing the same thing, the position may be stained.
```
public static ByteBuf copiedBuffer(ByteBuffer buffer) {
int length = buffer.remaining();
if (length == 0) {
return EMPTY_BUFFER;
}
byte[] copy = new byte[length];
int position = buffer.position();
try {
buffer.get(copy);
} finally {
buffer.position(position);
}
return wrappedBuffer(copy).order(buffer.order());
}
```
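The fix Netty eventually applied (see the gold patch below) uses ByteBuffer.duplicate(), which shares the content but carries an independent position/limit, so the caller's buffer indices are never mutated. A self-contained sketch with plain JDK types and an illustrative class name:

```java
import java.nio.ByteBuffer;

public class CopySketch {
    // Thread-safe with respect to the source buffer's indices: each duplicate()
    // shares the bytes but has its own position/limit, so concurrent copies
    // never touch the original buffer's position.
    static byte[] copyRemaining(ByteBuffer buffer) {
        ByteBuffer duplicate = buffer.duplicate();
        byte[] copy = new byte[duplicate.remaining()];
        duplicate.get(copy);
        return copy;
    }

    public static void main(String[] args) {
        ByteBuffer src = ByteBuffer.wrap(new byte[]{1, 2, 3, 4});
        src.position(1);
        byte[] copy = copyRemaining(src);
        System.out.println(copy.length);    // 3
        System.out.println(src.position()); // still 1
    }
}
```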
| [
"buffer/src/main/java/io/netty/buffer/Unpooled.java"
] | [
"buffer/src/main/java/io/netty/buffer/Unpooled.java"
] | [] | diff --git a/buffer/src/main/java/io/netty/buffer/Unpooled.java b/buffer/src/main/java/io/netty/buffer/Unpooled.java
index 0c071071cbc..742c17f49cb 100644
--- a/buffer/src/main/java/io/netty/buffer/Unpooled.java
+++ b/buffer/src/main/java/io/netty/buffer/Unpooled.java
@@ -392,13 +392,11 @@ public static ByteBuf copiedBuffer(ByteBuffer buffer) {
return EMPTY_BUFFER;
}
byte[] copy = new byte[length];
- int position = buffer.position();
- try {
- buffer.get(copy);
- } finally {
- buffer.position(position);
- }
- return wrappedBuffer(copy).order(buffer.order());
+ // Duplicate the buffer so we not adjust the position during our get operation.
+ // See https://github.com/netty/netty/issues/3896
+ ByteBuffer duplicate = buffer.duplicate();
+ duplicate.get(copy);
+ return wrappedBuffer(copy).order(duplicate.order());
}
/**
@@ -561,11 +559,11 @@ public static ByteBuf copiedBuffer(ByteBuffer... buffers) {
byte[] mergedArray = new byte[length];
for (int i = 0, j = 0; i < buffers.length; i ++) {
- ByteBuffer b = buffers[i];
+ // Duplicate the buffer so we not adjust the position during our get operation.
+ // See https://github.com/netty/netty/issues/3896
+ ByteBuffer b = buffers[i].duplicate();
int bLen = b.remaining();
- int oldPos = b.position();
b.get(mergedArray, j, bLen);
- b.position(oldPos);
j += bLen;
}
| null | test | train | 2015-06-27T21:21:36 | 2015-06-18T04:59:02Z | seflerZ | val |
netty/netty/3780_3931 | netty/netty | netty/netty/3780 | netty/netty/3931 | [
"timestamp(timedelta=37.0, similarity=0.8435129399591724)"
] | 287ac6d328a23ae5dd098206dd7c8df9f0e2d0a1 | ff1bdda2bbb7226e2ebfd6e50bc30400fd2e5822 | [
"@zoowar the question is how we should propagate here as if initChannel(...) throws an exception chances are good that you will not have an handler in the pipeline that can handle the exception.\n",
"@normanmaurer When there is no exceptionCaught implemented by any of the pipeline handlers, the framework handles ... | [
"Instantiate this exception outside the channel scope so you can do `assertSame` at the end of the test?\n"
] | 2015-07-03T12:15:52Z | [
"defect"
] | ChannelInitializer does not propagate Exceptions | Netty version: 5.0.0.Alpha2 (but others as well)
Context:
When an exception occurs in a user defined initChannel, it is not propagated through the framework so that it can be handled with exceptionCaught. Instead, channelRegistered logs a warning and dumps a stack trace.
Steps to reproduce:
Throw a generic exception in an initChannel from the netty examples should do the trick.
| [
"transport/src/main/java/io/netty/channel/ChannelInitializer.java"
] | [
"transport/src/main/java/io/netty/channel/ChannelInitializer.java"
] | [
"transport/src/test/java/io/netty/channel/DefaultChannelPipelineTest.java"
] | diff --git a/transport/src/main/java/io/netty/channel/ChannelInitializer.java b/transport/src/main/java/io/netty/channel/ChannelInitializer.java
index f346f553cda..5db2c291805 100644
--- a/transport/src/main/java/io/netty/channel/ChannelInitializer.java
+++ b/transport/src/main/java/io/netty/channel/ChannelInitializer.java
@@ -56,29 +56,33 @@ public abstract class ChannelInitializer<C extends Channel> extends ChannelInbou
* will be removed from the {@link ChannelPipeline} of the {@link Channel}.
*
* @param ch the {@link Channel} which was registered.
- * @throws Exception is thrown if an error occurs. In that case the {@link Channel} will be closed.
+ * @throws Exception is thrown if an error occurs. In that case it will be handled by
+ * {@link #exceptionCaught(ChannelHandlerContext, Throwable)} which will by default close
+ * the {@link Channel}.
*/
protected abstract void initChannel(C ch) throws Exception;
@Override
@SuppressWarnings("unchecked")
public final void channelRegistered(ChannelHandlerContext ctx) throws Exception {
- ChannelPipeline pipeline = ctx.pipeline();
- boolean success = false;
+ initChannel((C) ctx.channel());
+ ctx.pipeline().remove(this);
+ ctx.fireChannelRegistered();
+ }
+
+ /**
+ * Handle the {@link Throwable} by logging and closing the {@link Channel}. Sub-classes may override this.
+ */
+ @Override
+ public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
+ logger.warn("Failed to initialize a channel. Closing: " + ctx.channel(), cause);
try {
- initChannel((C) ctx.channel());
- pipeline.remove(this);
- ctx.fireChannelRegistered();
- success = true;
- } catch (Throwable t) {
- logger.warn("Failed to initialize a channel. Closing: " + ctx.channel(), t);
- } finally {
+ ChannelPipeline pipeline = ctx.pipeline();
if (pipeline.context(this) != null) {
pipeline.remove(this);
}
- if (!success) {
- ctx.close();
- }
+ } finally {
+ ctx.close();
}
}
}
| diff --git a/transport/src/test/java/io/netty/channel/DefaultChannelPipelineTest.java b/transport/src/test/java/io/netty/channel/DefaultChannelPipelineTest.java
index 40749b59db5..11503bd90fe 100644
--- a/transport/src/test/java/io/netty/channel/DefaultChannelPipelineTest.java
+++ b/transport/src/test/java/io/netty/channel/DefaultChannelPipelineTest.java
@@ -21,6 +21,7 @@
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandler.Sharable;
+import io.netty.channel.embedded.EmbeddedChannel;
import io.netty.channel.local.LocalAddress;
import io.netty.channel.local.LocalChannel;
import io.netty.channel.local.LocalEventLoopGroup;
@@ -535,6 +536,29 @@ public void testLastHandlerEmptyPipeline() throws Exception {
assertNull(pipeline.last());
}
+ @Test(timeout = 5000)
+ public void testChannelInitializerException() throws Exception {
+ final IllegalStateException exception = new IllegalStateException();
+ final AtomicReference<Throwable> error = new AtomicReference<Throwable>();
+ final CountDownLatch latch = new CountDownLatch(1);
+ EmbeddedChannel channel = new EmbeddedChannel(new ChannelInitializer<Channel>() {
+ @Override
+ protected void initChannel(Channel ch) throws Exception {
+ throw exception;
+ }
+
+ @Override
+ public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
+ super.exceptionCaught(ctx, cause);
+ error.set(cause);
+ latch.countDown();
+ }
+ });
+ latch.await();
+ assertFalse(channel.isActive());
+ assertSame(exception, error.get());
+ }
+
private static int next(AbstractChannelHandlerContext ctx) {
AbstractChannelHandlerContext next = ctx.next;
if (next == null) {
| test | train | 2015-07-07T08:47:18 | 2015-05-13T17:53:16Z | zoowar | val |