[1119.94 --> 1128.20] And even over multiplexed requests, you know, all of those requests and responses share the same state tables.
[1128.86 --> 1133.86] So it adds an additional layer of complexity that just didn't exist previously.
[1133.86 --> 1134.06] Wasn't there before.
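To make "share the same state tables" concrete, here is a toy sketch of a shared dynamic header table. This is not the real HPACK wire format; it only illustrates that the encoder and decoder on a connection must keep identical tables, and that every multiplexed stream on that connection reads from and writes to them.

```js
// Toy illustration (not the actual HPACK encoding) of why header
// compression state is shared per connection: every stream on the
// connection reads and writes the same dynamic table.
class DynamicHeaderTable {
  constructor() {
    this.entries = []; // newest first, like HPACK's dynamic table
  }
  // Encoder side: reuse an index if this header was seen before,
  // otherwise insert it into the shared table and send it literally.
  encode(name, value) {
    const idx = this.entries.findIndex(
      (e) => e.name === name && e.value === value
    );
    if (idx !== -1) return { indexed: idx };
    this.entries.unshift({ name, value });
    return { literal: [name, value] };
  }
  // Decoder side: must apply the exact same insertions in the exact
  // same order, or every later stream decodes the wrong headers.
  decode(field) {
    if ('indexed' in field) return this.entries[field.indexed];
    const [name, value] = field.literal;
    this.entries.unshift({ name, value });
    return { name, value };
  }
}

// One table per connection direction -- streams 1, 3, 5... all share it.
const table = new DynamicHeaderTable();
console.log(table.encode('user-agent', 'demo/1.0')); // literal on stream 1
console.log(table.encode('user-agent', 'demo/1.0')); // indexed on stream 3
```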
[1135.18 --> 1138.86] And personally, I don't think it was needed, right?
[1138.90 --> 1140.12] I think that there were other ways.
[1140.12 --> 1140.32] It could have been done differently.
[1140.32 --> 1144.90] I actually, you know, like I said, I worked on the spec.
[1145.08 --> 1145.24] Right.
[1145.24 --> 1145.92] I was one of the co-authors.
[1146.10 --> 1154.94] And I had a proposal for just using a more efficient binary encoding, you know, of certain headers like dates, right?
[1155.00 --> 1160.82] Or instead of, you know, representing numbers as text, representing them, you know, as binary, right?
[1160.82 --> 1160.86] Right.
[1162.98 --> 1172.24] The compression ratios weren't as good, but you could transmit that data without incurring the cost of managing the state, right?
[1172.26 --> 1178.76] So it would be just like what H1 has today where you're still sending it every time, but you're sending less every time.
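The conversation doesn't show the actual encoding from that proposal, so the following is only a hypothetical sketch of the idea: keep sending every header on every request, but use a compact binary form for well-known value types such as dates and numbers. The field choices and layout here are invented for illustration.

```js
// Hypothetical stateless encoding sketch: no shared tables, just a
// tighter binary representation of values that are otherwise ASCII text.

// "date: Tue, 15 Nov 1994 08:12:31 GMT" is ~29 bytes of text;
// a 64-bit millisecond timestamp is 8 bytes.
function encodeDate(dateValue) {
  const buf = Buffer.alloc(8);
  buf.writeBigUInt64BE(BigInt(new Date(dateValue).getTime()));
  return buf;
}

// "content-length: 1048576" carries 7 bytes of ASCII digits;
// a 32-bit unsigned integer is always 4 bytes.
function encodeLength(n) {
  const buf = Buffer.alloc(4);
  buf.writeUInt32BE(n);
  return buf;
}

const textDate = 'Tue, 15 Nov 1994 08:12:31 GMT';
console.log(Buffer.byteLength(textDate), encodeDate(textDate).length); // 29 8
console.log(String(1048576).length, encodeLength(1048576).length);     // 7 4
```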
[1179.96 --> 1181.60] Makes sense to shrink it rather than...
[1181.60 --> 1181.82] Right.
[1181.94 --> 1182.76] Shrink it, yeah.
[1183.06 --> 1183.80] Rather than adding the state.
[1183.80 --> 1188.56] I kind of agree with you on the state because it seems like it's adding this extra layer of like...
[1188.56 --> 1188.74] Right.
[1189.10 --> 1191.72] It's almost like somebody shakes your hand and doesn't let it go.
[1192.04 --> 1192.56] Well, yeah.
[1192.66 --> 1194.44] And in a lot of ways, that's exactly what it is.
[1194.96 --> 1199.74] Now, Google has a ton of experience with SPDY, right?
[1199.74 --> 1205.20] And, you know, a lot of what's in HTTP/2 came out of the experience, you know, came out of the work that Google did on SPDY.
[1205.38 --> 1209.28] And I have a huge amount of respect for everything that they did and provided.
[1210.12 --> 1211.86] HPACK also came out of Google.
[1211.86 --> 1217.22] So they did a ton of research in terms of what would work, right?
[1217.32 --> 1224.00] And they had concluded that stateful header compression was the only way to get the, you know, like real benefits out of H2.
[1225.04 --> 1227.30] You know, I disagreed with some of those conclusions.
[1227.56 --> 1231.06] But, you know, the working group decided, you know what, this is what we're going to move forward with.
[1231.10 --> 1232.40] And that's what they did.
[1232.54 --> 1234.76] And at this point, it's like, I don't like it.
[1235.36 --> 1237.52] But, you know, that's what it is.
[1237.78 --> 1240.12] And, you know, that's what we're moving forward on.
[1240.12 --> 1250.58] So some of the other things there, in terms of, like, additional complexity: H2 has its own flow control, has its own prioritization.
[1250.80 --> 1253.04] You can have streams depend on other streams.
[1253.14 --> 1257.30] And when you set the priority on one, it, you know, sets the priority for the entire graph.
[1257.30 --> 1263.96] You know, there's just a lot there that just doesn't exist in H1, right?
[1264.10 --> 1268.24] That, you know, how much of that do we expose to developers, right?
[1268.34 --> 1269.30] Like, you know, in Node.
[1269.48 --> 1271.22] We have to provide an API for all this stuff.
[1271.66 --> 1273.30] Do we provide an API for flow control?
[1273.90 --> 1275.74] That doesn't exist in Node currently, right?
[1275.88 --> 1277.96] I mean, how would we even do that in a way that's efficient?
[1278.82 --> 1282.06] And for prioritization, what kind of APIs do we do there?
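For a sense of what exposing those knobs can look like, here is a sketch roughly in the shape of Node's http2 module: an initial flow-control window advertised via SETTINGS, and a per-stream priority call against the dependency graph. Treat the exact option and method names as illustrative; the API surface has shifted between Node versions.

```js
// Sketch of flow-control and prioritization knobs, roughly following
// the node:http2 API; names are illustrative, not a spec of Node core.
const http2 = require('node:http2');

const server = http2.createServer({
  // Flow control: per-stream window advertised to the peer via SETTINGS.
  settings: { initialWindowSize: 1024 * 1024 },
});

server.on('stream', (stream, headers) => {
  // Prioritization: reweight this stream within the dependency graph.
  stream.priority({ weight: 256, exclusive: false });

  stream.respond({ ':status': 200, 'content-type': 'text/plain' });
  stream.end('hello over h2\n');
});

server.listen(8080);
```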
[1282.06 --> 1293.28] This additional complexity is something that, as Node core looking at this, we have to decide how much of that do we pass on to the user versus how much of that do we do ourselves.
[1293.66 --> 1301.18] If we do it all ourselves, we're providing fewer knobs for, you know, the users to turn, to tune things.
[1301.56 --> 1305.68] And we're making it less interesting for them because we're hiding some of those features.
[1305.88 --> 1306.92] We're hiding those capabilities.
[1306.92 --> 1310.16] And is that the right thing to do, right?
[1310.30 --> 1316.34] So the additional complexity kind of, you know, it's not something we can easily deal with.
[1316.42 --> 1317.92] It's something we have to kind of...
[1318.54 --> 1319.34] It's right there in your face.
[1319.34 --> 1320.28] Right there in your face.
[1320.44 --> 1321.42] You have to do something about it.
[1321.98 --> 1326.20] So stateless compression, that's one thing.
[1328.10 --> 1329.58] Maybe give me the flip side of that.
[1329.58 --> 1336.46] Like what's, I guess you've already kind of described it a bit with the complexity, but what's the worst that could happen?
[1336.46 --> 1341.00] The server affinity issue is actually the biggest issue here.
[1342.00 --> 1349.70] A lot of the proxy software vendors had some real significant problems with H2 as it was being defined.
[1350.08 --> 1353.88] And you had a lot of criticism being put forth.
[1354.22 --> 1359.44] I can't remember his name, but the author of, I believe, the Varnish proxy was very public...
[1359.44 --> 1359.66] Yeah.
[1360.30 --> 1362.78] ...about his discontent with the protocol.
[1363.82 --> 1370.28] Because of the binary framing and the way the headers are actually, you know, transmitted, right?
[1370.28 --> 1381.14] You can't do what a lot of the proxies do currently, which is just kind of read the first few lines, determine, you know, where you're going to route that thing to, then stop and just forward it on.
[1381.34 --> 1381.60] Right?
[1382.22 --> 1385.28] Which is a super efficient way of doing it.
[1385.28 --> 1388.96] With H2, you have to process the entire block of headers, right?
[1389.04 --> 1392.10] Then make the determination of whether you're going to do anything with it or not.
[1392.20 --> 1398.52] At that point, you basically have to terminate that connection and open another connection, you know, to your backend.
[1398.86 --> 1403.72] So that proxy actually ends up with four state tables for compression.
[1403.72 --> 1411.98] And then a lot more stuff that they're having to do that existing proxy middleware currently doesn't have to do, right?
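As a rough illustration of the peek-and-forward trick being described, here is a minimal HTTP/1.x TCP proxy sketch: read just enough to see the request line, choose a backend, then pipe the rest of the bytes untouched. The backend addresses and routing rule are made up for the example, and it assumes the request line arrives in the first chunk.

```js
// Minimal H1 peek-and-forward proxy sketch (illustrative only).
const net = require('node:net');

const backends = {
  api: { host: '127.0.0.1', port: 9001 },
  default: { host: '127.0.0.1', port: 9000 },
};

net.createServer((client) => {
  client.once('data', (chunk) => {
    client.pause(); // hold further data until we've picked a backend

    // e.g. "GET /api/users HTTP/1.1\r\n..." -- the path is right there
    // in plain text before any headers have been parsed.
    const [, path = '/'] = chunk.toString('latin1').split(' ');
    const target = path.startsWith('/api') ? backends.api : backends.default;

    const upstream = net.connect(target, () => {
      upstream.write(chunk);              // replay the bytes we peeked at
      client.pipe(upstream).pipe(client); // then just shovel bytes both ways
    });
    upstream.on('error', () => client.destroy());
  });
  client.on('error', () => {});
}).listen(8080);

// With HTTP/2 this shortcut disappears: the proxy has to read and
// HPACK-decode the whole HEADERS block (maintaining compression state
// for both sides) before it even knows where the request should go.
```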
[1412.32 --> 1413.86] So, you know.
[1414.12 --> 1415.16] I can see why you're against it.
[1415.54 --> 1416.70] Well, you know, it's.
[1417.26 --> 1418.00] It could have just gone the other way.
[1418.10 --> 1419.96] It would have just shrunk it instead of...
[1420.50 --> 1422.52] You'd send the same thing back and forth, but just shrink it.
[1422.86 --> 1425.68] It added, you know, it added a lot of complexity.
[1426.40 --> 1427.92] What are the plus sides of this complexity?
[1428.08 --> 1430.68] Like, you're talking about the bad side, but what's the performance?
[1431.24 --> 1431.60] Performance.
[1431.60 --> 1435.02] It's using that socket much more efficiently.
[1436.10 --> 1443.40] You know, I was doing a peak load benchmark here the other day with, you know, just a development image of H2 in core.
[1444.18 --> 1446.92] I was throwing 100,000 requests at the server.
[1447.48 --> 1451.88] There were 50 concurrent clients going over eight threads, right?
[1451.96 --> 1456.20] So just as much, just throw a bunch of stuff at the server and see what happens, right?
[1456.22 --> 1457.46] See how quickly it can respond.
[1457.46 --> 1465.16] With the H1 implementation in core currently, I was able to get 21,000 requests per second doing that.
[1465.44 --> 1468.18] But 15% of them just failed, right?
[1468.28 --> 1470.88] Where Node just didn't respond, right?
[1470.88 --> 1474.58] And a lot of that has to do with the fact that I was running the test on OSX.
[1474.68 --> 1479.22] There's some issues there with assigning threads, you know, how quickly it can assign threads.
[1479.52 --> 1484.48] And, you know, when we get an extreme high load, you can run into some issues.
[1485.30 --> 1488.90] With H2, I was able to get 18,000 requests per second.
[1488.90 --> 1492.94] So a lower transaction rate, but 100% of them succeeded, right?
[1493.40 --> 1496.18] And it was using fewer sockets.
[1496.36 --> 1497.46] Now, it was keeping them open longer.
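The benchmark tooling isn't named in the conversation, so purely as an illustrative sketch, a peak-load test like the one described could be approximated with the node:http2 client: a fixed request count, a capped level of concurrency, and a tally of failures. The URL and numbers below are placeholders, not the original setup.

```js
// Rough load-test sketch against an h2 server using the node:http2 client.
const http2 = require('node:http2');

const TOTAL = 100_000;     // total requests to send
const CONCURRENCY = 50;    // requests in flight at once
const session = http2.connect('http://localhost:8080');
session.on('error', (err) => console.error('session error:', err.message));

let started = 0;
let done = 0;
let failed = 0;
const t0 = process.hrtime.bigint();

function fire() {
  if (started >= TOTAL) return;
  started++;
  const req = session.request({ ':path': '/' });
  req.resume();                         // drain response data so the stream closes
  req.on('error', () => { failed++; }); // count streams the server never answered
  req.on('close', () => {
    done++;
    if (done === TOTAL) {
      const secs = Number(process.hrtime.bigint() - t0) / 1e9;
      console.log(`${(TOTAL / secs).toFixed(0)} req/s, ${failed} failed`);
      session.close();
    } else {
      fire();
    }
  });
  req.end();
}

for (let i = 0; i < CONCURRENCY; i++) fire();
```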