{"_id":"q-en-7710bis-228772fd3523d0efa7c7bd31a958f920ac1e066c02e23add7c5c43e8bc26a47b","text":"Note that I do need you both to confirm that you have fulfilled IPR obligations.\nThank you!"} {"_id":"q-en-ack-frequency-1de67cffa29ef23e6568c3b08a7e28194496055a0a102719d311a0e06436c7d7","text":"Connection Migration .... \" it sends or it can simply send an IMMEDIATE_ACK frame\" What is the intended difference between the words \"it sends\" or \"it can simply send\"?"} {"_id":"q-en-ack-frequency-f6fca47092480459514087e075a264c3357be34c9934869b0402edb94dda02ca","text":"Provides more details on why excluding maxackdelay when Ignore Order is used may cause premature PTOs.\nI'm merging this -- I think this is good, but if we need to add more cautionary text, we can still do that.\nWhen the number of in-flight ack-eliciting packets is larger than the ACK-Eliciting Threshold, an endpoint can expect that the peer will not need to wait for its maxackdelay period before sending an acknowledgement. In such cases, the endpoint MAY therefore exclude the peer's 'maxackdelay' from its PTO calculation. Note that this optimization requires some care in implementation, since it can cause premature PTOs under packet loss when ignoreorder is enabled. Isn't this oversimplifying things so that the aspect of time until an ACK is expected to be sent is removed? To me it is not that maxackdelay is removed completely from URL PTO = smoothedrtt + max(4*rttvar, kGranularity) + maxackdelay Instead it is replaced with expected time until the sender have sent ACK-Eliciting Threshold number of packets when that is less than maxackdelay that is configured. Secondly, are you expecting the sender to make an estimate of how many packets more are received before an ACK is sent, i.e. care about what phase it is currently when calculating the PTO in regards to which packet is expected to trigger the next ACK? 
However, that is clearly prone to be wrong if there is a loss occurring currently whose loss-indicating ack has not yet been received by the sender. So it appears to me that this allowance is by default creating a situation which will cause premature PTOs, and it would be better to put some restrictions on it.\nOn the second point, the sender always knows how many packets are outstanding. So, it always knows whether the PTO computation -- which is always on the latest sent packet -- should include max_ack_delay or not. That said, you are right that ignore_order causes trouble here, so how about \"MAY therefore exclude the peer's 'max_ack_delay' from its PTO calculation unless ignore_order is enabled. When ignore_order is enabled, premature PTOs are significantly more likely under packet loss.\""} {"_id":"q-en-ack-frequency-f4a23e2a6dcb8c58cfaf3e723efc7f0b0856391fa770fa1b8b4e932cca10ff45","text":"I tried to work in what I thought were the important parts of Gorry's suggested text, PTAL if I captured everything.\nIt's also true that for many RF media, the performance of the forward path can be impacted by the rate of acknowledgment packets when the link bandwidth is shared.\n“Severely asymmetric link technologies, such as DOCSIS, LTE, and satellite links, connection throughput in the data direction becomes constrained when the reverse bandwidth is filled by acknowledgment packets. When traversing such links, reducing the number of acknowledgments allows connection throughput to scale much further.” I’m going to push back again on this as largely wrong. Links such as DOCSIS, LTE, satellite and WiFi are asymmetric. But it’s not just that the design decisions made for these types of link resulted in asymmetric capacity (throughput limit), the designs also share the capacity, and it is important to mitigate the impact on that sharing of capacity, rather than purely thinking of it in terms of bytes/sec of capacity used.
A lower feedback rate can help avoid inducing starvation of other flows that share resources along the path they use. On mobile devices, sending data consumes RF and battery power and is more costly for a host of reasons, not just asymmetry when present.\nNAME -- I don't follow why you say that the statement is wrong. It seems to me that you think that the statement is inadequate. Would you mind responding here with some text you would prefer instead, and I'm happy to wordsmith and turn it into a PR?\nI suggest some text something like: For some link technologies (e.g., DOCSIS, LTE, WiFi, and satellite links), transmission of acknowledgment packets contributes to link utilization and can incur significant forwarding costs. In some cases, the connection throughput in the data direction can become constrained when the reverse bandwidth is filled by acknowledgment packets. In other cases, the rate of acknowledgment packets can impact link efficiency (e.g., transmission opportunities or battery life). A lower ACK frequency can be beneficial when using paths that include such links.\nNAME -- Is the text proposal in the issue enough for you to make a PR?\nThis is much better.
In Section 4: In Section 7: The intended definition is the one in Section 4, so fixes and makes this consistent.\nThe document uses MUST in multiple places; however, as discussed in the new proposed text for the security considerations (see PR ), the requester cannot enforce any ACK strategy, and there might be reasons, like practical limitations or an overload situation, where the ACK sender simply decides to send fewer ACKs. So the question is really: can the requirements be MUSTs, or is SHOULD more realistic and correct?
It also simplifies the logic and splits it out into its own resource."} {"_id":"q-en-acme-1472af68817dc1dc5a8d76f64ea12430ed448f1c01bb345116de59d4b85efece","text":"what's the difference between \"deactivated\" and \"revoked\"?\nDeactivated is done by the user; revoked is done by an administrator.\nOK, so we need to either say that or use clearer terms :) I talked with NAME offline, and I think he's going to propose something."} {"_id":"q-en-acme-31cc97c68018fe7c33a6cd17d810c4ba10dfb47b7fc1c264711f9aaab02efc3c","text":"Based on the acme mailing list discussion[0] on \"proactive\" vs \"on-finalization\" issuance it seems there is more support for the proactive approach. Speaking on behalf of Let's Encrypt and the Boulder developers we're willing to compromise and support proactive issuance as the sole issuance method in order to simplify the protocol & implementations. This PR changes the language around certificate issuance to reflect that the server MUST issue the certificate proactively once all the required challenges for an application have been fulfilled. The open issue corresponding to making this decision is removed now that there is one supported option. In an effort to close out open questions around applications this commit also removes the open issue around multiple certificates per application. If there is a demand for automating repeat issuance there should be a new thread started with a concrete proposal. 0: URL\nNAME apologies! I missed your comments until today. Both are resolved in URL Thank you for the feedback.\nOh! They were fresh and I misread the dates. Not such a bad turn around after all :-)\nNAME Fantastic, thanks. Merging. Hi folks, I wanted to revisit a discussion on the server proactively issuing certificates. Section 4, \"Protocol Overview\"[0] currently has a paragraph that reads: the the application be to the There is also an \"Open Issue\" remark related to proactive issuance in Section [1]. 
I think the specification should decide whether proactive issuance or on-finalization issuance should be used instead of allowing both with no indication to the client which will happen. It's the preference of Let's Encrypt that the specification only support on-finalization issuance. This is cheaper to implement and seems to be the path of least surprise. It also seems to better support large volume issuance when the client may want explicit control over when the certificates for multiple applications are issued. In the earlier \"Re: [Acme] Preconditions\" thread[2] Andrew Ayer seems to agree that there should be only one issuance method to simplify client behaviours, though he favours proactive issuance instead of on-finalization issuance. While it saves a round-trip message I'm not sure that this alone is convincing enough to choose proactive issuance as the one issuance method. What are the thoughts of other list members? Daniel/cpu [0] URL [1] URL [2] URL"} {"_id":"q-en-acme-bcc1584545e5fb32cc44ef6a89005bcd4b079539f8d77f598ab5ed8c37ac7245","text":"In the term \"registration\" was changed to \"account\". In and in some of the remaining references to \"registration\" were removed, but some 'new-reg' and references to \"registration\" remained. This PR removes, hopefully, the last references to \"registration\"."} {"_id":"q-en-acme-8a84ab0d6d27677388c00e4a94e3debeb2b4fda718309250517ce58a20d85036","text":"This PR is a first go at addressing the following points I raised on the mailing list.\nThis allows us to address the account URL recovery use case in without having special handling for empty payloads."} {"_id":"q-en-acme-e3c2800e0f9ce09a90b5c7442ff7cc78755a2626128ad7b961b398377693b28a","text":"This change replaces \"shortly\" in the normative language of Section 7.4, Applying for Certificate Issuance, with the more precise expectation that the server begin the issuance process, as discussed on the mailing list. 
URL We could say \"MUST begin the issuance process\" The main things on my mind that could delay issuance slightly: Submitting to CT Checking CAA Internal queuing for available capacity Manual vetting I think \"MUST begin\" covers for all of those, while allowing some vagueness as to how long they will take. On 03/22/2017 09:39 AM, Daniel McCarney wrote:"} {"_id":"q-en-acme-d2f8ed9d3749c2a35a7189051fcb67e11aef9991fc26dbc9c555e610f90a6213","text":"Is 200 really correct? This seems like a 404 maybe? Also we should specify the body (empty).\nWould a problem document be appropriate (with either a pre-existing ACME error type or a new one like \"accountDoesNotExist\")?\n404 talks about the effective request URI, so that's not going to work. I think that a 400 would be better on the basis that the request checked, but failed (NAME suggestion is a fine enhancement). The reason you make this request is to check; and having a 4xx series code is easier to test than the absence of a header field.\nThanks NAME and NAME implemented this in the latest commit.\nAs discussed; this is fine."} {"_id":"q-en-acme-0355d49974cb216d1557a7d3ddc85dea09a806057e8a2261ccf7fe52f13d56df","text":"It makes sense to cancel any pending cert orders when an account is deactivated, where the cert has not been issued yet."} {"_id":"q-en-acme-b3d4ef6c46e9ba8e55903c2a697f116c66928f3be01f494a10876183a534a925","text":"Previously there was some notion of being able to hint to the server whether you wanted to reuse authorizations. This was removed from the spec but the pre-authorization flow still made a reference to it. This commit removes that obsolete reference. Thanks to NAME for pointing out the error. 
Resolves URL\nHi, in the section on pre-authz there is this sentence: To request authorization for an identifier, the client sends a POST request to the new-authorization resource specifying the identifier for which authorization is being requested and how the server should behave with respect to existing authorizations for this identifier. The part \"and how the server should behave with respect to existing authorizations for this identifier\" is probably obsolete.\nHi NAME I submitted URL to fix this. Thanks for spotting & reporting :-)
This commit clarifies that the \"newAuthz\" field of the directory is optional in Section 7.1.1 \"Directory\", where a developer is likely to look to understand which resources must be supported/present in the directory and which are optional (\"meta\", \"newAuthz\"). If the server doesn't support pre-authorization, it MUST omit the \"newAuthz\" field.
Fixed.\nPer offline review, I added back in a sentence that contained a MUST:"} {"_id":"q-en-acme-ba2984ce9ab3724ed14c7a07306fbc07dc630e4844d969fa17b4d1aa73a161aa","text":"On the mailing list, NAME and NAME pointed out some risks arising from the default virtual host behavior of some web servers. This PR integrates the changes they suggest, namely (1) removing the \"tls\" option from the challenge, and (2) adding iterations to the challenge. This PR is based on top of and .\nSorry, feel free to tell me to go away, but reading the backing decision to drop HTTPS for SimpleHTTP doesn't make sense to me. If I understand everything correctly, you control the originator of the HTTPS requests and the requests are always sent to domains. So why can't you just enforce always sending the Host header, so that this wouldn't be an issue?\nFrom : behaviour that if a default TLS/HTTPS virtual host is not configured explicitly, one will be selected based on the ordering of vhosts in the webserver configuration (in practice, this often amounts to alphabetical ordering of files in a configuration directory). As far as I know, Apache does the exact same thing for non-encrypted HTTP hosts (this is also acknowledged by a later mail in the thread), so the reasoning is a little weird. I suppose the idea is that hosters are more likely to have HTTP configured correctly rather than HTTPS?\nThe section on the \"\" registry says that \"Fields marked as \"client configurable\" may be included in a new-account request\". This seems like a copy-paste-o, and perhaps should say that such fields may be included in a new-order request.\nGood catch. I opened URL to fix. Thanks! Apache (and I gather nginx too) has the subtle and non-intuitive behaviour that if a default TLS/HTTPS virtual host is not configured explicitly, one will be selected based on the ordering of vhosts in the webserver configuration (in practice, this often amounts to alphabetical ordering of files in a configuration directory).
URL URL This creates a vulnerability for SimpleHTTP and DVSNI in any multiple-tenant virtual hosting environment that failed to explicitly select a default vhost [1]. The vulnerability allows one tenant (typically one with the alphabetically lowest domain name -- a situation that may be easy for an attacker to arrange) to obtain certificates for other tenants. The exact conditions for the vulnerability vary: SimpleHTTP is vulnerable if no default vhost is set explicitly, the server accepts the \"tls\": true parameter as currently specified, and the domains served over TLS/HTTPS are a strict subset of those served over HTTP. DVSNI is vulnerable if no default vhost is set explicitly, and the hosting environment provides a way for tenants to upload arbitrary certificates to be served on their vhost. James Kasten and Alex Halderman have done great work proposing a fix for DVSNI: URL That fix also has some significant performance benefits on systems with many virtual hosts, and the resulting challenge type should perhaps be called DVSNI2. The simplest fix for SimpleHTTP is to remove the \"tls\" option from the client's SimpleHTTP response, and require that the challenge be completed on port 80. URL Alternatively, the protocol could allow the server to specify which ports it will perform SimpleHTTP and SimpleHTTPS requests on; HTTPS on port 443 SHOULD be avoided unless the CA can confirm that the server has an explicitly configured default vhost. The above diffs are currently against Let's Encrypt's version of the ACME spec, we'll be happy to rebase them against the IETF version upon request. [1] though I haven't confirmed it, it's possible that sites that attempted to set a default vhost are vulnerable if they followed advice like this when doing so: URL ; an attacker need only sign up with a lexicographically lower name on that hosting platform to obtain control of the default.
Peter Eckersley EMAIL Chief Computer Scientist Tel +1 415 436 9333 x131 Electronic Frontier Foundation Fax +1 415 436 9993"} {"_id":"q-en-anima-brski-prm-07432318f6592df19cc58130a3c24601512886fb4523a2e9077277298daf5838","text":"Included in the text. Maybe will include further text for\nIn 5.5.1: This kind of suggests that an Agent may also not know the serial number. But in creating the agent-signed-data, this number is required later on. Could we clarify perhaps the Agent MUST know the serial-number; e.g. in one of these ways: configured (as in quoted text above) discovered (in \"standard\" way using mDNS / DNS-SD first part of service instance name) discovered (in vendor-specific way e.g. RF beacons) other (in vendor-specific way, e.g. scan QR code of packaging) - maybe that's the same as category 1 from the Agent point of view.\nAdditional Note: figure 4 doesn't show the part where the Agent discovers the serial number of the Pledge. This was adding to the confusion. Maybe figure 4 can stay as is; this is just to clarify that at the start of Fig 4 the Agent MUST know the serial-number. So some small link is missing to get the complete picture.\nI reopened as the initial point if the registrar-agent MUST know the serial number is not solved by the QR code reference. As the registrar-agent needs to know, which pledges he is bootstrapping, he needs the list of serial numbers in advance to be able to construct the agent signed data. Text proposed will be included into the pre-conditions described in section 5.5.1, In addition, the registrar-agent MUST know the product-serial-number(s) of the pledge(s) to be bootstrapped. The registrar-agent MAY be provided with the product-serial-number in different ways: configured, e.g., as a list of pledges to be bootstrapped via QR code scanning discovered by using standard approaches like mDNS as described in 5.4.2. 
discovered by using a vendor specific approach, e.g., RF beacons\nOk"} {"_id":"q-en-api-drafts-4bf2e8e113cb189771011164dc2f8fe5730df030834a9aa972d3b8f004989f42","text":"It seems blindingly obvious to me that an implementation would need to take care of this... and this note of caution seems quite appropriate (if unnecessary... but surely it doesn't hurt).\nDynamic framer add and passthrough (last two paragraphs of section 6.1) assume synchronous operation; what happens to bytes in the send/receive buffers when the framer stack changes? There either needs to be an explicit mutex here (allowing the framer/implementation to lock the buffers at the exact point the framer change happens, not allowing any bytes to be processed while the dynamic framer decision is made), or some way to annotate in a buffer the replay point after which a dynamic reframing is necessary. Are we going to leave this up to implementors to figure out, or should we point out that here be dragons?\nWe should clarify that these operations within the data path of the stack should be synchronized."} {"_id":"q-en-api-drafts-9ce4403a8d8791280bf4d897ab6b6c60d06149002a9cd899cfe4f38e3b51b6f3","text":"SORRY I made a PR on your PR, when I meant to leave a review comment, see 1024. .... Silly to make a PR on a PR - I added a review comment instead...\nIn section 4.1.4 in the impl draft, we have: \"The order of operations for branching, where lower numbers are acted upon first, should be: Alternate Paths Protocol Options Derived Endpoints\" However, I would say there is not really a strict ordering required. The text actually indicates that selection of the protocol is rather independent while there is usually a dependency between the interface and derived endpoint. I think this is related to my other issue about the use of the term path, because a path consists of both the local and remote endpoint and therefore there is of course a dependency. 
So in summary, I'm not sure how useful I find this whole section as currently written...\nWe should conclude whether anything more is needed here besides cleaning up the terminology, where Alternate Paths is not a good term - see .\nAny implementation should have a consistent order of operations. This is a recommended example. Reference the arch text that says it needs to work the same way each time.\nReference ICE/HE
The problem is that not all Selection Properties are of type Preference. E.g., is an enumeration, representing , or . What sense does it make if this becomes only or ? Similarly, and are of type \"Collection of (Preference, Enumeration)\", and indeed it would make sense to expect a Collection of booleans, not a singular boolean. This PR now explicitly says that, upon reading, the Preference type of a Selection Property (not the whole Selection Property!) changes into Boolean.\nMANY thanks for the proposed fix (which I committed)! I hadn't really understood the problem just from reading the issue discussion.\nAs , here follows a bit of feedback on the different Property types in the Interface draft, from the perspective of an academic developer who is currently attempting to follow the draft spec as closely as possible while trying to implement a path-aware transport system in a In , it says: At this point, the reader of the document could be expected to conclude the following: Selection Properties are a set of preferences for protocol/path selection that (obviously) can no longer be meaningfully changed once a connection has been established. Connection Properties are a set of \"run-time preferences\" for connections which could still be adjusted when a connection is already established, but should ideally already be known during protocol/path selection The text goes on to state: With this wording, the reader can now reasonably conclude that the original Selection Properties that have resulted in a given Connection should remain available during the lifetime of that Connection, perhaps in form of a copy of the original data structure. So far so straight forward. In however, this conclusion is called into question. After the text has gone on to introduce the 5 different Preference Levels, it states At this point, the reader becomes confused. 
Why is the text talking about \"many properties\" that can no longer be changed, instead of relying on the earlier, nice and clear distinction between Selection and Connection Properties? Where earlier, Selection Properties were said to remain readable, now they are only queryable? The type of a Selection Property that has been \"applied\" mutates from a Preference Level to become a Boolean? Are \"applied\" Selection Properties still Selection Properties? Shouldn't they better be referred to using a different term? So the original Selection Preferences that resulted in the current Connection are not available after all? Only the (Boolean) Properties of the chosen Protocol can be \"queried\"? But wait, the text goes on: This is a complicated way of saying that the values only reflect the actual Properties of the chosen Connection and (in direct contradiction to the previous paragraph) do not actually indicate whether a Selection Property has also been \"applied\" or not. (A Boolean only carries exactly one bit of information after all.) In other words, the Protocol Properties can be queried. The usefulness of this is probably limited, however. If we already know that our Connection is using, e.g., TCP, we don't have to issue any \"queries\" to know that the Connection offers reliable transport, is subject to congestion control and preserves packet ordering. (This is indeed how I have chosen to implement this. The chosen Protocol can be queried for the Properties it inherently has. In the Preconnection phase, the Selection Preferences are matched against these and any Require/Prohibit conflict results in that Protocol being dropped from consideration.)
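The matching approach described in that parenthetical could be sketched roughly as follows. This is a hypothetical illustration, not code from any TAPS implementation: the protocol/property table and all names are invented, and only Require/Prohibit elimination is shown (ranking by Prefer/Avoid is omitted):

```python
from enum import Enum

# TAPS-style Preference Levels for Selection Properties.
class Preference(Enum):
    REQUIRE = "Require"
    PREFER = "Prefer"
    NO_PREFERENCE = "No Preference"
    AVOID = "Avoid"
    PROHIBIT = "Prohibit"

# Static (Boolean) properties a protocol inherently has -- no "query"
# needed once the protocol is known.  Table contents are illustrative.
PROTOCOL_PROPERTIES = {
    "tcp": {"reliability": True, "congestionControl": True,
            "preserveOrder": True},
    "udp": {"reliability": False, "congestionControl": False,
            "preserveOrder": False},
}

def candidates(selection_preferences):
    """Keep only protocols whose static properties do not conflict with
    a Require or Prohibit Selection Preference."""
    remaining = []
    for proto, props in PROTOCOL_PROPERTIES.items():
        conflict = False
        for prop, pref in selection_preferences.items():
            offered = props.get(prop, False)
            if pref is Preference.REQUIRE and not offered:
                conflict = True
            elif pref is Preference.PROHIBIT and offered:
                conflict = True
        if not conflict:
            remaining.append(proto)
    return remaining
```

In this toy table, requiring reliability leaves only TCP as a candidate, while prohibiting congestion control leaves only UDP.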
Hoping not to be unreasonable (or somehow block-headed), I have the following suggestions:\nEither introduce a new term for Properties that have been \"applied\", or use the term \"Preferences\" (e.g., \"Selection Preferences\", \"Connection Preferences\") to refer to a collection of adjustable \"Settings\" and the term \"Properties\" to refer to measured or queried values that are actual Properties of underlying Connections and Protocols.\nGo into a bit more detail about what the draft really means when talking about \"querying\" Properties (versus setting them), maybe even in terms of exemplary interactions with Property data structures. (After all, the draft elsewhere goes into considerable detail with its exemplary Event-based system.)\nThink about whether it would in general clear things up if the explicit notion of Protocol Properties as a (static) counterpoint to (adjustable) Selection Properties were introduced.\n(Replying as an individual and co-author) Thanks for raising this issue! Yes, this matches my understanding as well. If the later text confuses this issue for the reader, then perhaps we'll need to tweak it. I'd say that Selection Properties can no longer be changed, Connection Properties still can be changed (though maybe changing a Connection Property later would not have any effect, e.g., if Capacity Profile changes but was only taken into account when selecting a path). Perhaps the text should reflect that. For me, these are synonyms, whereby a \"query\" is what the application has to do in order to read the properties. Perhaps the \"queried\" should become \"read\" to avoid confusion. That is indeed how I think of it on a conceptual level, but I understand how this might confuse implementers, especially if they can't just change a data type of a property. In the , we define Protocol Stack as a Transport Services implementation concept, and we talk about certain protocol stacks providing certain features (see .
Perhaps we could rephrase this part of the API draft to use that terminology, e.g., the application can query the actually used protocol stack and then know which Transport Features it supports. I would strongly prefer using our existing terminology to introducing yet another term. Good question. I vaguely remember discussing this, and I think folks felt like querying the Selection Properties that the application had set was not very useful at that stage. The other text you quoted indeed sounds a bit complicated and maybe it can be reworded to be clearer. Curious to hear what my co-authors and/or other TAPS participants think!\nI completely second what NAME wrote. With regards to querying the Transport Features selected vs. getting the preferences provided earlier, I remember we had lengthy discussions and somewhat converged to the current state. Primary arguments were: (1) multiple read/query variants are confusing; (2) original Preferences are not really useful - if you really want to read them later, keep the Preferences Object; (3) constrained implementations don't want to be forced to store them. NAME based on these arguments, which option would you prefer: a) different get/query operations of selection properties after the connection has been set up, or b) the current way of \"mutating\" preferences with a clearer description in the text?
Please take a look!\nGreat, this looks good to me now. I think now it is simple enough that we don't need an example anymore. Looks good, one clarification suggested. Works for me (with NAME's changes). I would propose a slight rewording, and adding a hint that this exposure can also be implemented differently."} {"_id":"q-en-api-drafts-da7626fc03fbcaf1ca6439281a420b40331778b283d2750369148088274466c5","text":"Appendix B.1 lists two properties as \"under discussion\": Bounds on Send or Receive Rate (PR 1036 fixes the name): this is no longer under discussion, these properties exist. Cost Preferences: this is no longer under discussion, I suppose?? We don't have this in the interface draft. So I have the impression that this should no longer be an appendix, the text on Cost Preferences should be removed, and the text on Bounds on Send and Receive Rate should be moved elsewhere, and not be mentioned as \"under discussion\".\nLGTM"} {"_id":"q-en-api-drafts-10bd83f54c46e4cb5e585dd57962e1152a99429a7f419a2a37f075865f38cd07","text":"Capitalization seems to be a problem for us. We need to make things uniform. At first, I reacted to seeing \"connection\" as a parameter in this PR, because we use the term \"Connection\" to refer to a TAPS-provided connection (as opposed to a connection of the underlying system), and so, de-capitalizing the term here may look better, but it is also somewhat imprecise. This said, we do the same with other capitalized terms too, here and in the API draft - e.g., \"messageContext\". Should we generally agree to begin terms with a lowercase letter when they are used as parameters in code examples? I think we have this in most places, though at least the API draft has several counterexamples... e.g., the server example code: Preconnection := NewPreconnection(LocalSpecifier, TransportProperties, SecurityParameters) (..) Listener -> ConnectionReceived\nFrom discussion on\nThis was a mistake.
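As a side note on the querying discussion earlier in this thread: the behavior being converged on — the application sets a Preference on a Preconnection, and after establishment a query returns the resolved outcome rather than the original preference — could be modeled roughly like the following sketch. This is hypothetical illustration code, not the drafts' abstract API; the names `SelectionProperty`, `resolve`, and `query` are invented here.

```python
from enum import Enum

class Preference(Enum):
    REQUIRE = "Require"
    PREFER = "Prefer"
    NO_PREFERENCE = "No Preference"
    AVOID = "Avoid"
    PROHIBIT = "Prohibit"

class SelectionProperty:
    """Toy model: before establishment, querying returns the preference
    the application set; once the protocol stack has been selected,
    querying returns the resolved outcome (a plain boolean saying
    whether the chosen stack actually provides the feature)."""
    def __init__(self, name, preference=Preference.NO_PREFERENCE):
        self.name = name
        self.preference = preference
        self.resolved = None  # stays None until establishment completes

    def resolve(self, supported_by_stack):
        self.resolved = supported_by_stack

    def query(self):
        return self.resolved if self.resolved is not None else self.preference

checksum = SelectionProperty("Control Checksum Coverage", Preference.PREFER)
assert checksum.query() == Preference.PREFER  # pre-establishment: preference
checksum.resolve(True)   # say a stack supporting the feature was selected
assert checksum.query() is True               # post-establishment: outcome
```

This also illustrates the "constrained implementations" argument above: nothing forces an implementation to keep `preference` around after `resolve` has been called.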
As I wanted to do this now, I noticed that all occurrences of capitalized terms as parameters are objects that are defined and used elsewhere in the code examples. For example: Preconnection := NewPreconnection(RemoteSpecifier, TransportProperties, SecurityParameters) refers to RemoteSpecifier etc., being objects that are initialized some lines above. Making them appear in lower case everywhere would be awkward; it would then be a much more drastic change to our pseudocode. As a result, I think it's fine to leave things as they are, and I suggest closing this issue with no action.\nFramers: What is the difference between the Final message property and the separate IsEndOfMessage event argument?\nAlso there, the term MessageContext could use a definition\nand searching \"isEndOfMessage\" tells me that there is no text describing this parameter. Regarding the question, the difference should be clear: seems self-explanatory, whereas indicates that the Message carrying it is the last one of the Connection. So, this is End of the Message vs. End of the Connection.\nSounds like this needs a reference to the API doc, and a change of IsEndOfMessage to endOfMessage to match the API.\nProbably fix with\nI think this looks ok (and thanks for doing this!), but see also my comment about making things uniform wrt. capitalization: if we merge this PR, we should also do one that does the same with the API draft. Thanks, LGTM! Although I noticed another nit that can perhaps be fixed in the same PR even if not directly related to the parameter names. Regarding the capitalization, I think we can merge this PR and then have a separate issue for the capitalization, as we will need to fix this either way."} {"_id":"q-en-api-drafts-46cdc90734579a8a2adf16b7fa71b6c5fa2f013ea245feb03e707006e8edff02","text":"Takes care of item 5 and .\nSee discussion on\nProvide a complete template that covers all functions/properties from the API. Indicate how the template should be used (e.g.
leave out the parts that are not relevant for a particular protocol, or include all fields and mark fields that are not relevant as such).\nI'm wondering if this is really something that's in scope and that we should cover. We do have the message properties section that explains how to implement various properties, but I think adding each property to every protocol mapping will be a huge amount of text that will be difficult to get exactly right.\nI agree; we have added additional properties since we opened this issue to check the mapping and make sure we have complete coverage. We should still be consistent in how we treat things though, and perhaps add a sentence that we do not describe all the properties, for instance. I have made a pass through to check the mapping from the API to the impl. draft for all the protocol mappings as a basis for making things consistent. Here is the summary. We cover the same actions and events for all the protocols, with the exceptions that UDP Multicast does not cover the ConnectionError event and SCTP covers the Priority connection property. And MPTCP and UDP-lite are special as they refer back to the base protocols. There is one terminology inconsistency where the impl. draft refers to InitiateError, and this has been generalized to EstablishmentError in the API. Unless I missed something, there are only two actions that can be made on a connection that are not covered, addRemote and removeRemote. This is probably fine as they do not mean much for most of the protocols (or we could add them to be complete). We should perhaps mention how they relate to MPTCP though if we leave them out. We also do not cover the Preconnection.Rendezvous action. Again this may be fine as it is a bit special (or we can mention it for consistency). For the events, we cover a subset with main focus on the connection-related events. The Closed Event and some events related to connection setup are missing.
As we cover ConnectionError, we should perhaps cover the Closed event as well? We do not cover any of the Send or Receive events, which seems fine. In relation to Connection and Message properties, we do not cover any of them except for the Connection priority covered for SCTP, and we talk about some properties in connection to UDP-lite and MPTCP. SCTP also has support for partial reliability and unordered messages, so I am not sure why the priority part was singled out and added. NAME any particular reasoning behind this part? Possibly we should remove the priority part for SCTP or discuss it in running text in a similar fashion as for UDP-lite and MPTCP. We may want to update the UDP-lite description to also say something about what properties it supports over UDP. The current description sounds like it is a subset. Once we agree on how to deal with the bullets above, only smaller modifications should be needed to the text.\n(1) ConnectionError should be added; SCTP priority should be removed. (2) Fix the inconsistency. (3) Add addRemote and removeRemote text related to MPTCP. (4) No action. (5) Mention the Closed Event. (6) See item 1, no further action. (7) Update the UDP-Lite description."} {"_id":"q-en-api-drafts-e8616ae4e1835748f6ad2983a5797000121a046ef6aaafae2635e70321fadf6f","text":"We don't have \"Protocol Properties\" - two occurrences in -interface should really refer to \"protocol-specific Properties\", and two occurrences in -impl should really refer to properties.\nURL Are Protocol Specific Properties and Protocol Properties the same or different? Also note that we are defining the Protocol Properties for the first time in the interface document (described along with Connection Properties), and they are not introduced in the architecture document. I think we should be clear about what we are defining where and describing where. I would suggest the use of cross-references rather than describing everything again and again, which might create confusion and inconsistency.\nOh, this seems like a mistake (great catch, thanks!)
- \"Protocol Properties\" don't exist, I think we only have \"protocol-specific Properties\".\nI caught this when reading the implementation doc. So it should be fixed there as well."} {"_id":"q-en-api-drafts-222d1c53071e27fb4d52bb4eb4c9fb9ce874c578cc5875a4d75c0ce18ccc7b98","text":"URL Assuming there would be more protocols in the future and the minset functionalities will expand via extension or amendments. It would important to indicate if this architecture should is also extensible or it only supports on the minimum set referred. Definitely we haven't seen the future yet, but it would be important to express if we have considered such expansion of minimum functionalities.\nTo me, the current spec is based on minset that describes the common functions. So, the addition I would expect here is to say that there can be extensions and protocol-specific features.\nI expect that if a common feature is added to the minset, then the architecture and API surface would also be extended to match.\nLGTM modulo a nit"} {"_id":"q-en-api-drafts-933c327b5fed1349e7192b6ed58018f62f9a9307318049554db0f24e012c7ad4","text":"URL Unless there is a description the figure, with directed and non-directed lines it is very hard to interpret the figure. I would recommend to at least add some reading guidance for the lines.\nLGTM, thanks!Looks fine."} {"_id":"q-en-api-drafts-28dd9bd48f4d0a96f92a6def589131ea5a00af8b586523054b7d13b23c2ce816","text":"(sorry that it took me so long) This uses some text that is also in the interface draft.\nThis I would like us to discuss. Well I know we have discussed this before. As I am reading the Interface document after a long while and having read it along with the architecture document, I kind of strongly feel that the architecture document lack of description on framer and the states. These are mentioned and pictured in the architecture document but haven't been clear there. 
Hence, I kind of feel that we need to move these two sections to the architecture document. Please discuss, so that I can understand the reasoning for not doing so.\nRegarding section 9.1.2, the framer description: I think it's good to be clear and complete in the API document, but I agree that framers should be appropriately described in the architecture document. I have the impression that the text in the -arch document on this matter may indeed be too short, while the text in the -interface document may be too long (although a little bit of repetition surely won't hurt here; it's good for this document to be understandable and clear on its own). Maybe some of it could be moved to -arch, or repeated there? Regarding section 11, Connection states: to me, this is fine as it is. Section 4.1 / figure 4 in the architecture document make state transitions pretty clear - and section 11 / figure 2 are just a more fine-grained look at these states as a way to explain exactly which Events will occur when. This seems needed in the API document, where all other Events and Actions are defined, and I would think that this is at too detailed / fine-grained a level for the architecture document.\nNAME after giving it some more thought, I think your suggested approach should be fine, meaning we put more description of framers in the architecture document and leave the connection states to the Interface document.\nNAME did you want to take a stab at this?\nLooks great!"} {"_id":"q-en-api-drafts-69511599c0601aaa3030ceaf26a47984653fc749ae42f79c3bdc324a3fc0bc17","text":"Feel free to approve or propose changes to improve this sentence in Arch.\nBT: \"I’m not convinced of the value of the glossary of terms, as it forward-refs basically the entire document, and some of the definitions are so generic as to be potentially confusing (e.g. Event), so I think putting these up front reduces the document’s approachability and readability.
I understand that some people like these (and other standards orgs make them mandatory, hi ETSI), so I won’t fight it.\" \"Adding a sentence making it clear it’s safe to skip the glossary if you’re going to read the rest of the document would address my approachability concern completely.\""} {"_id":"q-en-api-drafts-f919ccf5b9589cef92b499c49d45bec9fe3cd9a5edd83f91b1d158a425cdd233","text":"This closed As we have no methods on a connection object that query properties, cansend/canreceive also became connection properties. As with all the other connection properties, these are indeed a little underspecified. I want to keep it that way till we fix it for all connection properties with and\nAehm… that comes as a surprise – NAME why didn't you bring this up in ? Do you think this pull-request is the right place to discuss this issue? I would prefer we merge it now and make a new issue for the question of whether the uni-directional connection should be more reflected in the connection. I am not particularly fond of keeping it a selection property, but would like to have the concept of unidirectional connections rather sooner than later. Our consensus in was that unidirectional connections should use the same abstraction for QUIC unidirectional streams, multicast sources/sinks, and half-closed connections – even though they are constructed in completely different ways. The Selection property is just for the QUIC style ones…\n\nAehm… that comes as a surprise – NAME why didn't you bring this up in ? Do you think this pull-request is the right place to discuss this issue? Happy to move the discussion back over to the issue if you like. Will continue the thread here for now though, since you've got points to address below. fine, but keep open then, because adding the selection property is possibly-necessary but definitely-not-sufficient to fully address unidirectional connection semantics. good, I agree with this.
then I'm confused why this PR purports to since it only addresses part of the issue...?\nNAME agreed. The question (and discussion) was about including some support for unidirectional connections (with focus on QUIC and half-closed), but omitted details on multicast, as this has not landed yet. So indeed, this pull-request does everything requested by , but does not fully capture the topic. I will move the open issues (multicast support, whether the Selection Property is enough, finding some place for more explanation of Uni/Bidi Streams) to once this PR is merged.\nIn Section 9 – \"Setting and Querying of Connection Properties\" – the properties of an established connection and path are heavily underspecified. Should be fixed along with .\nI'm working on this issue within our pull request for , using the terminology as simplified there. For Section 9, I'm proposing the following: An application can query Connection Properties on a connection. Connection Properties include the status of the connection and whether it is possible to send and to receive data on it. (One of them might not be possible, e.g., in a unidirectional connection.) Additionally, Connection Properties include the Connection's Transport Properties. Transport Properties include Selection Properties, so the application can know which Protocol Selection Properties had been specified, and which ones are actually supported by the protocol that has been chosen. For Establishing connections (i.e., we are not done racing yet, the final protocols have not been selected yet), the application gets the Selection Properties that had been specified on the Preconnection earlier. For Established, Closing, or Closed connections (i.e., the protocols have been selected), the application learns whether the Selection Properties are actually provided by the selected protocols.
For example, if the application specified earlier \"Control Checksum Coverage: Prefer\", now it will get \"Control checksum coverage: True\", indicating that on this connection it can indeed control the checksum coverage. Transport Properties also include Protocol Properties (Generic and Specific). These Protocol Properties can still be modified by the application. Basically, Protocol Properties are the only thing that the application can still set; everything else it can only query at this point. Finally, the application can query properties of the path. Not sure how well we have to specify them; probably this part will be implementation-specific, just like, e.g., the Interface Instance or Type. Does everyone agree, or is there anything to discuss here? For me this is not a big change, it's just putting what was already there in terms of . The only thing I'm wondering is whether the Protocol Selection Properties of the finally chosen protocols are still \"Selection Properties\", as the selection is already done. That's why they were called \"Transport Features\" before, but now I want fewer terms and they're almost the same thing...\nOK - so I commented on these things before - but in these things I think I agree. I'd personally prefer to keep the features as attributes implemented in the transport (etc.) and the properties as the things requested. Indeed this raises the question of what to call the resulting transport when you selected properties and now get a transport. I think still calling them (for consistency) properties seems really good to me. :-)\nAs QUIC has unidirectional streams, we might want to be able to accommodate them in our transport API. Do we want to handle them fundamentally differently from a half-closed connection? This should match the decisions made in .\nSide note: we don't support half-closed connections to begin with. As for unidirectional streams, is this the final decision in QUIC's design, or might it change?
I'd suggest postponing this until we can be sure.\nNAME regarding half-closed connections: we don't have any top-level state in the connection for being half-closed. However, being able to express that a message is final in a given direction when sending allows the same functionality. As I've discussed before, we absolutely need the ability to express sending a TCP FIN, but I whole-heartedly agree it doesn't need to be a top-level concept. I think this is somewhat different from half-closing, since a unidirectional stream never allows sending on one end. You could imagine that it is a connection object that returns an error if you ever try to send on it, etc. If you wanted it to look like half-closing, you could open a connection (stream), and then mark it complete in a direction immediately? But that's a bit odd.\nAll ok, I agree with your suggestion to fin-mark a message\nI agree with NAME regarding half-closed. My suggestion is to explicitly state that connections can be uni- or bi-directional and to add a method to query that (cansend / canrecv). This aligns nicely with final messages, after which can_send becomes false. I can do a PR for that.\nThanks! I like this\nStill missing: unidirectional streams for multicast. – please continue discussion of multicast aspects in The connection properties (can send/receive data) are still a little under-specified - please discuss in detail in – we should fix this together with .\nWe need unidirectional connections. You know, for multicast. And ICN. And unidirectional streaming protocols.\nThis is a dup of , which has more discussion\nI am re-opening this ticket as only dealt with unidirectional streams in a unicast context (added with PR ): Current status: The current model for unidirectional streams supports half-closed connections and QUIC-style unidirectional streams so far.
We don't know yet whether it is sufficient to have a canread/canwrite property on the connection level to model unidirectional connections. We don't have pre-connection methods to create multicast sessions yet.\nthere's a selection property for this, tagged discuss, which we haven't discussed yet. I'm moving this down to the appendix as part of my restructuring in my work on\nIf help is still wanted here, I'm maybe up for an attempt or 2 at adding multicast support. Has there been some prior discussion I can review about rough ideas on pre-connection methods for multicast sessions, or should I just try making something up?\n\nok, with the caveat that we need to keep multicast as an open issue. LGTM. About the multicast concern: I agree that it would be good if this could function for multicast reception too, and I couldn't tell if this is suitable for multicast. I suspect that multicast support is much harder anyway - see RFC 3678... so for now, this is a good step forward IMO, and let's discuss how this relates to multicast when we see a concrete text proposal that addresses it."} {"_id":"q-en-api-drafts-32b20825e2d45947ceab035d5d27d0094e3b71e51d76055261ff7f7040e55578","text":"This is the second step towards applying our new Properties Classification. It moves up the Connection Properties and annotates most places with stuff to do.\nAs discussed in\nThanks for moving this forward! I only found a few nits. Also, I have a question about whether two of these Properties should maybe be changed to be Selection Properties, but that's more of a discussion and not something that needs to be done in this URL modulo NAME comments. I carefully checked it, but I find it very hard to be sure that this is 100% ok (as is usually the case IMO with diffs about text blocks being moved around).
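Tying together the can_send/can_recv idea and fin-marked final messages from the unidirectional-connection thread above, a toy model might behave like the sketch below. The class and parameter names here are invented for illustration and are not the actual TAPS interface.

```python
class Connection:
    """Toy model of a TAPS-style Connection: it advertises whether it can
    send and receive. Sending a Message marked final (the TCP-FIN
    analogue discussed above) flips can_send to False, which models a
    half-closed connection; a receive-only unidirectional stream is
    simply constructed with can_send=False from the start."""
    def __init__(self, can_send=True, can_receive=True):
        self.can_send = can_send
        self.can_receive = can_receive

    def send(self, data, final=False):
        if not self.can_send:
            raise RuntimeError("sending is not possible on this Connection")
        # ... hand `data` to the underlying protocol stack here ...
        if final:
            self.can_send = False  # half-close the sending direction

bidi = Connection()
bidi.send(b"last message", final=True)   # fin-mark the final message
assert (bidi.can_send, bidi.can_receive) == (False, True)

rx_only = Connection(can_send=False)     # e.g., a QUIC unidirectional stream
try:
    rx_only.send(b"nope")
except RuntimeError:
    pass                                 # sending is rejected, as expected
```

Note that this model deliberately treats "never could send" (unidirectional) and "can no longer send" (half-closed after a final message) as the same observable state, which is exactly the open question about whether canread/canwrite alone is sufficient.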
My suggestion is to land it and see."} {"_id":"q-en-api-drafts-d0b8263feefc71d2b1d1d14db6b274c9ce5948c181130903c92d196809b553b8","text":"Message Context Parameters/Properties are now Message Properties, and associated terminology cleanup. Addresses part of the resolution to .\nLooks good, thanks for the cleanup! LGTM"} {"_id":"q-en-api-drafts-ec7a9b561ba84224064766c6b0dbd3614242118d36cb0c23d81a9d4c213d8adf","text":"Thanks for these fixes! I did a bit of reworking of the abstract, but it keeps the main part of the changes you made.\nRegarding the sentence you removed, that was an adaptation of an existing sentence, which I believe aimed to mention the words implementation and API to implicitly refer to the other docs (which was probably too subtle for anybody to notice anyway). However, the other change I made to that sentence was to mention one of the benefits / deployment incentives for taps (dynamic protocol and path selection). I wasn't sure about the wording here, but I think it would be good to (re-)add a sentence to the abstract saying why taps is so great and that everybody should deploy it.\nYes, the original second sentence was aimed at mentioning that there were two other documents, API and Implementation, that would be based on this one. I liked how your revision tried to make things clearer than just saying \"flexible transport networking services\", but I felt that the specific example of dynamic path and protocol selection was a bit too specific for that one line. (I was chatting with NAME about this as I was editing, btw).
My thought was that the list of features that you added in right after that, which goes through the top-level bullet points of the advantages, does a more comprehensive job of summarizing the benefit."} {"_id":"q-en-api-drafts-26de4ee2d3a976ba775c719819e513b55f6c457a54c3212b732679376db1f117","text":"Several small updates, mostly to clean things up a bit, and align property names and such.\nNAME - I think you're wrong: my understanding (and also the way the text is written - otherwise we'd have to change it in the API draft) was that Connection Properties can (and should) also be set on Preconnections, to give as much input as possible for Connection establishment. In the API draft, the Selection Properties in Section 5.2 can only be used on Preconnections: you can't ask for congestion control after establishment. You can't say that you'd rather have multistreaming support after establishment. But that doesn't exclude Connection properties from being usable earlier. Required minimum checksum coverage could influence choosing UDP-Lite vs. UDP. Bounds on Send or Receive Rate could influence the interface choice. The capacity profile could influence various things (and no, there isn't a matching Selection Property now - there is a per-Message override). So... I think that either you're wrong, or I'm missing something and the API draft heavily lags behind a decision that was made. I hope not!\nNAME The API draft says: I interpret this as: only the Selection Properties influence selection, and the Connection properties are used to further configure the connections according to user preferences. Otherwise, what is the difference between them?\nIf you interpret it as you do, I think the text isn't clear enough (indeed it isn't super clear - in particular, \"eventually established\" is misleading IMO). Though, note that it says that Connection properties can also be used later.
Regarding the difference between them: Selection Properties only make sense for Preconnections.\nNAME - sorry, lost this thread. But I saw that you have started cleaning up some Connection properties that had mixed Connection and Selection properties up in the API draft. Does it mean that we are now in sync and agree that only Selection properties are used in the selection, and Connection properties are used for configuration (both during establishment and also after the connection setup)?\nNAME the other PR means nothing about what I said here: it only tried to address issue which is about a wording inconsistency in a specific property. You say that Connection properties can also be used during establishment - if you agree with allowing this, then why not allow configuring them as early as possible and allow them to influence selection? What's bad about having more information available that can guide the selection process?\nNAME the problem is that the Connection properties are not expressed in a way suitable for selection. If they should also be used for Selection, they would have to be reworked to be of type Preference. Otherwise I do not know what they mean for selection. What does it mean if I pick, say, a particular connection group transmission scheduler? Is it required or preferred? Most connection properties also do not look very useful to me for selection, so in case there are some we need, I think it would be simpler to just define a matching selection property.\nNAME we're only discussing whether it's allowed to use them as additional input or not. I agree, some (like the scheduler choice) don't seem useful, but some do... generally we may not have good guidance on how they should be used by an implementation, but giving the system freedom to use as many inputs as possible is probably a good thing. Regarding the type, require/prefer/ignore/avoid/prohibit obviously works fine for a \"wishlist\" of boolean character: having something, or not having it.
The selection properties are generally like that - except \"Interface Instance or Type\" and \"Provisioning Domain Instance or Type\" which are, well... wishlist-collections. Consider the capacity profile. Surely configuring this early can be useful input - e.g. if someone picks low latency / interactive, you may even want to put this traffic on a specific network interface! But if you'd want to make a copy of the capacity profile as a \"proper\" selection property, how would you do it? Apply require/prefer/ignore/avoid/prohibit for each of its values? Then I could say \"prefer\" low latency/non-interactive but \"avoid\" low latency/non-interactive, which would surely be odd. Apply the qualifier to the whole thing? Then it's about, e.g. \"avoiding\" to use a capacity profile at all. That's odd as well. I think it's in the nature of the current selection properties that they merely say \"I want to be able to do this, and this is how important it is to me\", which is different from configuring something. That's also why, as written, they simply wouldn't work for later connection configuration. Connection properties really are about configuring something, but I still say that being able to use them as early input is good. I agree that the implementation isn't super clear as they don't come with a require/prefer/... etc. qualifier, but then we just offer some freedom to implementers.\nNAME I just pushed a commit that updates the selection and connection properties text, removing \"connection\" almost everywhere, but adding a disclaimer in the beginning that says that connection properties should be configured as early as possible because they can influence decisions made by the implementation, and \"In the remainder of this document, we only refer to Selection Properties because they are the more typical case.\" Frankly I'm not sure this is better... 
but it does remove most of the \"selection and connection\" double mentions that you pointed out.\nFixed up branch building and resolved merge conflicts. I'll merge this in later today before the posting deadline.\nCool, thanks! Sorry if I messed something up\nAs discussed in\nThank you for updating this – looks good! Just needs s/send-params/msg-properties/ and merging"} {"_id":"q-en-api-drafts-0765f7f0c3e5c0a7ac1ae19d2ccf547bc6b6012454020fc159769bf842cac238","text":"Closed\nI just read Arch and have the following nits which I think we should address: /Listeners by default register with multiple paths/ ought this to be: /Listeners, by default, register with multiple paths/ /A Framer is a data translation layer that can be added to a Connection to define how application-level Messages are transmitted over a transport protocol. / I think this should be (because I don’t think they are protocol specific) /A Framer is a data translation layer that can be added to a Connection to define how application-level Messages are transmitted over the transport service. / /to retransmit in the event of packet loss,/ better to avoid event here :-) /to retransmit when there is packet loss,/\nI am just gonna add a few more nits here: The Listener object in figure 4 is a bit awkward; it appears as if the Listen() call appears out of thin air, while per the API it gets called on the preconnection object like Initiate() and Rendezvous(). In section 4.1.1, there are 5 references to section 4.1.2; is it really necessary to have that many? Point 2 in section 4.2.3 only talks about required transport services; the same should probably be said about prohibited services, i.e.
\"both protocol stacks MUST NOT offer transport services prohibited by the application\"\nLet me add to this: Section 1: print send() and recv() in fixed-with font [HTML/XML issue only] Section 2 - Figure 1: socket() [with the brackets] looks odd in the figure title\nNAME I fear framers may be specific to the underlaying protocol, e.g., for SIP slicing messages out of TCP stream whereby doing nothing for SCTP and UDP\nIn relation to frames I think I only asked to add /service/ instead of protocol... I don't understand how Phil's comment reacts to that\nNAME I think protocol is correct here and would not change the sentence.\nOK - so I think you are saying the framer depends only on the specific next protocol, rather than upon the transport service that was offered? - It is still not clear to me that this is correct.\nNAME that is a fairly good question… The behaviour of the framer defiantly depends on the transport service provided (which may be a subset or superset of the transport service requested). A framer needs to support all possible outcomes of the transport services requested It may optimize based on the protocols it actually finds, e.g., turn into a no-op with the transport preserves message boundaries. Whether the next protocol is a sufficient representation of this is unclear to me. Maybe not, if one thinks of supporting arbitrarily large messages and UDP over IPv6 with and without fragmentation…\nLGTM, thanks!"} {"_id":"q-en-api-drafts-9a75929c97768b468224c5679037be7ee5dacc840e2019142921f8df24a0c53a","text":"Just some points about the new text on framers in the API draft: In 10.2 it is specified in which order multiple framers run on incoming and outgoing messages, it is however never specified what the order looks like on Start and Stop events. In 10.6, \"MessageFramer.DeliverAndAdvanceReceiveCursor\" appears to be missing the \"Data\" argument that is present in \"MessageFramer.Deliver\". 
Also in 10.6, in the second paragraph it says \"…\" when it is probably meant to say \"…\". While the framer implementation is able to notify the Connection that it has finished handling a Start event (by calling MessageFramer.MakeConnectionReady), there is no such way for Stop events.\nAs notes for the responses to these points: If there are multiple framers, the \"lowest\" one (the one closest to the transport) would receive start first, and the next would get the start event when the lower one is ready. Similar for stop. Good calls on the missing data argument and the typo re application vs framer. I was thinking of using the failConnection call to complete a stop, potentially with no error?"} {"_id":"q-en-api-drafts-237a6a86d8a1759b703a733f620c5a530cb291d279329d7b7bb4e13a6ef91f0f","text":"Addresses\nThe architecture draft uses UDP checksums as an example of a protocol-specific property in Section 3.2 and 4.1.2. I find this a bit confusing because in the Interface draft, checksum coverage is a Generic Connection Property (Section 11.1.2). Should we maybe use a different example of a protocol-specific property, such as the TCP User Timeout we have in the Interface draft?\nSure, we can switch the example around\nThanks, LGTM! Thanks – works"} {"_id":"q-en-api-drafts-4acc396139ca62ede4174a04e01d929a99b0073c8495327e071c2846c7b180fc","text":"Took a stab at normative language for the rules to process Selection Properties, as discussed in . As far as I remember, we always said that outcomes have to be deterministic/consistent, so I made everything that matters a MUST. I guess our example illustrates that even given the same Selection Properties, the outcome might change depending on the situation, e.g., your path might still be Wi-Fi but it might or might not support your preferred protocol. So, does this really have to be all MUST, or is there a SHOULD in there? Discuss.
:) Also, if anyone can name a good reason why the outcome has to be deterministic, please suggest text - maybe something like \"avoid surprises for the application\", but this didn't really sound right, so I left it out for now.\nIn section 5.2. in the API doc: \"Internally, the transport system will first exclude all protocols and paths that match a Prohibit, then exclude all protocols and paths that do not match a Require, then sort candidates according to Preferred properties, and then use Avoided properties as a tiebreaker. Selection Properties that select paths take preference over those that select protocols. For example, if an application indicates a preference for a specific path by specifying an interface, but also a preference for a protocol not available on this path, the transport system will try the path first, ignoring the preference.\" This sounds rather like an example algorithm that should maybe go into the implementation doc instead? Or is there any part of this that we need to specify normatively in order to ensure that all systems implement the same?\nThis goes back to issue on \"conflicting policies\". At the time, we decided that there must be a deterministic outcome, therefore, text was added to both the API and the Implementation document, see . If we keep it that way, perhaps the text here should point out that the goal is to resolve conflicting policies, so we achieve a deterministic outcome.\nAh okay, thanks for the background info; missed that. I think we need to use normative language here then. Maybe also a separate subsection makes this more prominent.\nThis reads to me like the rules for processing policies rather than an example algorithm. I think it is part of the API... Normative language would be good and a stronger opening that explains why the rules exist.\nI don't think a separate subsection would make this a better read - this fits fine as it stands, as an intro to the transport property specification IMO.
Regarding normative language, that's a separate discussion. So, can we close this issue?\nAgree that we don't need a separate subsection. I can try to tweak the beginning of this part a bit to explain why the rules exist as NAME suggested. Regarding normative language - we definitely have it in the API draft, right? So why not have the discussion on it here as well? If we're serious about deterministic outcomes, I guess the whole thing needs to be a MUST, right?\nSorry, true, that normative language debate was about the arch draft, and we definitely have it in the API draft. So yes, let's discuss it here. I also think a MUST would make sense.\nSo the order of filtering out require/prohibit is not important (outcome is the same) - making the order a MUST does not make sense. Also, first sorting by preference and then filtering has the same outcome as doing it the other way around. The only thing we need to say is that filtering MUST happen and sorting SHOULD happen.\nOK - I have to admit I didn't think this through, I just agree that we want a deterministic outcome from this.\nWhy did we require a deterministic behavior in the first place? Does this prevent TAPS implementations from running experiments like Apple did with TCP SACK and ECN? Does this prevent probabilistic path probing?\nThis was discussed in the TAPS BoF? ... I thought it was about developers being assured that X will happen if they choose Y and not finding the ground shifting as they use the interface.\nInterim: Prefers and avoids allow the implementation to decide the ordering and weighting; require and prohibit are the only hard requirements. Applications should be able to cope with any possibility that isn't prohibited. Likely should add implementation text to say how preferences weight racing.\nDuring the interim discussion, we found that was too strict and limiting, so we opted for another approach that moves the selection algorithm to the Implementation draft.
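As an aside, the candidate-selection rules quoted earlier (exclude anything matching a Prohibit, exclude anything missing a Require, sort by Preferred properties, tiebreak on Avoided properties) fit in a few lines. This is my own illustrative Python sketch, not text or code from the drafts; the names `Preference` and `rank_candidates` are invented for the example:

```python
from enum import Enum

class Preference(Enum):
    REQUIRE = "require"
    PREFER = "prefer"
    NO_PREFERENCE = "no-preference"
    AVOID = "avoid"
    PROHIBIT = "prohibit"

def rank_candidates(candidates, selection_props):
    """Order (name, supported-properties) candidates per the quoted rules."""
    require = {p for p, v in selection_props.items() if v is Preference.REQUIRE}
    prohibit = {p for p, v in selection_props.items() if v is Preference.PROHIBIT}
    prefer = {p for p, v in selection_props.items() if v is Preference.PREFER}
    avoid = {p for p, v in selection_props.items() if v is Preference.AVOID}

    # Hard constraints: drop anything matching a Prohibit or missing a Require.
    viable = [(name, props) for name, props in candidates
              if require <= props and not (prohibit & props)]
    # Soft constraints: most Preferred properties first,
    # fewest Avoided properties as the tiebreaker.
    viable.sort(key=lambda c: (-len(prefer & c[1]), len(avoid & c[1])))
    return [name for name, _ in viable]
```

Note that, as observed in the thread, the relative order of the two filtering steps does not change the outcome; only the filtering itself is outcome-relevant, while the sort determines racing order among the survivors.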
I'm preparing another PR implementing these changes.\nThe text that ended up in the implementation draft is not fully consistent with the racing text, opening a new issue for this and closing this issue.\nLGTM, merging this. Thanks, Theresa! This reads good to me. Regarding the request for text, I don't have any ideas... but I think this could be fine as it stands? Needs more eyes though."} {"_id":"q-en-api-drafts-ec1775cfb1baf29d943c556da9293d5492e9d82de7835d85bc89b54edeab871c","text":"This ended up in more text than I thought. I'm not certain if we need all of it. Further, I'm unsure about the concrete wording in many cases, so any comments are more than welcome!\nThis is supposed to address issue\nI did mention name resolution; however, leakage through DNS might rather be a DNS problem and not necessarily an issue that needs to be considered in TAPS. Sharing cache state is a good point. I vaguely remember that we already address that somewhere else (could also be in the implementation draft as this is rather an implementation issue). But maybe we can add a pointer...?\nLeakage through DNS may need to be addressed by TAPS, as with the current sockets API, the application has complete control over when DNS resolution is done (and, at least as long as things like DoH/DoT are predominantly implemented by nonsystem resolver libraries, how). TAPS as envisioned gives the system more control over resolution.\n(IIRC the architecture does explicitly address and mention the shared cache issue; adding a line to refer to that in privacy considerations might be nice but isn't IMO necessary.)\nGorry, I addressed all your nits/comments.\nI think it is a good start, but I miss two aspects: information leakage through DNS, which can also make addresses on different paths linkable, and information leakage within an application through shared caches, e.g., within a browser.
I agree this is a good starting point; note that I took the liberty to directly fix some nits (such as capitalizing TAPS) and typos; these never change the intended meaning."} {"_id":"q-en-api-drafts-05d343e85ac17c8b30dd03a0c45c0c4239b9fdedd6bd7aa9c641cf991951eea0","text":"Would be nice if someone has time to do some ascii art :-)\nWouldn't that be fig. 4 in the architecture draft? Except that, looking at this now, I see inconsistencies! E.g., I don't see \"RendezvousDone\" in the figure. Also, instead of ConnectionReceived, it says \"Connection Received\" - rather, this should probably be marked as an event by writing it as \"ConnectionReceived<>\". The figure is also missing InitiateWithSend() ... probably it would be good to make this figure both complete and consistent with section 13, \"Connection State and Ordering of Operations and Events\", of the API draft? I don't think very much is missing there.\nI think it would be nice to have a real state diagram here, probably with states like \"closed\", \"open\" and \"listen\" and the events that create a transition between these states.\nI'm not sure that Figure 4 in the Arch diagram serves the purpose that Mirja's trying to address with the state diagram in API. I also don't think we need Figure 4 to align with the specific naming and CamelCasingConventions in API.\nThanks! I guess you could add a loop for sent<> and received<> in established state but maybe that is also too much. Works for me"} {"_id":"q-en-api-drafts-9fd5f8d814bc86b7acb046aa22650a0fa9ef00bb860d881bca191d6d06379dd7","text":"This proposal consolidates normative language into the section on design principles (now requirements), using the suggestions in .\nPlease review the latest updates!\nEarly ART-ART review of draft-ietf-taps-architecture My first observation and request is that the group rethink the separation of the architecture and interface documents.
It is a warning sign that the interface document is a normative reference for the architecture document. As the documents are currently constructed, a new reader cannot grasp the architecture without inferring some of it through reading the interface. Consider reformulating the architecture document as an overview - introduce principles, concepts, and terminology, but don't attempt to be normative. (There are signs that this may already be, or have been, a path the document was being pushed towards). If nothing else, rename it, and acknowledge that the actual architecture is specified using the interface definition. Note that the first sentence of the Overview in the interface document also highlights the current structural problem.\nI don't entirely agree with this ART-ART comment: I think I might well agree with moving more of the description to this Arch document, but as discussed before, I also think it useful here to set the actual requirements for the architecture, and to define the architecture. It may need to be much clearer that this architecture consists of more than the API.\nFor completeness, this is the first statement of the interface document review: This document reads fairly well. I have some structural concerns (see the early review of the architecture document). In particular I think this document contains the actual definition of the architecture and some of that is inferential.\nDiscussion at September 2021 interim: pass through this document with a view toward making it a high level introduction, not an architecture definition: \"An Introduction to the Transport Services Architecture\". Remove normative language from this document (moving it to API, where necessary). Move details not useful in a high level intro to API.\nI disagree with this conclusion at first pass—we did a specific analysis and pass on this over a year ago, with PR . The normative language and the standards-track aspect was very intentional and something the WG had consensus on.
The point is that the API is a specific incarnation of the TAPS architecture, but there are overarching architectural properties that go beyond a specific abstract API.\nI completely agree with Tommy here, and recommend we fix this in . We should also of course check for any stray text or missing concepts. I think it would also be useful to somehow better explain that in figure 3 the API function is ONE part of the system.\nSee linked PR first please to see if this resolves this?\nA PR was merged that addressed important text issues with separation of requirements and overview, and should more clearly set out that the requirements are for the system. The question of whether the overall TAPS system requirements belong in this ID, the API, or a separate RFC needs some discussion based on the review, before this can be closed.\nMany good text improvements in the PR NAME but I noticed two smaller updates in the PR that I think were incorrect. Fixed in\nThe review comment showed there were many places where the wording of the architecture can be improved. There are some high level principles that I think should remain in the architecture document. The decision in today's interim is to review the architecture to get this more into a shape that is valuable as a reference, and then check the updated draft.\nI now suggest the issues are addressed and we close this issue without moving text between documents.\nI agree with that; closing.\nOverall review: I think we need to keep the normative language for the key features that differentiate the API. I'd really like to use REQUIRED and RECOMMENDED, rather than MUST and SHOULD - because these are not protocol operation requirements, but are implementation requirements to conform. This includes requirements based on Minset - where the WG derived this API: this is after all why the WG worked on Minset as the basis of the API.
I suggest we group all the requirements on architecture as short one or two sentence blocks (preferably as a bullet or definition list) collected into a single subsection, so they are easy to find and read, and we say that these are the key architectural requirements. We insert this new requirements text either in their own section, or in section 1.3 (although I am open to other places where we can call out the architectural requirements). The following are key requirements that define the TAPS architecture: It is RECOMMENDED that functionality that is common across multiple transport protocols is made accessible through a unified set of TAPS interface calls. As a baseline, a Transport Services system interface is REQUIRED to allow access to the minimal set of features offered by transport protocols {{?I-D.ietf-taps-minset}}. A Transport Services system automates the selection of protocols, features, and network paths, based on a set of Properties supplied through the TAPS interface. It is RECOMMENDED that a Transport Services system offers Properties that are common to multiple transport protocols. Specifying common Properties enables a system to appropriately select between protocols that offer equivalent features. This design permits evolution of the transport protocols and functions without affecting the programs using the system. Specifying common Properties enables a Transport Services system to select an appropriate network interface. It is RECOMMENDED that a system offers Properties that are common to a variety of network layer interfaces and paths. This design permits racing of different network paths without affecting the programs using the system. Applications using the Transport Services system interface are REQUIRED to be robust to the automated selection provided by the system, including the possibility to express requirements and preferences that constrain the choices that can be made by the system.
If two different Protocol Stacks can be safely swapped, or raced in parallel (see Section 4.2.2), by the Transport Services system then they are considered to be \"equivalent\". Equivalent Protocol Stacks need to meet the requirements for racing specified in section XX. Applications using a Transport Services system MAY explicitly require or prevent the use of specific transport features (and transport protocols) and XXXinterfaces using the \"REQUIRED\" and \"PROHIBIT\" preferences. This allows an application to constrain the set of protocols and features that will be selected for the transport service. It is RECOMMENDED that the default usage by applications does not necessarily restrict this selection, so that the system can make informed choices (e.g., based on the availability of interfaces or set of transport protocols available for the specified remote endpoint). When a Transport Services system races between two different Protocol Stacks, both SHOULD use the same security protocols and options. However, a Transport Services system MAY race different security protocols, e.g., if the application explicitly specifies that it considers them equivalent. Transport Services systems MUST NOT automatically fall back from secure protocols to insecure protocols, or to weaker versions of secure protocols. A Transport Services system MAY allow applications to specify that fallback to a specific other version of a protocol is allowed. It is RECOMMENDED that in normal use, a Transport Services system is designed so that all selection decisions result in consistent choices for the same network conditions when they have the same set of preferences. This is intended to provide predictable outcomes to the application using the transport service. 
I would then suggest using lower case within other parts of the document - except where pointed to from here (we could refer to the requirements subsection, if we thought necessary).\nFor discussion at the interim, but IMO we could start the interim with a PR based on these suggestions and work from there.\nI suspect that we need more protocol-specific properties to be able to talk to various kinds of native protocol peers (other than TCP and UDP: these should already work fine with what we have, I believe). A concrete example: we're hiding stream numbers. In NEAT, we needed to make them configurable though, because we wanted a NEAT client to be able to talk to a native SCTP server, where an application may expect certain types of data to arrive via a certain stream number. I suspect that we may need something similar for a QUIC peer. Should we address this? (I'd say yes) How... is it a protocol-specific (per-Message) property? (I'd say yes... and perhaps a per-Connection property too, with text explaining that the per-Message one automatically becomes the Connection default unless otherwise specified) What other such things are we missing... what other expectations may e.g. a native QUIC or SCTP application have that we just cannot fulfil with our current system?"} {"_id":"q-en-api-drafts-7f04ed21c99e66692b18386351b36800bb3241cee4cb5d1176292a48f7a1ee9b","text":"In the API doc, sec 5 says: \"Message Framers (see Section 10), if required, should be added to the Preconnection during pre-establishment.\" Should this be SHOULD or even MUST? If SHOULD is correct I think we need to say slightly more about when it makes sense to add a framer later.\nI think it should be \"can be added...\"; not normative, just descriptive of fact.\nCan we start with the \"If\" perhaps...
\"If messages frames are to be used, they MUST be added to the Preconnection during pre-establishment.\"?"} {"_id":"q-en-api-drafts-093e1972f003e145651fcd682bd41f51a410f41edffdab70f35398a0a688d91e","text":"I actually really like: \"Simultaneous\" and \"Staggered\" as descriptive terms that avoid the question of \"delaying\".\nOpening this to follow up on a comment by NAME on that is now closed. \"Simultaneous\" and \"Staggered\" would be better names than \"Immediate\" and \"Delayed\". Just from these terms, I would normally think that \"Immediate\" means to start racing right away, whereas \"Delayed\" means that nothing would happen for some time. So these two terms are slightly misleading in my opinion.)\nI agree this would be an improvement.\nIn Section 4.2.1., I didn't really understand what \"immediate\" means from the text. I think it just needs one sentence.\nlgtm (side note - nothing blocking, this is just an idea: to me, \"Simultaneous\" and \"Staggered\" would be better names than \"Immediate\" and \"Delayed\". Just from these terms, I would normally think that \"Immediate\" means to start racing right away, whereas \"Delayed\" means that nothing would happen for some time. So these two terms are slightly misleading in my opinion.)Looks good, thanks! I will close this and open a separate issue for the naming."} {"_id":"q-en-api-drafts-db8e680be01ef465b7303217c9a5697ec6e9b089fba59e7139e79197e95a852a","text":"NAME 7df2d9c should address this\nTo we need multiple local and remote candidates. Should this be represented as either: 1) taking two lists of objects; or 2) as two objects each with multiple addresses, ports, etc. I suggest option 1, since this makes it easier to resolve a to multiple preconnections, for NAT binding discovery. 
Relates to but with also different types of endpoint.\nI like option 1 because it contains complexity.\nSee this email thread: URL Copy + pasting a relevant part from an email from NAME To me at least, it wasn't clear from the text that rendezvous() would kick off ICE-style probing. It's kind of obvious (when else would it happen?), but I think it would be good for the draft to spell this out.\nmodulo the do-we-want-to-bind-tightly-to-ICE question... Dear all, I know way too little about rendezvous mechanisms, so I may get this completely wrong - please bear with me! It may be that I describe an issue that’s a non-issue. But, if I do understand this right, maybe that’s something which needs fixing? Ok, here it goes… Our current Rendezvous text in the API draft says the following: \"The Rendezvous() Action causes the Preconnection to listen on the Local Endpoint for an incoming Connection from the Remote Endpoint, while simultaneously trying to establish a Connection from the Local Endpoint to the Remote Endpoint. This corresponds to a TCP simultaneous open, for example.” Now, if an application chooses unreliable data transmission, it may get SCTP with partial reliability, or it may (in the future) get QUIC with datagrams, or whatever… and then, the protocol may be able to do what’s needed (something similar to a simultaneous open). However, in case of UDP: Listening is quite okay, “simultaneously trying to establish a Connection” really means nothing - no packet will be sent. I believe that applications doing this over UDP will implement their own “SYN”-type signal for this - but then it’s up to the application, which, over TAPS, would need to: create a passive connection, send a Message from that same connection, i.e. using the same local address and port (I believe?) - which is a matter of configuring the local endpoint, and doable if the underlying transport system can pull it off. I think there’s nothing in our API preventing an application from doing this. 
If all of the above is correct, the problem that I see is that an application may ask for unreliable transmission, and then have one case in which calling “Rendezvous” is the right thing to do, and another case in which it is not - so it would need to be able to distinguish between these two cases: is it a protocol with Rendezvous-capability, or do I have to do it myself. Maybe, in case of UDP, the call to Rendezvous should just fail, and our text should say that Rendezvous failing would inform the application that it would have to implement it by itself, by issuing Listen and then sending a Message to the peer. Thoughts? Am I completely off track, or is this something that needs a text fix? Cheers, Michael"} {"_id":"q-en-api-drafts-0c401721adf38c6cdf546f7119302b5d6e6716dbb8fd864eb73337b2c513b8df","text":"(well, the priority framework is still confusing, this PR makes it clear that that is intentional, and why)\nMartin (I think) said that he'd check this offline after the Sep 11 interim.\nAt 8.1.2 you say lower values have higher priorities for connPrio. But later, at msgPrio, higher values have higher priority, and in the examples where you speak of the interaction of connPrio and msgPrio, you imply that higher means higher for both properties. Please check for consistency and clarify.\nLower values having higher priorities for connPrio is a remnant that we must have missed when we flipped this, e.g. PR which I don't know... too many iterations have mangled the text.\nAha, now I see what happened: following some more PRs, also , in PR , NAME re-introduced some text saying that sends on Connections with lower-priority values will be prioritized over sends on Connections with a higher-priority value. This reverses the logic for some part of the text - I suspect that NAME (very understandably!) forgot what happened and this created this mix. We have been editing these documents for way too long!\nI'm not sure the priority framework really fits together. 
Sec 6.4 says we 'should' do something that sounds like WFQ, but 7.1.5 allows the application to decide what happens. Meanwhile message priority suggests a strict priority-based scheduler. Meanwhile, in sec 8.1.3.2 \"Note that this property is not a per-message override of the connection Priority - see Section 7.1.3. Both Priority properties may interact, but can be used independently and be realized by different mechanisms.\" At the risk of overindexing on a partly-baked draft in httpbis, how would I implement strict priority of HTTP/3 streams over a TAPS API that implements QUIC? Would I need to request strict priority in both connection priority and message priority? Or I guess connection priority would be sufficient?\nWe seem comfortable with the mechanisms in the API, but we should have more text connecting the various sections explaining priority.\nMy comment concerned \"7.1.3. Priority (Connection)\"; my thought was this just influences the sender side to control queuing within a connection group. If so, is it worth calling this property \"Group Priority\" and explaining that this is sender-side?\nI find the priority framework perhaps overspecified at the connection level and underspecified at the message level, with the interaction between the two left explicitly unstated. I would almost rather see the priority type and meaning left completely implementation-specific; otherwise, my suspicion is that someone will write an application using numeric priorities, declare victory when it works fine on one platform, and not understand that the behavior is not universal. In general, I think if TAPS is going to produce a consistent, easily-understandable, and portable API, it needs to stick to easily-understood abstractions.
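To make the connPrio/msgPrio split concrete, here is a toy model - entirely my own construction, not an API from the drafts - in which connPrio selects which Connection in a group sends next and msgPrio orders Messages within that Connection's queue, with higher numeric values meaning higher priority for both, matching the convention the thread converges on:

```python
import heapq

class Conn:
    """One member of a Connection Group; conn_prio is the group-level knob."""
    def __init__(self, conn_prio):
        self.conn_prio = conn_prio
        self._queue = []   # min-heap of (-msg_prio, seq, data)
        self._seq = 0      # preserves FIFO order among equal priorities

    def send(self, data, msg_prio=0):
        # Negate msg_prio because heapq is a min-heap.
        heapq.heappush(self._queue, (-msg_prio, self._seq, data))
        self._seq += 1

    def pop(self):
        return heapq.heappop(self._queue)[2] if self._queue else None

def next_message(group):
    """Strictly prefer the highest-connPrio Connection that has data queued."""
    for conn in sorted(group, key=lambda c: -c.conn_prio):
        msg = conn.pop()
        if msg is not None:
            return msg
    return None
```

Note the sketch is deliberately strict and would starve lower-priority Connections; a real implementation following the WFQ reading of 6.4 would presumably weight Connections rather than serve them strictly in priority order.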
Message-level priority is sort of like the TCP URG mechanism (though better specified and more useful, for sure), but I'm thinking urgent messages sent explicitly out-of-band (e.g., on a new Connection) with implementation-specific QoS will produce more consistent, predictable, and understandable outcomes in the pessimal case.\nOk for me (and thanks for doing this!) Some nits, but I believe this addresses my concerns in the issue."} {"_id":"q-en-api-drafts-9220e53f190070fdef15bbd5d7476fda07a0d6e2af62b4a54f3dfb579d5bdf92","text":"This provides more detail on how to implement Clone with TCP, and clarifies that SCTP's \"Initiate\" should not yield a stream of an already existing SCTP Association in case \"activeReadBeforeSend\" is Preferred or Required.\nCurrently the \"Clone\" part of, e.g., TCP (section 10.1) only says: \"Calling Clone on a TCP Connection creates a new Connection with equivalent parameters. The two Connections are otherwise independent.\" This text should reflect that equivalent behavior must be maintained upon later Connection configuration calls. And, there should be more text in general on cloning (see issue ).\nThis text must refer to a new \"do I want a Connection that supports initiate followed by listen?\" property. This property will be added to address - so I'll address this issue when the PR addressing has landed.\nAdded a question for NAME"} {"_id":"q-en-api-drafts-a668122bca696bcb2ef87e2a576c617a3be57493f3b2583681af31ce31ef0232","text":"Now I feel like I'm shaving the yak. This is more than what we talked about in the interim, but it felt like the following sentence required additional clarity. Let me know if I misunderstood the intended meaning, in which case I will rephrase."} {"_id":"q-en-api-drafts-706c2a71a77deda77271a736ceb223b118da801cc6917ad39674d0fc4a9c26b8","text":"This changes the use of the term \"entanglement\" as discussed in , and moves some text around in this section for better clarity and less repetition.
Also, addressing , it introduces a framer as an optional parameter to the Clone call.\nNAME I incorporated all of your suggestions, thanks!\nThis is not fully clear from section 5.4; however, if the connections in a group are each QUIC streams, I think it should be possible to use different framers on each stream. So it might be okay that a framer is copied when Clone is called, but changing a framer on one connection should not change the framer on the other connections in a group...?\nAlso a related editorial comment (that could be addressed in the same PR eventually): this section on groups only at the very end has a paragraph starting with \"Note that calling Clone() can result in on-the-wire signaling...\" which kind of explains that groups are supposed to be used for multistream protocols but can also be mapped to separate connections if the protocol does not support multiplexing of streams on one connection. I think this information should be added at the very beginning of this section.\nActually, now looking at the framer section, I guess you would rather like to already set the new framer before you clone a connection...?\nI agree with everything you say here; about your last suggestion, maybe a framer should be an optional argument of the Clone call?\nI don’t actually understand the difference between a connection in a Connection Group and an “entangled connection”? Both terms are used in API, but in different ways (possibly by different authors) - do we really really need the word “entangled”?? If we do, can we define and explain it?\nI think that having the word is good as it indicates a deeper relationship than mere \"grouping\". Just based on terms, I'd (correctly) imagine that I can define priorities between members of a group, and that they somehow belong together... as in: Close with a common call, etc. ... but I don't think that I would expect, just from this term, that changing a property on one of them affects all the others.
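The behavior under discussion - entangled Connection Properties shared across a group, a per-Connection framer, and a framer as an optional Clone argument - can be mocked up in a few lines. This is a minimal sketch under my own assumptions, not an implementation of the drafts:

```python
class Connection:
    """Connections produced by Clone share one entangled property table."""
    def __init__(self, framer=None, _group_props=None):
        # Entangled: a single dict object shared by every group member.
        self._group_props = _group_props if _group_props is not None else {}
        self.framer = framer  # per-Connection, e.g. one QUIC stream's framing

    def set_property(self, name, value):
        self._group_props[name] = value  # visible to the whole group

    def get_property(self, name):
        return self._group_props.get(name)

    def clone(self, framer=None):
        # Optional framer argument, as floated in the discussion above;
        # None means "inherit this Connection's framer".
        return Connection(framer=framer if framer is not None else self.framer,
                          _group_props=self._group_props)
```

The design choice to hedge on: only the shared property table is entangled; the framer (like Message Properties) is deliberately not, so each stream in a multiplexed protocol can frame differently.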
Hence the phrasing: \"The connections within a group are 'entangled' with each other\". However, I agree that in most other places, we should talk about \"groups\" instead. I can try to fix this.\nSounds good - we should add that explanation upfront in the definitions then also?\n[Yes, I think so.] EDIT: no, now I don't think so anymore. (see )\nI have addressed this in PR . Note that the text interchangeably talked about both Connections and Connection Properties as being \"entangled\". I have now written this such that the term \"entanglement\" applies to Connection Properties only, whereas Connections in a group are, well, members of the same Connection Group.\nLooks good to me."} {"_id":"q-en-api-drafts-8823eb843c4119aa54bec9d58acae8e6a86a7b3630d8a0f8d9f290596a134434","text":"The text says \"…\" What does disabling mean? Does this mean that it is valid to set this property to \"Prohibit\" but still set \"safelyReplayable\" to \"Require\" on the PreConnection? Should that not be an error? This is also related to . The use of safelyReplayable for both reliable and unreliable services makes the text a bit confusing. Would it not be sufficient to state that if you do not Require Reliable Data Transfer you have to be prepared to deal with duplication?\nWe should remove this sentence in 4.2.5:"} {"_id":"q-en-api-drafts-c3ecd0fd8ff7a2998ad0f6e6bc8914f8a28d2b3f06dc037d9c8900836eca0772","text":"For other parameters you have to pass in an empty object if you do not want to set anything, but not for SecurityParameters.
Unclear why the parameters are treated differently?\nMake security parameters non-optional Remove \"Transport Properties MUST always be specified while security parameters are OPTIONAL.\" Define that there is an implementation-specific security parameter to disable security\nLGTM, thanks!"} {"_id":"q-en-api-drafts-7879c35ccb9995ef783b420ea93a7d8117668296bd2b7fd462719cedbf6d6403","text":"This tries to write something small but specific about Network Policies\nLGTM\nIn Sec 2: Should Section 3.2 also talk about network policies, e.g. a certain network supports a DSCP that the application desires?\nAs discussed in Feb 2021 interim, this seems like a rabbithole. Gorry will figure out whether it is and write language if not, or close if so.\nI'd like to add a couple of lines of text here to bullet 2 that discusses how you might learn that an attached network supports certain features - e.g. through a PvD, routing advertisement, or local discovery method, and suggest that this could enable a system to know which paths have specific characteristics.\nI propose this as a starter: When additional information (such as Provisioning Domain (PvD) information {{RFC7556}}) is provided about the networks over which an endpoint can operate, this can inform the selection between alternate network paths. Such information includes network segment PMTU, set of supported DSCPs, expected usage, cost, etc. The usage of this information by the Transport Services API is generally independent of the specific mechanism/protocol that was used to receive the information (e.g. zero-conf, DHCP, or IPv6 RA). Thoughts... 
?\nTried to create a PR to add some policy to the path selection\nI think the policy is separate from the API.\nLGTM"} {"_id":"q-en-api-drafts-9a3ecb472be58d7ee18fb10a7d07ce6d1f3e5760f5ece79eda1798f19587bae7","text":"Applied requested changes\nIn 3.2 it is clear that you can set Message Properties on Connections and Preconnections, but the rest of the document is not very clear about this. In 4.2 for instance the text only talks about setting Selection Properties and Connection Properties on PreConnections. In 7.1.3 the text says "Connection Properties describe the default behavior for all Messages on a Connection". But I think that Message Properties set on a Connection or PreConnection may also describe the default behavior so this relationship should be clarified. Also, the Message Property defaults given for Properties that do not depend on other Properties sets the defaults for all Messages in a Connection I assume? I also assume that a Message Property set on a Message Context overrides a Message Property set on a PreConnection or Connection, but this is also not specified. In 5.4. Connection Groups, I assume that setting Message Properties on a Connection applies to all Connections in the Connection Group? This is not clear.\nIn general, I agree. If you set a message property dynamically on a connection, though, I don't think it would affect the group—I could have one stream in a multiplexed protocol have separate behavior, such as priority.\n+1 to tfpauly. But this is already written in 5.4, maybe you missed it (Anna)? "Message Properties are also not entangled. For example, changing Lifetime (see Section 7.1.3.1) of a Message will only affect a single Message on a single Connection."\nI am fine with an update of a Message Property to a connection applying only to that connection. Priority is not a good example as it is already a special case, but my main issue was that we need to be more clear about the behavior in the specification.
NAME the issue is about setting a Message Property on a Connection. The text you refer to talks about setting Message Properties on a message. You could interpret the first part as a general statement, but the way the text is structured it is easy to think it refers to setting Message Properties on a message, so I think we should be explicit also about setting Message Properties on a Connection. Could be a second example.\nSo after looking at the text again, I think the example talking about setting Message Properties on a message should be removed, see comment in the pull request.\nThanks!!"} {"_id":"q-en-api-drafts-95719253b38fc8d19dbaceb0ff88149b6824380fc0feffaa4309ecbd2b0b974b","text":"Explain the different use-cases on specifying interfaces to\nThere's something obvious I'm not seeing: if I want to specify a multicast or link-local address for an endpoint, why not simply specify that address? What's the use of adding the interface into it? Some addresses will inherently limit interfaces; I thought interface preference was about preferring wifi to 4G, etc.\nLike, in section 6.1.1:\nMulticast groups can be scoped (see RFC7346), thus, may only be available on certain interfaces or even have different meanings based on the interface. So for Interface- and Link-Local scope, specifying the interface is strictly required. For other use cases (like coping with non-unique use of RFC1918 addresses) this may also be helpful.\nWhy does the API have both the ability to say LocalSpecifier.WithInterface and to set interface requirements on TransportProperties? Why not just keep the latter and remove the former?\nThere's history here: LocalSpecifier.WithInterface predates explicitly setting the interface name in interface requirements. There might be reasons to keep this: LocalSpecifier is more likely to show up in application UI than in transport properties. Any text proposed to keep both would need to have an unambiguous way to resolve conflicts between the two.
This might be a source of hard-to-debug errors.\nTo discuss next interim"} {"_id":"q-en-api-drafts-1b9dce729795c271cd1221f28cc5d49a13863796b518792711076a93583c301f","text":"Sections 4.1 and 4.1.3 introduce to configure an Endpoint Object. The address supplied in examples appears to be an address literal. However, Section 1.1 does not define any types explicitly associated with an internet address -- or could work, but I don't think that's the intent for these APIs, either. Section 3.2.2 does not extend this. It's unclear whether this literal is intended to support IPv6 scoped addresses as described in [RFC4007]. While this RFC describes a textual representation (possibly not in line with the goals of the API), [RFC6874] describes how to represent zone indexes in an address literal. If zone indexes are not intended to be supported, this could be clarified by, when \"IPv4 and IPv6 address\" is mentioned in section 4.1, also citing [RFC791] and [RFC4007] with the respective address family. If zone indexes are intended to be supported, ambiguity exists around the use of and on a Local Endpoint, where the supplied address has a zone index that does not match the supplied interface. This could be clarified by: Defining type names that represent IPv4 and IPv6 addresses in section 1.1. Specifying the properties of an address literal supported by that type (referencing [RFC791] for IPv4 and both [RFC4007] and [RFC6874] for IPv6). Clarifying in section 4.1 that \"IPv4 or IPv6 address\" are of the respective type described in section 1.1 when used in or . Clarifying the behavior of zone index and interface choices on a Local Endpoint in a new section 4.1.4. This may tie in to to some extent in that this is unlikely to be discoverable until is called. [RFC791]: URL [RFC4007]: URL [RFC6874]: URL\n(note this is in §6.1/6.1.3 post-)\nMention that the %en0 addition to the IPv6 address is equivalent to explicitly setting WithInterface. 
Also mention that these are implied to be system types for addresses\nLooks sane"} {"_id":"q-en-api-drafts-95a39908888afd61070bd5e76fbd34e4c061f5dd67f3d4296e747d9c40a03e6d","text":"I think there's a need for more discussion and clarity around the restriction that \"An Endpoint cannot have multiple identifiers of a same type set.\" Why not? You effectively are letting the endpoint have (potentially) multiple IP addresses when you create it with an identifier that is a DNS name. This feels like an artificial, inconsistent, restriction given the rest of the document (look at the 3rd paragraph of 6.1).\nI think the right thing to do is to delete \"An Endpoint cannot have multiple identifiers of a same type set. That is, an endpoint cannot have two IP addresses specified. Two separate IP addresses are represented as two Endpoint Objects.\"; if you agree, please do so.\nI don't agree that we should delete this. This has been a quite important point in our experience with the API. To make the system work well, resolution should be handled by the TAPS system. If you want to connect by a hostname, provide the hostname—don't try to cache the addresses (often incorrectly) and just pass the same set of addresses again. Note that this doesn't stop multiple addresses from being added to a preconnection. We have this example: Just, one endpoint is a single identifier. You can pass multiple if you want multiple.\nNAME\nDoes this mean the API can't distinguish a remote endpoint that's known to be a single interface with two IP addresses assigned, which would have to be passed as two objects, from two remote endpoints each with a single interface? There's no way of saying \"these two IP addresses are known to be the same remote interface\"? 
If so, does this matter?"} {"_id":"q-en-api-e7ba696dc0f978d059d004dffd6632804908dcfca9332311f98f7dd0c98ec160","text":"Satisfies\nlooks good.\nShould we hold the addition of this until we have something that depends on it being in the API?\nWe certainly can, although if we want to use a key as discussed in the WG meeting for the purposes of ICMP validation, having a way to add a key here is nice for removing a bootstrapping step. Hopefully we can get an updated ICMP document out with the API document, and they can reference one another in this way.\nDiscussed in London, remove this.\naccepted"} {"_id":"q-en-api-2fcd6b82c58e001e659eac0ce096ff66a918d986f45806f92d562e737b5a659d","text":"Add key for extending sessions. Implements .\nI want to be clear that there is no extend-session API here. It's just an indication that the user could do that somehow, in a browser.\nNAME correct, there's no specific API to do so. It just indicates if there's a possibility of the user doing so. Future mechanisms for doing so automatically are left for the future =)\nBoolean value to add. Suggested names have been and .\nLGTM"} {"_id":"q-en-bundled-responses-6659d95f5271c9e4823416a36e53201960ba12eb78611f2a85747acc7b1eb5b6","text":"This PR removes the manifest section. The motivation is to factor the "core" Web Bundles specification to include just the minimal amount needed to support cases like subresource loading, which does not need a manifest. This and other removed sections can be defined in other documents in the future. Adapted from URL\nWe might want to make sure we understand the impact of removing the manifest section from the bundle format on Chromium. NAME NAME Does Chrome have any usages of the manifest section?
It seems doesn't use it, however, I am wondering whether there is an existing use case for that that I am not aware of.\nI never really understood the motivation of this section, since the primary URL response can contain a header pointing to the manifest.\nImagine the situation of application distribution: it is very useful to have a manifest section. For example, we can display the application icon without parsing the HTML of the primary URL. But I agree that the manifest section can be outside the scope of the "core" bundle specification. So removing it from the core spec looks good to me.\nLGTM (just for the record). Thanks for working on this."} {"_id":"q-en-capport-wg-architecture-658dbb9aa1626048695481c94064ea77177065685dd87cf35ac8650883f6d5c8","text":"We never defined what RA was. Do so by expanding it to Router Advertisement in the abstract.\nNote: it's still not clear to me whether we need to cite the RA/DHCP/etc RFCs, or whether they're considered 'well-known' enough to be left alone. Or, is that a question for the rfc editors?\nI'm comfortable with taking DHCP and Router Advertisements as assumed knowledge. Both are on the .\nWe never define what an 'RA' is, but we use that term in section 4.1. We should consider expanding it to the full term (Router Advertisement). We may also want to add an informative reference to it when we first mention it. Note that we have the same problem with DHCP here -- I'm not sure offhand what the best practice is regarding terms that have fairly wide use, but may not be understood by everyone (and even what that set of terms is).\nRA is listed in the abbreviations document: URL but of course "expand on first use" seems like sound editorial practice."} {"_id":"q-en-capport-wg-architecture-ab987f38948654c75b4fd7ebb91ef1271e370a5fc7fabc54549b779cc428b221","text":"If we're using an IP as an identifier, dual stack can add complexity. Mention this.\nRaised during review that dual-stack is a use-case worth calling out explicitly.
Some suggested text: "Further attention should be paid to a device using dual-stack [rfc4213]: it could have both an IPv4 and an IPv6 address at the same time. There could be no properties in common between the two addresses, meaning that some form of mapping solution could be required to form a single identity from the two addresses.""} {"_id":"q-en-capport-wg-architecture-ef1fb5e881f0d81563cacece8a214300eb5a52913c2e4cc4685f3ef94e829192","text":"Whether or not the user visits the User Portal is optional: the user could choose not to, or the Captive Portal may not provide one. Clarify that the case presented in the component diagram is the one where the User Portal is present, and where the user chooses to visit it."} {"_id":"q-en-capport-wg-architecture-18958366e9c15061e0eff49aa1246b524041557ca209bb416464acd393b84928","text":"A reviewer pointed out that we used notifying, informing, signalling, and alerting to mean the same thing: sending the Captive Portal Signal. I've chosen to use signal as the verb of choice. The phrasing seems a bit awkward in a few places, so I'm not 100% confident in my choice here.\nAlvaro writes, Let's make this consistent across the entire document. Doesn't have to be alert, but should be consistent.\nAre we using User Equipment or end-user device?"} {"_id":"q-en-capport-wg-architecture-969294612ec558dc6397d4f71191c2164a35ced30c562855230bb52d118c90d8","text":"The abstract seems weak. It should mention the problem domain and all of the components. Not sure if it needs to mention PvD at all.\nPersonally, I think that aside from the PvD mention, which can be removed, the current is mostly OK.
Brief is good."} {"_id":"q-en-capport-wg-architecture-bbf38e460f61f113a3a126cf77228a04e4f1b74a10aea2d20950f3bbe54be2e1","text":"Just realized that this section may also apply to the case of initial provisioning of the Portal URI if multiple ways are used, such as: DHCPv4/DHCPv6/RA could be used at the same time, and they return different values; to quote 7710bis: However, if the URIs learned are not in fact all identical the captive device MUST prioritize URIs learned from network provisioning or configuration mechanisms before all other URIs. Specifically, URIs learned via any of the options in Section 2 should take precedence over any URI learned via some other mechanism, such as a redirect. Shall we mention anything on the initial provisioning case too here? Or will that only be in 7710bis?\nProbably not a lot, except that future requests might go there. Because we don't hold any state for these resources, that's probably OK.\nNAME do you think that you could take a swing at this?\nBy portal URI, you meant the URL to the json file? If so, it seems the ways it could be changed are: In the case of DHCPv4/v6, in a renewal/rebind of a DHCPv4/v6 lease, the end user equipment could receive a new portal URI in the DHCP renew/rebind response. In the case of RA, the end user device may receive a new portal URI in a new RA message. (In the RA case, is it possible that there are multiple RA servers sending out different portal URIs?) Are there any other possibilities? Do I understand correctly that you are suggesting adding something like the following? If a new Portal URI is provided via DHCPv4/v6 renew/rebind or RA, use the new one. Where shall we add this section, please? A new paragraph under 2.2.1. DHCP or Router Advertisements? Or a new subsection 4.3 under 4 Solution Workflow?
The RA case is interesting in the sense that it might imply a different router advertising different coordinates, as you say, but if it is the same router, then that should be regarded as a change. This came up in discussion and some people felt that it needed to be acknowledged. Your suggested outcome is probably the only sensible one: if you need to contact the API, use the most recent value you have. As for location in the document, a new Section 4.x describing how updated details can be acquired seems right.\nFixed in\nThanks for doing this. I have a few small suggestions.\nLooks good. Thanks!"} {"_id":"q-en-capport-wg-architecture-93ab77d80b7036706a9f2f8a126156027806ab54a3c18714722cbe1abf8cd751","text":"This rewrites the context-free URI section to give more context. It also moves the section out from underneath the example identifiers section, and into a proper sub-section of the top level "User Equipment Identity" section. Closes issue\nNAME , I mostly took your text word-for-word. Thanks for it!\nA reviewer writes, I think this is trying to say the server should not be looking at Ethernet addresses, for example, because the server is probably not on the same subnet as the User Equipment. So the info needs to be in the URL. Figure out what we're trying to say, and whether this is the right section for it.\nThat's exactly what it's talking about. That said, while it is related to identifying the UE, it's not an example identifier. It has been a while since I thought about this, and I wasn't at the meeting where it was discussed, but I think the intention here was to limit any URIs provided in the API to these context-free URIs. That is, the resource pointed to by the URI cannot depend on information not present in the URI. For example, the API draft (URL) provides a 'user-portal-url'. Suppose that URL is "captive-URL". That resource could be implemented by checking the source ethernet address of the connection to see if you're already logged in.
But, that won't be correct if you're not in the same subnet. Instead, the URL should be something like: \"captive-URL\" So, this is more of a constraint on the API and how it uses identifiers than an example of an identifier. Did we just get the indentation wrong? Or was this a misunderstanding of what the working group requested?\nMaybe all this needs is a preface explaining how some systems operate. The last two paragraphs are a direct copy of existing text.\nHardly surprising that I think this is OK :)"} {"_id":"q-en-capport-wg-architecture-9dada0de0daa808ca7a948d10ad4a897c5168b22ac2b48d7a30e133f86734c3c","text":"Everyone was confused by the phrasing that the API needed to provide evidence it supported the architecture. Since it's a proper component of the architecture, it doesn't really need to; that is assumed. Rather, it needs to inform the client of versions/types/etc so that the client can determine whether it supports the API. Rephrase accordingly. Fixes issue\nThanks again for the suggested text, NAME\nA reviewer says, Suggestion: state that a version of the protocol should be included in the API response.\nIsn't the content-type of the API response sufficient indication? That implies that we might need a new content-type if we want to make incompatible changes in a subsequent revision, but I think that is a good plan.\nAt the architecture level, can we state the point more clearly? Maybe better to state what the User Equipment should do if the API response doesn't make sense?\nSo, the fact that we've discovered the API likely means that we're interacting with the architecture -- i.e. we got there from PvD or the dhcp option. So, I'm not sure what it would mean to not support the architecture in that case. It would be a provisioning mistake, I guess? We should suggest what the UE could do in that case, though, for robustness. 
So, maybe mention that the API should have a content-type in the response to indicate that it is indeed a captive portal API, and then what to do if the UE doesn't recognize that?\nSo I think that the reviewer here was simply confused about the choice of words and phrasing. How about: A caller to the API needs to be presented with evidence that the content it is receiving is for a version of the API that it supports. For an HTTP-based interaction, such as in {{API}}, this might be achieved by using a content type that is unique to the protocol."} {"_id":"q-en-certificate-compression-7661445a968875726503f2603802fa0a2863b9f5d7ae946d12f1cd8defbd1efe","text":"State explicitly that we do not define any compression dictionaries. Explain the rationale behind the uncompressed_length field. Addresses comments from John Mattsson on the list.\nActually, I have a small nit."} {"_id":"q-en-certificate-compression-6d7d57a8f9ad445c99f71bd9d9a449af05e7080140d66ba1a8d66396bf8270d1","text":"Parking this here; if we have a consensus on the list to add it, we should merge this."} {"_id":"q-en-coap-tcp-tls-f823ec3f6e07113f31105cfef778f332767aa903fca48f9ff37a4ca633e2adb2","text":"The next pass will add text related to the default URI-Host for TLS SNI cases."} {"_id":"q-en-coap-tcp-tls-efd240c2c38b06fed2c6211767a338fc1553830e362008e558a7c8d833a5975a","text":"I realize that the specific review feedback was for the Introduction - but might we also want to "minimize" the number of WS configurations discussed in Section 4 to focus on the browser motivation?\nWhile the third configuration is not mentioned in the introduction, all three are motivated by in-browser app use cases. So I don't see a need to cut down the discussion in Section 4. Regards, Carsten\n: The introduction motivates the layering of COAP over WebSockets to allow it to be used in a network that only allows external access through an HTTP proxy.
In fact, it is not necessary to use WebSockets to do so; HTTP CONNECT provides a tunnel in this scenario. WebSockets is a specialised protocol that allows bidirectional, stream-based transport from a Web browser -- it is effectively TCP wrapped into the Web browser's security model, to make doing so safe. Using it for other purposes (i.e., non-browser use cases) isn't really appropriate. I'd suggest either removing the WebSockets binding, or changing its motivation in the Introduction. For reference, the original URL motivation:"} {"_id":"q-en-coap-tcp-tls-a2cf8951b6cd9360241cd0e3f14d0bc8924270bf8b8874bb8e5dd777f0069039","text":"1: It is not clear how the protocol reacts to errors from transport layers (e.g. connection failure). Will the protocol just inform apps of the events and let the app decide what to do, or will the protocol itself do something? The WebSockets case is addressed by RFC6455: -and- After review with the authors, we'll append a clarification for CoAP over TCP in its Connection Health section."} {"_id":"q-en-coap-tcp-tls-a4f5e8912cf8bce90ed23561cb8c0764e0bc42ee8533a2640277ea6e7499cd1a","text":"Added tag for updates: Removed Observe references scattered throughout the draft Added appendix to collect clarifications to RFC7641 in one place\nCaptured from : There needs to be a section on how resources can be observed over TCP/TLS. Questions include: Do servers need to include a sequence number in notifications, given that TCP/TLS provides reliable, in-order delivery? How do servers throttle the stream of notifications when the resource changes its state more frequently than the server can send notifications? Does a client have to re-register its interest in a resource if it hasn't received a notification for a while but the connection is still up?\nURL In a conference call with Klaus, Simon, Bill and myself we discussed the issue and the raised questions.
QUESTION 1: Do servers need to include a sequence number in notifications, given that TCP/TLS provides reliable, in-order delivery? The text from the CoAP over Websockets could be re-used. Here is what it could say: Since TCP provides ordered delivery of messages, the mechanism for reordering detection when observing resources [RFC7641] is not needed. The value of the Observe Option in notifications therefore MAY be empty on transmission and MUST be ignored on reception. The Observe Option still has to be sent since it contains other information as well. QUESTION 2: Does a client have to re-register its interest in a resource if it hasn't received a notification for a while but the connection is still up? We believe it is enough to send keep-alive messages for the CoAP over TCP draft (as opposed to sending a re-registration message when used over UDP). QUESTION 3: How do servers throttle the stream of notifications when the resource changes its state more frequently than the server can send notifications? TCP offers a flow control mechanism as a means for the receiver to govern the amount of data sent by the sender. This is functionality that can be re-used.\nURL (1) -- I agree with the proposed solution. (2) -- we need text that empty messages are no-ops and can be used as a keep-alive mechanism. (3) -- this is a problem that most applications that start using reliable streams run into. Early implementations sometimes ignore the need for proper flow control. That is not a reason to add more mechanism (that would also not be in those early implementations), but the solution should be to use the flow control mechanisms we already have. Maybe an LWIG document could have information about flow control support in implementations, e.g., URL\nURL Has someone thought about the following setup, that is, will it work with keep-alives only? There is a cluster behind a load balancer. A client sets up a connection, is assigned to one node and starts observing.
For some reason the node fails and through a fail-over strategy the client is handed over to another node. All further requests are answered transparently by the new node. However, the Observe relationship got lost...\nURL Subissue 1 is covered by the Websockets draft; close this when merged. Subissue 2 is covered by text in the Websockets draft; the failover problem described can be addressed by proper implementation of failover. This may however be costly, so we should have some more discussion about this. Subissue 3 is an implementation quality issue. Just don't implement stream-based without some flow control.\nOn the 7/6 call with the authors, NAME agreed to review and determine whether all issues were addressed.\nThis issue does not include a question about the use of confirmable messages for notifications in to determine client aliveness and interest: A notification can be confirmable or non-confirmable ... An acknowledgement message signals to the server that the client is alive and interested in receiving further notifications; if the server does not receive an acknowledgement in reply to a confirmable notification, it will assume that the client is no longer interested and will eventually remove the associated entry from the list of observers This is inappropriate for CoAP over reliable transports.\nThe phrasing is a bit unfortunate. What is meant is "if the confirmable notification times out". The same would be true in TCP -- if the connection times out, you remove the interests. We could clarify this in a section of CoAP-TCP about the impact of no longer sending back and forth acknowledgements.\nDiscussing the scope of this issue with the authors. This probably needs a more complete treatment rather than small references like: Since the TCP protocol provides ordered delivery of messages, the mechanism for reordering detection when observing resources [RFC7641] is not needed.
The value of the Observe Option in notifications MAY be empty on transmission and MUST be ignored on reception.\nMy reading of the discussion thread gives me the impression that we had closed the initial set of questions. Regarding the subsequently raised issue about the wording in RFC 7641, I am wondering whether any change is needed. We definitely are not going to change RFC 7641. I personally don't think that the highlighted text from RFC7641 will be wrongly implemented. I would suggest closing this issue.\nNAME - this would be - any comments on that pull request?
The old behavior is easy to get by making the context IV have trailing zeros."} {"_id":"q-en-cose-spec-d217840cb387fed0d6159ea2a053797eedb6873f0d7f499c791d2037e87e50d0","text":"Create MAC as separate key operations per issue\nShould we attempt to add the MACverify and MACcreate elements to the key_ops field since we have decided to distinguish between a signature and a MAC - or do we keep the same signature/verify tags that are currently used in the JWK document?"} {"_id":"q-en-cose-spec-5617b04aad226fc780d9e5bd5238a1dc2256de49a5f0bb6a78bf188633f1ee16","text":"MTI Algorithm requirements Expand RSASSA-PSS section Correct one coordinate EC for endian Hint to get binary versions of examples Correct kid to binary in the examples (except for static public key one)."} {"_id":"q-en-data-plane-drafts-d4531563203a37f7d570208da5e346b1df46b1389f0b2a22dbc03804a586f861","text":"Proposed changes: 0, fixing Roman Danyliw comment 1, change introduction to highlight additional functions of data plane and the focus of the draft 2, highlight that implementation of PREOF is out of scope, pointing to related IEEE work (i.e., 802.1CB) 3, fixing the POF section and its configuration via controller plane"} {"_id":"q-en-data-plane-drafts-82dd900a53f1f17e2d9c43757e1fbf0c172b9398156c0ab9b16aba7514fe41f1","text":"ip-over-tsn (nits) mpls-over-tsn (nits) tsn-vpn-over-mpls (nits, ch5.2, ch5.3)"} {"_id":"q-en-data-plane-drafts-9fa1852480ee56c37aff2b999f9303dc57f9792cc5d2290750baf64fdcdd0eb2","text":"Based on 06.17 discussions. F-label(s) added to Fig1 and text fixed. Adding procedures sections. Multiple notes added for clarifications.
Adding controller plane section.\nreviewed in Jun 24 meeting"} {"_id":"q-en-dnssec-chain-extension-900a33e0218a06845f7ec2ef72b72b81740c98b0b01235988e08cca0303b9c7c","text":"Remove statement that the document describes the data structure in sufficient detail for implementers and describe more clearly how individual RRsets and the successive RRSig record set are identified by referencing the relevant sections in RFCs; and remove the in response to Eric Rescorla's DISCUSS point: IMPORTANT: I'm not sure that this is actually sufficient to allow an independent implementation without referring to the other documents. I mean, I think I pretty clearly can't validate this chain from the above. Similarly, although I think this is enough to break apart the RRSETs into RRs, it doesn't tell me how to separate the RRSETs from each other. I think you need to either make this a lot more complete or alternately stop saying it's sufficient."} {"_id":"q-en-documentsigning-eku-c2861e462192a27b75cfdf35f1f5df53cad3d4144e2f0153119e2b72a4e0da45","text":"I reviewed and approved this pull request.\nUpon further reflection it seems like RFC 7299 and 8358 are informative instead of normative references."} {"_id":"q-en-draft-ietf-add-ddr-4f6144652abed222f817c5e74f9a51343122ab6ed7d34c02747353b4fa8eeba5","text":"Changed 2 instances of DEER to DDR\nI was so focused on finding "equivalent" that I missed a couple deer apparently. Thank you!"} {"_id":"q-en-draft-ietf-add-ddr-411fe2a0029a5109cbc237d96d5a74cb16a102e4e701f4bc464f18370f730a58","text":"Addressing issues and to make the SUDN forwarding a SHOULD NOT unless the operator understands and accepts the implications to authenticated discovery.\nThis is exactly what was intended, so we're on the same page.\nMigrated from original DEER issue: URL\nThe Special Use Domain Name (SUDN) URL needs more clarification.
Specifically: Queries against the .arpa nameservers The draft should mention that .arpa is a delegated zone, currently served by the root nameservers. Clients that query a resolver that is unaware of this requirement will \"leak\" queries to the root nameservers. When URL is not present in a resolver configuration Section 6.1 discusses how a forwarder should behave and suggests that results should not be cached other than for Equivalent Encrypted Resolvers under their control. How is that determined? There is no discussion around a non-forwarding resolver's behaviour. Should it return NODATA? Forwarding queries to URL will attempt to attach a client to an upstream resource's equivalent resolver. If that equivalent resolver's cert doesn't include the forwarder, the client must reject the answer. Is it not therefore the case that the forwarder caching behaviour is irrelevant? All valid responses to this query must be equivalent to the forwarder itself. When URL is present in a resolver configuration I would suggest some clarification wording something like this: The expectation is that this record set will detail equivalent resolvers. A client that chooses not to interact with any of these equivalent resolvers SHOULD continue to use the unencrypted resolver, honouring the SVCB TTL. When the SVCB TTL expires, the client may choose to re-query in an attempt to discover newly available equivalents. I would suggest (prefer?) changing the SUDN behaviour so that queries against it are never sent upstream. Resolvers (and forwarders) that do not have a configured URL RRset should return NODATA. 
This would mean that URL SUDN RRsets are always specific to the IP queried.\nIt may be good to model the advice after RFC8880, which defines URL\nI am leaving this open so I can re-review the text after addressing other PRs; PR 14 addressed some of the points raised in this issue but not all of them.\nMigrated from original DEER issue: URL\nThe text says This is not true of any reasonable forwarder. RRsets are atomic collections, and cannot be mixed from multiple sources. Doing so would entirely break DNSSEC, for example. Unless you know of a forwarder that exhibits this seriously broken behavior, I don't think it bears mentioning.\nAddressed with PR 14\nTypo, but this seems like a reasonable tweak. The description on list was a bit worrying regarding IP address, but this just says don't forward unless it is deliberate. And by deliberate, I mean that the forwarder knows it will work or that clients might not care. That seems ok."} {"_id":"q-en-draft-ietf-add-ddr-410063f073b488a4d3b3f4a98cc953ec8b8e69f898a2ae33d0b34e49769bb94c","text":"Three changes Re-ordered statements in the abstract to place the restriction on unencrypted bootstrapping next to the unencrypted statement (instead of after the encrypted bootstrapping statement, which makes the restriction seem like a conflict where it excludes bootstrapping from a name instead of an IP address) Re-ordered security considerations (placed the \"clients can opt to do other things\" statement at the end after the MUSTs/SHOULDs for this doc) s/client/clients for grammatical correctness"} {"_id":"q-en-draft-ietf-add-ddr-ccd7e1174853ca1af325c8fc33fce67e721d77d0d81cc24bbfb30225e1222d60","text":"NAME does this work for you?\nOK, so I said LGTM, but how does a server know that it is such a server? Can't I just publish a random DDR record for 1.1.1.1?\nAs I mentioned in the session, this is partly covered in SVCB-DNS, but perhaps we need a tighter rule. 
I've opened an issue to discuss it: URL I'm not entirely sure which draft this kind of text ought to go in. If you think this logic applies beyond DDR (e.g. in ADoT cases), then it probably should go in the SVCB-DNS draft. Otherwise, maybe DDR is the place.\nNAME for security considerations, I'm OK with multiple drafts having warnings. I think this particular attack should be mentioned in DDR, since other uses of DNS SVCB could be over more trusted channels.\nNAME if your resolver responds to URL, then we expect the address of the resolver I asked to be covered in the cert. So, if I send a packet to 1.1.1.1, 1.1.1.1 has to be covered. So, if you just provide a \"random\" DDR record in response to URL, you can only point to some resolver that has your IP address in its cert, which presumably is sufficient coordination for the resolver to \"know\" it has this set up.\nRight. Good point.\nAs mentioned at the microphone, if we are to allow the query-path to be specified in the clear like this, we need to require that servers not have different responses with different query paths.\nLGTM"} {"_id":"q-en-draft-ietf-add-ddr-aed8c0a21128a907c4162b2165ced910f7b325c9e7b40a8fff1925cfab44bdea","text":"I think that the answer here is that DDR resolution is SVCB-reliant as defined in the SVCB spec. Can you make that explicit please?\nsays It sounds like you're saying that DDR should adopt this without adjustments. That seems reasonable to me.\nRight. I sort of assumed that it was SVCB-reliant, but it doesn't say that. Certainly, this is not a case where any deviation from spec is necessary. (The more of those we have, the better.)\nDDR predates when the reliant and optional terms came about, which is why it wasn’t covered. I’m not clear on why anything would need to be said here. Reliant and optional seem to be modes for the client to block on getting SVCB records or falling back and ignoring them if they don’t arrive. 
Discovery based on IP addresses is done with an SVCB query for URL, and no other query types are allowed for URL By definition, this only works with SVCB. That doesn’t mean that a client can’t be SVCB optional for its general operation — it just means that if SVCB is blocked or fails, you don’t get to discover the encrypted resolver. What do we need to say here that would actually benefit things?\nSo SVCB-optional doesn't even make sense for this specification. I was just looking for a nudge in the form of: \"The resolver/client uses SVCB in a SVCB-reliant mode.\" Just so that there is no confusion.\nYes, optional doesn’t make sense. But I still think that the entire reliant/optional choice is orthogonal to how DDR works. These are the two lines that mention reliant mode in the SVCB spec: Both of these seem to be about client behavior for connection establishment to a hostname. It’s specifically about connecting. It doesn’t make sense to connect to URL; this is a special purpose SVCB query used for discovery, which can lead to a connection. I don’t see any requirement from the SVCB spec that protocols that use SVCB need to define themselves as “reliant” or “optional”. Is there something I’m missing there?\nHow is this not about connecting from the point at which you feed the name into a SVCB query? A DDR client is trying to connect to a (secure) resolver. To that end, it is making a SVCB query. That it uses \"URL\" doesn't change the overall goal.\nUntil a client uses the SVCB query for discovery, it doesn’t know if an encrypted resolver exists that is designated. 
I’m not sure if saying a DDR-aware client is “SVCB-reliant” means that it will not use the unencrypted resolver for application queries until it discovers an encrypted resolver (which is not a requirement, obviously, since clients can be allowed to continue using unencrypted DNS if their local policy allows it), or it means that the connection to the discovered encrypted DNS server is SVCB-reliant, which is tautologically true since the existence of said server was only discovered via SVCB. If it’s the former, we shouldn’t say it because it’s really about client policy for allowing unencrypted DNS. If it’s the latter, I don’t see the value of adding anything, and I’m not sure it really fits the meaning of “reliant”.\nI believe DDR clients' recommended behavior is exactly the behavior recommended in Section 8.2 of SVCB-DNS, quoted above. It is initially SVCB-optional, and becomes SVCB-reliant once SVCB resolution succeeds. Specifically: if you do discover an encrypted resolver, and you attempt connection, you SHOULD NOT fall back to unencrypted if that connection fails. Otherwise, you've added a downgrade vulnerability to attackers on the \"transport path\" (which may not be the same as the bootstrap-query path). As this is the default recommendation for SVCB-DNS, I don't think any additional normative language is strictly required. However, I think a reminder might be appropriate. This does not address the question of whether to wait for SVCB resolution before issuing any initial queries. As a practical matter, I don't think we can require that.\nOkay, I agree with the behavior you describe there of being initially optional and then becoming reliant. I can try to find a place where it makes sense to reference that section."} {"_id":"q-en-draft-ietf-add-ddr-067e4d4c72d70d4001842c759dbca95b070eb66eda6b644d48e8f987e9c72b1e","text":"I have committed all request changes. 
Thank you for the refinements.\nThe IANA Functions Operator has a question about one of the actions requested in the IANA Considerations section of this document. The IANA Functions Operator understands that, upon approval of this document, there are two actions which we must complete. First, IANA understands that this document calls for the addition of \"URL\" to the Special-Use Domain Names (SUDN) registry established by RFC6761. RFC6761 specifically requires that any request for a Special-Use Domain Name needs to contain a subsection of the \"IANA Considerations\" section titled Domain Name Reservation Considerations. The current draft does not have such a section. IANA Question --> Could the authors either conform to the requirements of RFC6761 or provide explicit guidance as to why that section is not appropriate? Second, in the Transport-Independent Locally-Served DNS Zone Registry on the Locally-Served DNS Zones registry page located at: URL a new registration will be made as follows: Zone: URL Description: DNS Resolver Special-Use Domain Reference: [ RFC-to-be ] The IANA Functions Operator understands that these are the only actions required to be completed upon approval of this document. Note: The actions requested in this document will not be completed until the document has been approved for publication as an RFC. This message is meant only to confirm the list of actions that will be performed. For definitions of IANA review states, please see: URL"} {"_id":"q-en-draft-ietf-add-ddr-295e6abcd5432cecb32dd21904c58acb6994b8cf14d71fac3faecae7351d1af6","text":"DISCUSS: CC NAME See my discuss on draft-ietf-add-ddr-11 for my generic ADD DNR/DDR concerns Resolver without some sort of validation Why not? What is the alternative? Using unencrypted DNS to the party that told you how and where to contact these (unvalidated) Designated Resolvers. 
Resolver that does not use a private or local IP address and can be verified using the mechanism described in Section 4.2, it MAY be used on different network connections so long as the subsequent connections over other networks can also be successfully verified using the mechanism described in Section 4.2. This seems to go against a lot of advice in the documents that networks must be treated independently. Also, the network wants you to use their designated resolver, so to say it is okay to use a totally different Designated Resolver seems to violate the whole DNR/DDR architecture. This will also cause failure if there is split-DNS or internal-only data. Also, using \"public IP\" is not a good trustworthy method to prove that these IPs are truly the same Designated Resolver used by both networks. I can easily use an internal only zone with . IN A 8.8.8.8 and get a valid ACME'd certificate with SAN=. The user connecting to another network might now be switching from my private DNS on my stolen 8.8.8.8 IP to the real google DNS. address of the designating Unencrypted Resolver in a subjectAltName extension. How feasible is it to get a certificate with SAN=ip from one of the generally accepted root store CA's? Can you give me one example where I can get a certificate for with SAN=193.110.157.123 ? I do not think this is currently possible or easy. If I am right, that means all of Section 4.2 is wishful thinking and not realistic to get rolled out. (I know we can see the cert for 1.1.1.1, so it is possible, but I know of no ACME supported CA that issues these) discovered Designated Resolver. This creates a strange policy when compared to DNR where if you get a DHCP or RA for the Designated Resolver, which is also not validatable, that you do end up using it? And section 6.5 states DNR trumps DDR. address, clients MAY choose to opportunistically use the Designated Resolver even without this certificate check (Section 4.3). Why only when the IP is the same?
Why not for when the IP is different? What's to lose, since the only fallback option is to stick to using the unencrypted resolver? yields an IP address that was not presented in the Additional Answers section or ipv4hint or ipv6hint fields of the original SVCB query, the connection made to that IP address MUST pass the same TLS certificate checks before being allowed to replace a previously known and validated IP address for the same Designated Resolver name. How does the appearance of an (unsigned) Additional Answer entry convey any kind of real world trust/authentication? In other words, why should it not ALWAYS pass the same TLS certificate checks ? is to use unencrypted DNS, why not just use it anyway? use DDR to discover details about all supported encrypted DNS protocols. It's a little odd to mention this as the DNR draft really tries hard to say to only use the network's offered encrypted DNS servers, so this option is in conflict with the DNR draft. Unfortunately, all currently deployed software does not know this, so it will be very common that this happens. URL SHOULD treat URL as a locally served zone per [RFC6303] Why is this not a MUST ? queries being dropped either maliciously or unintentionally, clients can re-send their SVCB queries periodically. I don't see how this would improve security for the client. It also mixes up behaviour of proper DNS clients that have their own retransmit logic for if they get no answer. attack where an attacker can block connections to the encrypted DNS server, and recommends that clients prevent it by switching to SVCB-reliant behavior once SVCB resolution does succeed. For DDR, this means that once a client discovers a compatible Designated Resolver, it SHOULD NOT use unencrypted DNS until the SVCB record expires I wonder which attacker can block encrypted DNS connections but not spoof (or block!) an unsigned SVCB record to the local client?
And even if that is the case, this would be a denial of service attack if the DNS client cannot fall back to unencrypted DNS. Spoofing an SVCB to a bogus IP with an SVCB TTL of a few hours would be a very cheap 1 packet attack to keep the DNS client down for hours. This seems dangerous to implement. unencrypted DNS MUST NOT provide differentiated behavior based on the HTTP path alone, since an attacker could modify the \"dohpath\" parameter. For example, if a DoH resolver provides provides a filtering service for one URI path, and a non-filtered service for another URI path, [...] It seems likely that this advice will get ignored a lot, eg by people who have limited public IPs to use to spin up a DoH server, or who have to pay-per-IP. This advice seems more appropriate for local private IP DoH servers. So I would change this MUST NOT to SHOULD NOT for that reason. Unencrypted Resolver, clients applying Verified Discovery (Section 4.2) MUST validate that the IP address of the Unencrypted Resolver is covered by the SubjectAlternativeName of the Designated Resolver's TLS certificate. How would one obtain such a certificate via ACME? Since on the IP of the unencrypted resolver, there would be no HTTP server running? Additionally, if the client notices a failed verification of the Designated Resolver's TLS certificate, wouldn't it fall back to the Unencrypted Resolver? But now it may not? So this becomes a denial of service then ? to cases where the Unencrypted Resolver and Designated Resolver have the same IP address. Based on earlier text, this should read \"the same non-public IP address\". you use \"unilateral probing\" (draft-ietf-dprive-unilateral-probing) to just connect to the DoT or DoQ ports of the Unencrypted Resolver and see if you get anything back. It might be unauthenticated TLS but better than port 53? Special-Use Domain Names (SUDN) registry established by [RFC6761]. Why not pick URL As .local is already handled to not be forwarded by DNS forwarders.
It would also not require an addition to SUDN. Even if someone was already using URL, it would not interfere with that usage because that usage would not be using SVCB records. To avoid name lookup deadlock, Designated Resolvers SHOULD follow the guidance in Section 10 of [RFC8484] regarding the avoidance of DNS-based references that block the completion of the TLS handshake. I find that reference lists the issues more than solutions that can be followed. The solution to use another DoH server is not really appropriate in most cases. The client's other interface likely resides on other networks where its private DoH server cannot resolve the name of another network's private DoH server. It is also not easy to get an IP based certificate (eg SAN=ip) that is accepted by general webpki root stores. And things like OCSP stapling don't help if the target is malicious - they just would omit the stapling that proves against its malicious use. The only real advice usable from RFC8484 is \"use the local unencrypted resolver to resolve the encrypted resolver\". Might as well say that and remove the 8484 reference. as the protocol is mentioned throughout the document as interacting with this protocol. when accessing web content on a dual-stack network: A, AAAA, and HTTPS queries. Discovering a Designated Resolver as part of one of these queries, without having to add yet another query, minimizes the total number of queries clients send. I don't understand this paragraph. Once clients can send SVCB queries for web content, they already have had to do all the DDR/DNR related queries, so I don't understand the argument of \"saving queries\" ? Also, it is adding queries for a specific target, eg URL, and this cannot be \"saved\" either? What am I misunderstanding here?
provides provides -> provides\nWe should reference Security Considerations, because if you don't have any validation, an attacker can point you to any server they want (someone other than the server you were configured with by DHCP, etc).\nWe can probably just remove this, not using it in implementations right now.\nIt's already done by the major public resolvers, and there is an RFC to support this (RFC 8738). The IP address validation is clearly important and a matter of WG consensus. Hopefully, this will push adoption of RFC 8738 by more ACME services.\nWe can reference the DNR section. The MUST NOT should only apply to DDR, but doesn't invalidate other ways of using the resolver.\nIf they are different, it can be a random attacker (see security considerations). The client may also have other better options to fall back to.\nAgree, this should be simplified to say \"even if you connect over a different IP address for any reason, still validate the original IP address\"\nThere are definitely cases where the client can have directly selected their own resolver name, etc. We don't have to use the network resolver.\nCorrect, and that's why we need this document, and it was added to the IANA section.\nSure, we can make this a MUST.\nIt makes the attacker's job harder by requiring them to be persistent. Normal retransmission logic is fine too.\nThis is a direct outcome of the svcb-dns document. Spoofing SVCB shouldn't be enough to cause a DoS here, because we should continue to use the previously used encrypted DNS server while on this network (even if you get a bad SVCB packet).\nFor the security reasons, I think this needs to stay as a MUST NOT. However, if you differentiate based on hostname + path, you can still differentiate, it just isn't a differentiation that can be discovered by IP address queries.\nYes, you'd fall back to unencrypted resolution if you can't have a secure trusted connection for encrypted resolution.\nYes, but that part is SHOULD. 
Will fix.\nI think that's just opportunistic DoT / DoQ. That's already specified.\n.local is the SUDN for multicast DNS, but it is also squatted on by some internal networks. Following the patterns for URL, etc, .arpa is a much more appropriate name. This is what the group had consensus on years ago and has been deployed widely already.\nI think we can say \"as explained in RFC 8484, don't loop on the resolver name.\"\nSure, that's fine.\nYeah, this paragraph is old and can be removed.\nLooks good! Just silly editorial nits"} {"_id":"q-en-draft-ietf-add-svcb-dns-2c0ddfcea519d5c6a039686cd7111beee128cbbbcafe315840c584540b10ac85","text":"Closes URL\nAllowing the use of nonstandard port numbers in SVCB for DNS provides valuable operational flexibility, but it also creates a risk related to cross-protocol confusion attacks, in which a malicious server convinces a client to send TCP contents that can be misparsed as a different protocol by a non-TLS victim server. For example, a malicious DNS server could indicate an SNI or session ticket that can be misparsed as a command by a recipient server, and use DNS rebinding to change its own apparent IP address to the victim server. Clients would then send this server-controlled payload to the victim server. This is dangerous if the client is implicitly authorized to control the victim server (e.g. authorization by IP address). Note that only DoT and DoH (over TCP) are amenable to these attacks. QUIC Initial encryption ensures that the packet has no meaning to a recipient that is not QUIC-aware, and any QUIC-aware recipient should not be vulnerable to these attacks. We should consider ways to minimize the impact of these attacks. For example, we could require clients to restrict the value of the parameter to a small fixed list, or refer to the . 
Credit to NAME"} {"_id":"q-en-draft-ietf-dnsop-glue-is-not-optional-e5b483ccd811a8d8faae54190017b6bf9524ad658abd646ce96bec4cd802ffb2","text":"…out presence of glue in zones.\nNo further discussion on list nor here so I will merge this.\nFrom the dnsop list, Ralf Weber suggests the draft could more clearly state that a name server might provide an \"incorrect\" response because it wasn't given necessary glue records in the zone data."} {"_id":"q-en-draft-ietf-dnsop-terminology-bis-45b23aec9752a2349576b21cf4ca0f5d817c2a4ad5d49bd1133604f0c08c32f1","text":"…empt to define \"resolve\" and \"resolution\".\nThis issue concerns consistency and definition of some terms in the document. \"Name server\", \"name server\" and, perhaps, \"server\" seem to be used interchangeably. None of those terms have a formal definition in the document. The terms are also used in conjunction with various modifiers like \"authoritative server\", \"DNS server\", etc. While all those terms are likely pretty well-known, for the purposes of this document, in my opinion, it would be helpful to give a generic definition of \"server\", and how it is then combined with modifiers to form terminology for specific types of servers.
Based on the definition of \"server\", it would be helpful to include a definition for \"name server\", \"name server\" or \"DNS server\" (pick one) and then use that term uniformly throughout the document.\nAdded a note about this in URL\nFixed in\nThese are common terms that are used in contexts that are not always associated with the resolver itself, so they should have stand-alone definitions.\nFirst attempt at definition in URL"} {"_id":"q-en-draft-ietf-dnssd-srp-b4ebdeb3c00c06745a21c9bbe8eb2d918811cf617374528ce210c7f4875cdb09","text":"This is part of the previous pull request, but I missed it because it was interleaved with some unrelated changes."} {"_id":"q-en-draft-ietf-dnssd-srp-19ecea136cf34e7f40598026ee66fdf7dff48601bfa02fd0ba7856ef8f5d5320","text":"The set of constraints for the host description includes a constrain that A and AAAA records must be of sufficient scope. This is asking a bit much of the SRP server, which may or may not be able to make this determination. There may also be cases where an SRP registration with an address of less than global scope is useful. Consequently, this constraint is removed, and text added to talk about how servers should deal with addresses of less-than-global scope."} {"_id":"q-en-draft-ietf-dnssd-srp-3c41bedb984657511d2abc95fcff28ab8f75d953f375d3aa53f58a20da9fe35c","text":"Update the draft number, date, and change draft-sekar-ul to draft-ietf-dnssd-update-lease."} {"_id":"q-en-draft-ietf-doh-dns-over-https-d459369d833135bb016aebd0ea7f2ecd61637901689d02831c1b42ef69f06d0b","text":"I think that we should specify behaviour for truncated responses with the TC bit more clearly. Right now the text is a little vague. 
There seem to be four options: (1) mandate the generation of an HTTP error, (2) mandate the forwarding of the truncated response, (3) explicitly allow either, (4) forbid the use of truncated responses and recommend fallback to TCP if the DNS API server is just making DNS requests to another server. I think that the draft currently vacillates in the direction of 3, but I tend to like 1 or 4.\nis the right answer, so the text should just be stronger about it. If DOH requires or , that means that the DNS server code will need to be heavily reworked to capture these from going out, which seems pointless. DOH clients can act like normal stubs: either they do or don't know about truncation and TC.\nI agree - I don't think doh needs to behave differently here than any other datagram based client - i.e. it's not a transport error. NAME can you do a pr to clarify?\nWill do."} {"_id":"q-en-draft-ietf-doh-dns-over-https-373d1c46d59457c7f4eed5cdb925ae26390299eb9763dbfd83040196a340ebe7","text":"This text is great justification to include in a charter, but this is a protocol specification and it really doesn't add much.\nIt seemed like the WG wanted the document to be shorter and more specific, so this feels good."} {"_id":"q-en-draft-ietf-doh-dns-over-https-e84cdd10af357f1a6788d78ac31823eaddc5ef82fc201340591f72ad51fbf43a","text":"… but keep the requirement for future formats.\nAttempts to\nAs input to the process, this was extremely valuable, but now that this is in WGLC, we're done with it. Does the audience of the ultimate RFC need this information? The only thing that I can see that is worth salvaging is a statement to the effect that DNS64 isn't explicitly supported.\nYes, we still need it. We have not gone to IETF Last Call and, despite what some people think, the IETF gets to change a protocol significantly during IETF Last Call. But even after that, once it is an RFC, others will want to update the RFC.
It is important to keep the requirements for them as well.\nActually, we probably don't need it in the RFC as long as the one future requirement (for formats) is still there. Please see PR"} {"_id":"q-en-draft-ietf-doh-dns-over-https-c68589fdadcb6bedbe989e8003945452ca46cfcfa6cc6f5f3df8b4c0343bb298","text":"…raft-ietf-doh-dns-over-https/commit/3f74bd6e0760f96717ffd3e667a37e13b45839c4#commitcomment-28865666 and URL that this wording is much clearer.\nSara, making this pr was on my todo list - thanks for the time saver. Paul sent me an IM about it, so I'll ga and merge."} {"_id":"q-en-draft-ietf-doh-dns-over-https-fdb58a792cd80b531f12de84f6e8b8f1ea3408564a53c1182012f3927bab9ad0","text":"this paragraph is true.. but its a tad confusing and I don't think really describes anything germane to doh.\nNAME WFM to just remove this text"} {"_id":"q-en-draft-ietf-doh-dns-over-https-be29419b942129087a4e1ddc5436aaffe0e95df772b2e3b37b1c9c5899e02019","text":"Is there a reason not to make a recommendation for the case of a DOH-only service? The current text says: Implementations of DoH clients and servers need to consider the benefit and privacy impact of all these features, and their deployment context, when deciding whether or not to enable them. Would you consider a recommendation like \"For DOH clients which do not intermingle DOH requests with other HTTP suppression of these headers and other potentially identifying headers is an appropriate data minimization strategy.\"?\nNAME That could be added, but it would need a lot more text because your proposed text only deals with one layer (HTTP) while the text above it also talks about fingerprinting with TLS, TCP, and IP (in reverse order). The WG can decide whether adding recommendations for just one layer is worthwhile or possibly misleading about the overall privacy.\nNAME It was only an example, and I would be happy to expand. 
That said, I think this is the most likely source of leakage: folks using a general purpose HTTP subsystem that carries information not needed for DOH requests. Actually recommending data minimization there, rather than saying \"it should be considered\" seems like a worthwhile thing to do in my personal opinion.\nAgreed with Ted (on the list) that this conversation should be on the list. Locking it here.\nI have unlocked the comments so that people can propose specific wording changes as diffs in the PR itself; most questions and suggestions should be done on the list."} {"_id":"q-en-draft-ietf-doh-dns-over-https-21c0aff5795314314cfe7fd3055629adbaae37b0430d38066109eff761913203","text":"This text is rather subjective without data. Changing it to \"may\" still brings the point without needing a reference to a study proving the sentence true.\n(whoops; just realized I've been blowing my pull requests by chaining them off a single branch... my bad, and this will cause you some pain)"} {"_id":"q-en-draft-ietf-doh-dns-over-https-484e643d293fee6461bc2861186ef1d09c5dedd1527e26c443d9e73945943114","text":"this is a little longer than I had hoped given that bootstrap deadlock is not a new problem for dns introduced by doh.. but the https validation angle is worth mentioning.\nIf DOH uses https under the hood, how does this https host name get resolved? I would have expected this to be mentioned in the RFC, though I didn't see any mention of it, besides OCSP Stapling. Is it assumed that DOH would need to be bootstrapped by traditional DNS? Or is it assumed that a DOH endpoint would have a TLS/SSL certificate for its ip address? (Apparently a certificate for an IP address is possible in some cases, though hard to get.) I assume DOH is not intended to be used by home NAT routers? (How would the router get a signed TLS certificate?
Maybe a DHCP response could say: use this DOH url and trust this self-signed cert for its ip address (though a certificate might be too big for a DHCP response). Or, in cases where no local DNS services are needed, the DHCP response could maybe say: use this public DOH url, here's the ip address for that hostname)\nThe document doesn't describe how an HTTPS client authenticates HTTPS... that's on purpose in order to simply leverage HTTPS without surprises. as a practical matter a number of things can be done. One is TOFU as you suggest. Another is an IP based certificate which you also suggest. Another would be to lookup the name via another, perhaps less trusted resolver, knowing that HTTPS means the IP address isn't part of the security scheme anyhow - it's just a route.. subject to blocking, but not to impersonation. Another would be to bootstrap the client with a name and an address instead of one or the other. but as a general matter I think this is one step beyond the scope of the document as it's basically writing down the rules for HTTPS.\nThis issue semi-duplicates\nI think this should at least be called out as an operational concern. e.g. mentioning that you probably need an alternate resolver to find the DOH server, after which a paranoid implementation can re-query for the DOH's address as an integrity check.\nThis is the direction I have taken for . When opening a connection to the remote DOH server, if an ip was provided, it will just use that; if not, it will try to resolve using the local stub, but that may not work.... Just like an IP has always been provided to bootstrap which nameserver to use, here we need to also be provided a name so we can validate that the server we are talking to is the one we trust."} {"_id":"q-en-draft-ietf-emu-eap-tls13-a19d607bd7cf8aab270053ef0e641a360110e33c220bb1382358e74471763c13","text":"Update the draft to use the same spelling as TLS RFC uses.
Relates to issue\nWhile reading draft-ietf-emu-eap-tls13-15 I noticed that section now uses which is the name used in but not in RFC 8446 itself. I first thought this was a typo, before a further look showed that this change was mentioned in IETF 110 . Suggestion: make clear which parts need to be looked up from RFC 8446bis. In addition to exporter terminology, the meeting slides above also discuss usercancelled behavior on slide 10. For clarity, any updates from RFC 8446bis should be marked clearly.\nIt would be best not to have normative references that block on RFC 8446bis. Perhaps we can just reference that it is renamed from \"exportermastersecret\"\nSounds good. That should make clear where it comes from and that there's no need to file errata about unknown term and/or RFC 8446bis dependency. Taking another look at my comment about usercancelled behavior on slide 10; to clarify what I meant: The slides hint that clarifications in 8446bis are needed by EAP-TLS too. If so, this looks like a reference to RFC 8446bis too. Finally, one nit: the alert name in RFC 8446 is \"usercanceled\", not \"user_cancelled\" as in draft -15.\nI made a PR with explaining that the document uses the terminology from RFC8446bis. URL Do we need to say anything about usercancelled? RFC8446bis might do several clarifications and corrections. I think we should only mention things that are directly relevant for EAP-TLS 1.3 URL Current plan for RFC8446bis seems to be to ignore closenotify and I assume require close_notify after. This does not seem directly relevant for EAP-TLS 1.3. Or?
It seems to be a TLSv1.3 specification and implementation level detail as opposed to something that EAP-TLS 1.3 needs to directly react on.\nThis looks good to merge"} {"_id":"q-en-draft-ietf-emu-eap-tls13-adbad78888b0bb8e7fdb3cbbd943ef068cfa14bf17cd9b3d61874bb4b4d1eb4c","text":"Hi! I performed a second AD review on draft-ietf-emu-eap-tls13. This time on -18. Thank you to the authors, the responsible co-chair, and the WG for all of the work to redesign the -13 protocol in response to the initial IESG review in December 2020. My review feedback consists of my own minimal additional comments and highlights of where previously stated IESG ballot items do not appear to be responded to or addressed. All items appear to be minor. ** Section 2.4. Previously, this text referenced TLS 1.3 or higher necessitating more ambiguous reference to version numbers. However, now the text only notes TLS 1.3. OLD When EAP-TLS is used with TLS version 1.3, the EAP-TLS peers and EAP-TLS servers MUST comply with the compliance requirements (mandatory-to-implement cipher suites, signature algorithms, key exchange algorithms, extensions, etc.) for the TLS version used. For TLS 1.3 the compliance requirements are defined in Section 9 of [RFC8446]. NEW When EAP-TLS is used with TLS version 1.3, the EAP-TLS peers and EAP-TLS servers MUST comply with the compliance requirements (mandatory-to-implement cipher suites, signature algorithms, key exchange algorithms, extensions, etc.) defined in Section 9 of [RFC8446]. ==[ COMMENT (Ben Kaduk) Section 2.1.4 The two paragraphs below replaces the corresponding paragraphs in Section 2.1.3 of [RFC5216] when EAP-TLS is used with TLS 1.3 or higher. The other paragraphs in Section 2.1.3 of [RFC5216] still apply with the exception that SessionID is deprecated. If the EAP-TLS peer authenticates successfully, the EAP-TLS server MUST send an EAP-Request packet with EAP-Type=EAP-TLS containing TLS records conforming to the version of TLS used.
The message flow ends with the EAP-TLS server sending an EAP-Success message. If the EAP-TLS server authenticates successfully, the EAP-TLS peer MUST send an EAP-Response message with EAP-Type=EAP-TLS containing TLS records conforming to the version of TLS used. Just to walk through the various scenarios here; if the last message of the TLS handshake for a given TLS version is sent by the TLS client, that will go in an EAP-Response, so the server is then able to send EAP-Success directly. On the other hand, if the last TLS handshake message is sent by the server, that will have to go in an EAP-TLS EAP-Request, and so the peer will have to send an empty EAP-TLS response in order for the server to be able to send EAP-Success? Do we need to have any text here to handle that empty EAP-TLS EAP-Request case? ==[ COMMENT(Alvaro Retana) Is the intention of the Appendix to provide additional formal updates to rfc5216? That is what it looks like to me, but there's no reference to it from the main text. Please either reference the Appendix when talking about some of the other updates (if appropriate) or specifically include the text somewhere more prominent. [Roman] As proposed above, please provide some forward reference. ==[ COMMENT (Éric Vyncke) I find \"This section updates Section 2.1.1 of [RFC5216].\" a little ambiguous as the 'updated section' is not identified clearly. I.e., as the sections in RFC 5216 are not too long, why not simply providing whole new sections ? [Roman] I'd propose a simple fix by using (approximate phrases such as) either: This section updates Section xxx of [RFC5216] by amending it with the following text. (for the cases where the intent is to keep the existing text in rfc5216 but add this new text) This section updates Section xxx of [RFC5216] by replacing it with the following text.
(for the cases where the intent is to replace an entire section in rfc5216) ==[ COMMENT (Éric Vyncke) None of us are native English speakers, but \"e.g.\" as \"i.e.\" are usually followed by a comma while \"but\" has usually no comma before ;-) [Roman] A simple s/e.g./e.g.,/ and s/i.e./i.e.,/ would catch this. Regards, Roman\n==[ COMMENT (Éric Vyncke) None of us are native English speakers, but \"e.g.\" as \"i.e.\" are usually followed by a comma while \"but\" has usually no comma before ;-) [Roman] A simple s/e.g./e.g.,/ and s/i.e./i.e.,/ would catch this. Eric is correct that in US English i.e. and e.g. are usually followed by a comma. I corrected this in d9f5028 I also ran a word spell check on the document. Word does not suggest removing any comma after \"but\" but rather inserting an additional comma after a \"but\". I think this can be left to the RFC editor. The word spell check caught two other grammar errors (s after plural, no s after singular) RFC 8446 does not use capitalization on post-handshake as that is just a name for a set of messages. I corrected these things in\nI made a new PR\nMade a commit with NEW When EAP-TLS is used with TLS version 1.3, the EAP-TLS peers and EAP-TLS servers MUST comply with the compliance requirements (mandatory-to-implement cipher suites, signature algorithms, key exchange algorithms, extensions, etc.) defined in Section 9 of [RFC8446].\nSection 2.1.4 Changed the text to: The message flow ends with a protected success indication from the EAP-TLS server, followed by an EAP-Response packet of EAP-Type=EAP-TLS and no data from the EAP-TLS peer, followed by EAP-Success from the server.\nI added a forward reference to the appendix in the introduction\nNAME looks like you have got everything covered? are there any comments from Roman which still need to be addressed? should we submit a new version and ask for IETF last call?\nI have tried to do everything. Problem with the updates thing.
The replace or amend strategy does not really work.... [Roman] I'd propose a simple fix by using (approximate phrases such as) either: This section updates Section xxx of [RFC5216] by amending it with the following text. (for the cases where the intent is to keep the existing text in rfc5216 but add this new text) This section updates Section xxx of [RFC5216] by replacing it with the following text. (for the cases where the intent is to replace an entire section in rfc5216)\nNAME We should likely not make too many changes. Maybe we can write something like Amends with the following text. TLS 1.2 specific details in RFC 5216 are replaced. Requirements in the new text take precedence over requirements in the old text.\nThis review was addressed in earlier submissions. closing"} {"_id":"q-en-draft-ietf-jsonpath-base-6f81aacdbe43416544af05b072f99c0f81552b8cb0b1532e582876168992b8d1","text":"Thanks for merging. I had a small fix to the caption of table 1 which is now .\n(This was originally raised in and is now being split out) (Michael Kay) want to confuse the JSONPath spec by going into detailed comparisons at the risk of contradicting the normative text. (Glyn Normington) (Michael Kay) It seemed to be important in 2007, while arguing for something like XPath for JSON. If nowadays the terminology used has changed significantly with XPath 2.0 and 3.0, we had better leave that comparison table 2 out. I am quite passionless here. At the IETF 110 meeting consensus was reached that inclusion of XPath in the Appendix section would be the most appropriate way to reference it.\nClosed in\nGreat improvement! Additions / replacements of high value ... thanks"} {"_id":"q-en-draft-ietf-jsonpath-base-b7e72ef6e4ea83f8bea7a9c34fbdacfd842ef2d70866e12e87d9af7047337fae","text":"Lots of niggly details re blank space. Style consistency, some missing places (assuming we stick with the liberal approach). Allowing \"in\" before or after a newline or tab.
Adding some legend to the only remaining table with XPath content in the main document.\n[ ] define a production , replacing [ ] go through ABNF and make sure blank space is allowed where it needs to be (Do not use the term \"whitespace\" any more.) Task \"T1\" in the document URL\nand\nFrom the document: (whitespace is now generally called blank space, but that doesn't change the work to be done.)\nand\nHow much do we want to reference XPath within the spec? I understand that XPath is the inspiration (and in the tagline that NAME used in his blog post: \"XPath for JSON\"), but I wonder if much of this can be left out of the spec. The argument for this is that we're defining JSONPath. It shouldn't assume any understanding of XPath or any other technology. It's not the job of the specification to convince readers to use it. It's the job of the specification to define the technology. I'm perfectly happy moving this into an external \"Understanding JSONPath\" website at some point.\nI seem to remember that last time we discussed this we agreed to move the introductory comparison with XPath to an appendix, with the possible exception of the overview table (but that could be duplicated).\nThe XPath inclusion was moved into an appendix.\nDo we allow spaces (outside of string literals)? If so, where is whitespace allowed? I think whitespace in query expressions should be allowed. This would fall into the context of the . Maybe inside the union/ selector: Are there any other places where whitespace could be considered useful?\nIt may be convenient to allow spaces between matchers - to allow readable multi-line queries for complex cases.
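The "liberal" blank-space approach discussed above (blank space allowed anywhere except inside string literals) can be sketched roughly as follows. This is only an illustration of the idea; the helper name `strip_blank_space` and the tokenization are invented here and are not taken from the draft's grammar:

```python
import re

# Hypothetical helper: drop blank space (space, tab, CR, LF) from a JSONPath
# query, except inside single- or double-quoted string literals. A sketch of
# the liberal blank-space policy, not the draft's actual ABNF.
_TOKEN = re.compile(
    r"""'(?:\\.|[^'\\])*'"""      # single-quoted string literal
    r"""|"(?:\\.|[^"\\])*\""""    # double-quoted string literal
    r"""|[ \t\r\n]+"""            # blank space (to be discarded)
    r"""|[^'" \t\r\n]+"""         # any other run of characters
)

def strip_blank_space(query: str) -> str:
    out = []
    for m in _TOKEN.finditer(query):
        tok = m.group(0)
        if tok[0] in " \t\r\n":
            continue  # blank space outside string literals is insignificant
        out.append(tok)
    return "".join(out)
```

With such a pass, a multi-line query like `$[ 'a' , 1 ]` collapses to `$['a',1]`, while blank space inside `$['a b']` is preserved.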
Where did we end up allowing whitespace?"} {"_id":"q-en-draft-ietf-jsonpath-base-7445bbe1e58fc447116e857df0fca5daaca10bf696811bd9e7024a71511bcf59","text":"From the meeting at IETF 112 ([1]): Consensus: 'in' won't be used, but it will be easy to add later if there's real demand [1] URL\nGood eyes. Thanks!\nYes, we agreed to remove the 'in' operator as some kind of syntactic sugar. thanks\nThanks! I pushed a small commit removing an \"in\" example."} {"_id":"q-en-draft-ietf-jsonpath-base-964bf5709b0809a98952ded1db05215c49f733a24290c5797274f6aa7ea4a7e6","text":"NAME It would be good to address NAME comments in this PR. I have made some suggestions, but feel free to pick them apart if you are keen to retain \"truthiness\" and \"falsiness\". BTW perhaps I struggle with the latter term because of the existence of some .\nIndeed. It is not that easy to say here what needs to be said, so I'm still in thinking mode. Thanks for the pointer to English slang; of course I was completely unaware. We don't want to use the fooiness terms. Interestingly, the remaining grammar does not seem to have a place where JSON values are implicitly converted to Boolean; this is now all explicit unless I'm missing something. So maybe we don't need this table (and its legend) at all?\nYou're right! The two occurrences of are as follows. which simply tests for non-existence of a value(s) selected by a path and does not depend on the actual values themselves. which negates a boolean expression. Looking at the details of , there's no way it can result in a value which might need to be converted to boolean. So please can we scrap the table and kiss \"fooiness\" goodbye once and for all?\nI think we should first merge it and then delete it, in case we need to resurrect it later.\nWhat about ... which can surely be converted to a negated form. But sometimes someone doesn't want to do that ... like in any programming language. Nevertheless I second kicking fooiness out.
As a minimalist I am always a friend of kicking unneeded things out.\nComparisons already yield Boolean values, as does /.\nWell, then in fact I misunderstood Glyn's comment above ... I agree to remove the conversion table as superfluous."} {"_id":"q-en-draft-ietf-jsonpath-base-152de77850745f04dc83574eaec9e7b2c3bfc6a5b7cdfa171fff0984c5967a84","text":"Also add some general advice from URL See also . Ref: URL\nRecursion is not really the problem, stack depth is (or any other uncontrolled resource usage triggered by queries into deeply nested structures). Regards, Carsten\nNAME Are you comfortable with this now, or would you perhaps like to suggest some other way to address the concerns of URL\nThank you. Looks good and informative to me."} {"_id":"q-en-draft-ietf-jsonpath-base-6d25b71a9afc260aca2ed973a8f4c7f6a63d68126d883fd2868c0c096fdaf90b","text":"See also . Ref: URL Fixes URL\nTo be honest I also considered a comment along the lines of \"couldn't we just say 'standard Boolean math'?\" But (another little-used math degree here) I found it pleasing and, on reflection, possibly beneficial for an implementor.\nAmong other things, it dawns on me that if I were implementing, I'd copy-paste all those lines into a unit test.\nThe proposed operator precedence in the draft is shown below. Precedence Syntax -- -- 5 (...) 4 ! 3 ==, !=, <, <=, >, >=, =~ 2 && 1 || It is assumed that these precedences are motivated by existing languages, but it is also noted that operator precedence is different in different languages such as JavaScript and Python. For example, logical not has low precedence in Python but high precedence in JavaScript and C. The less than/greater than operators have higher precedence than the equality operators in JavaScript and C, but the same precedence in Python. Existing JSONPath implementations that follow these languages for evaluating filter expressions will also have these differences.
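The precedence ordering in the draft table above (unary `!` binding tightest, then `&&`, then `||`, as in C and JavaScript) can be illustrated with a small recursive-descent sketch. The parser below and its `t`/`f` literals are invented here purely to make the grouping visible; it is not code from the draft:

```python
# Minimal sketch of the draft's logical-operator precedence:
# '!' binds tighter than '&&', which binds tighter than '||';
# the binary operators are left-associative. Returns a fully
# parenthesized rendering of the input expression.
def parse(expr: str) -> str:
    tokens = expr.replace("(", " ( ").replace(")", " ) ") \
                 .replace("&&", " && ").replace("||", " || ") \
                 .replace("!", " ! ").split()
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def eat():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return tok

    def primary():
        if peek() == "!":
            eat()
            return "(!" + primary() + ")"
        if peek() == "(":
            eat()
            node = or_expr()
            eat()  # consume ')'
            return node
        return eat()  # a literal such as 't' or 'f'

    def and_expr():  # '&&' binds tighter than '||'
        node = primary()
        while peek() == "&&":
            eat()
            node = "(" + node + "&&" + primary() + ")"
        return node

    def or_expr():  # lowest precedence
        node = and_expr()
        while peek() == "||":
            eat()
            node = "(" + node + "||" + and_expr() + ")"
        return node

    return or_expr()
```

Under these rules `!t&&f||t` groups as `(((!t)&&f)||t)`, which is the same relative ordering that Python's `not`/`and`/`or` and C's `!`/`&&`/`||` both produce, despite their other differences.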
This issue, created by request, collects the comparable operator precedences for Python, JavaScript, Perl, C, and XPath 3.1, to illustrate where they're similar to the draft, and where they differ. Perl is included because, of these languages, it is the only one that has a operator. Note that in Perl, the precedence of the operator is higher than the comparison operators. One comment: the draft operator precedence table should include an additional column for associativity, which is needed to fully describe the operator. The unary logical not operator should be shown as \"right associative\", and the binary operators should be shown as \"left associative\". Precedence Associativity -- -- 7 n/a 6 left to right 5 right to left 4 left to right 3 left to right 2 left to right 1 left to right Source: Precedence Associativity -- -- 9 left 8 left 7 chained 6 chain na 5 left 4 left 3 right 2 left 1 left Source: Remarks: Unary \"not\" is equivalent to \"!\" except for lower precedence. Binary \"and\" and \"or\" are equivalent to && and || except for lower precedence Precedence Associativity -- -- 6 Left to right 5 Right to left 4 Left to right 3 Left to right 2 Left to right 1 Left to right Source: Precedence Syntax -- -- 8 () 7 x[index:index] 6 x[index] 5 x.attribute 4 in, not in, is, is not, <, <=, >, >=, <>, !=, == 3 not x 2 and 1 or Source: Precedence Associativity -- -- 3 NA 2 Either 1 Either Source: Remarks: XPath 3.1 and XQuery don't have a logical not operator, there's a operator, but it's a binary operator that means something different. Instead, XPath 3.1 and XQuery provide the function , whose precedence rules follow the rules for functions.\nSince logical NOT binds more tightly than logical AND which binds more tightly than logical OR in all but one of the above tables and since we don't support comparison of predicates, I'm finding it hard to think of a case where the semantics differ between these tables. Can anyone provide such an example?\nWhich one?
They're all the same.\nOne might ask the question, if one were interested in these things, why is the draft precedence ordering slightly different from all of Python, C, JavaScript and Perl? Why not make it the same as at least one of them? There is no difference in implementation effort.\nI think we are still decoding this statement. Where is the difference?\nThe draft relational and equality/inequality operators are grouped together, like Python, but unlike C, JavaScript, and Perl. The draft logical not binds high, like C and JavaScript, but unlike Python. (Perl being Perl has two logical nots, one that binds high and one that binds low, but are otherwise equivalent.)\nLogical NOT is missing from the XPath 3.1 table.\nXPath 3.1 and XQuery don't have a logical not operator. Instead, they provide the function , whose precedence rules follow the rules for functions (not shown in my table.)\nIt may also be interesting to compare with the . In , like XPath 3.1, Boolean 'NOT' is a , not an operator.\nWe could at least document the associativity.\nAfter discussion, is there anything that needs to be changed in the draft aside from adding associativity specification?\nLGTM"} {"_id":"q-en-draft-ietf-jsonpath-base-40dda90b688357c64a7a9443227bde1e3b49b011615efcef51b4efb188489852","text":"This is probably a bit too wordy, but is a starting point for a sharper definition of the nodelist orders which can be produced by the descendant selector. Ref: URL\nI'm not sure of the context. Please could you point out particular lines. (The usual way to do this is to enter your review by clicking a \"+\" which appears when you hover the mouse pointer just to the left of a line in the \"Files changed\" tab of the PR. You can comment on a block of lines by clicking on the line number of the first line of the block and then shift-clicking on the line number of the last line of the block.) Meta comment: thanks for your review. I'm afraid it's non-binding. 
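The nodelist order discussed for the descendant selector above is usually a depth-first, document-order traversal: each node is visited before its children, object members in their original order, array elements by increasing index. The function below is an invented sketch of that one possible ordering, not the draft's normative definition:

```python
import json

# Sketch: yield (path, value) pairs in depth-first document order,
# one plausible nodelist order for the descendant segment. The
# path-tuple representation is made up for illustration.
def descendants(value, path=("$",)):
    yield path, value                       # visit the node itself first
    if isinstance(value, dict):
        for key, child in value.items():    # members in document order
            yield from descendants(child, path + (key,))
    elif isinstance(value, list):
        for index, child in enumerate(value):
            yield from descendants(child, path + (index,))

doc = json.loads('{"a": {"b": 1}, "c": [2, 3]}')
paths = [p for p, _ in descendants(doc)]
```

For the sample document this visits `$`, `$.a`, `$.a.b`, `$.c`, `$.c[0]`, `$.c[1]`, in that order.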
See on the mailing list.\nI created local comments as requested per NAME Well, let us keep that discussion in that list maybe? Actually, Stefan Hagen's kind approvals are not binding, so I will await your approval, Carsten, or someone else's with the appropriate privileges. (I could give Stefan those privileges, but that seems wrong!) In fact, I'm puzzled over who can approve PRs (in a binding way). The repository settings list four people with direct access to the repo (see the attached screenshot), but then there are cabo, timbray, and probably some other users with binding PR approval privileges. I'm not sure how we arrived at this set of people. Maybe someone will remember... On Tue, 12 Jul 2022 at 20:16, Glyn Normington wrote:"} {"_id":"q-en-draft-ietf-jsonpath-base-1f1defd1eaf85c4b367f555503ade1a15192c592c8446de5ef9fddd4ddbcdbe6","text":"Fixes URL\nThanks for your comments NAME and NAME. I feel I should explain. The current draft defines the orderings and using the same text. For strings, the text talks about \"less than\" but not \"greater than\". So the current text implicitly assumes that for strings and , if and only if . Non-mathematicians would tend to say this is patently obvious. But this PR avoids that assumption by only defining using the same text and then defining in terms of . However, I take the point about the standalone definitions and the arbitrariness of defining one in terms of the other. So my question is: is the current draft text sufficiently precise? If not, I'll rework this PR to make it more precise. (When I started out doing that, it began to get quite wordy, so I changed tack.)\nDespite the fact that Glyn's approach is mathematically clean, I would favour the symmetric approach by defining , ,. I am quite sure that nobody will cry \"redundancy\" then.\nYes. Is that a problem?
comes before ...\nActually, on the symmetry point, we were comfortable defining in terms of and that was asymmetric.\nIn what system of mathematics are operations ordered? This is completely arbitrary. I'm not convinced this needs to change. I prefer defining both and . And for symmetry, I'd change strings to match.\nApplying editorial discretion here. Merging.\nThe current spec defines string comparison using one of the operators , , , or in terms of when one string is less than another. That's fine for , but: for it effectively assumes is fixed (i.e. that for strings and if or ) for and it leaves quite a lot to the reader's imagination. Once we've settled on , the above should be addressed. ~We should also adopt a similar approach for ordered comparisons () of arrays if is adopted.~ (It wasn't.) (I labelled this issue as because I don't believe it will change the intended semantics in either case.)\nI propose to let URL land before addressing this issue.\nI personally prefer the previous version, with standalone definitions, but think this is a matter of editorial discretion.I agree with Tim. It's fine and correct to define in terms of other operators, but I feel and should both be equally defined. Defining one in terms of the other means you have to make an arbitrary choice of which is directly defined and which is indirectly defined.This PR is exactly the right way to do this."} {"_id":"q-en-draft-ietf-jsonpath-base-8cc9c2a299d1690d49d2199dd938f65da53a872827b32f92a235149043972470","text":"I welcome this ... thanks.\nFrom the side line: Do we need to add description of behavior (with a qualifier) to motivate the truly welcome attributes of the ?\nA Normalized Path is just a particular kind of JSONPath query. Its only behaviour is to identify a node within the tree of nodes in a value. Or did you have some other behaviour in mind?\nOh, of course, but that formulation had never occurred to me, and it's useful. 
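The definitional point debated earlier in this thread (pinning down the string ordering once, then deriving the other ordered comparisons from it rather than restating them in prose) can be sketched in a few lines. The function names here are invented for illustration:

```python
# Sketch: once 'lt' (the spec's "less than" ordering on strings) is
# defined, the remaining ordered comparisons follow from it.
def lt(a: str, b: str) -> bool:
    return a < b          # stand-in for the spec's string ordering

def gt(a: str, b: str) -> bool:
    return lt(b, a)       # a > b  iff  b < a

def le(a: str, b: str) -> bool:
    return lt(a, b) or a == b

def ge(a: str, b: str) -> bool:
    return lt(b, a) or a == b
```

Whether the prose should mirror this asymmetric derivation or spell out each operator symmetrically is exactly the editorial question discussed above; the semantics come out the same either way.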
If it's not in the spec, it should be.\nI meant the first of the two sentences: Here we describe the behavior of an implementation (output as response to an unspecified request that triggers the former). I just wonder if this section is a good place to describe a part of a protocol, when the topics are a) the \"projection\" itself and b) potential benefits (all from a data perspective).\nWell, we describe the nature of what it is to implement JSONPath. Generally, in a protocol, you not only need to prescribe what is on the wire, but also what it means. That is only ever in the implementation. That doesn't mean that it describes the behavior of an implementation. E.g., in TCP, the byte stream that is transported by the protocol only ever happens in the implementation. Nothing on the wire reflects this byte stream. But describing that byte stream as the underlying model of TCP is rightly not criticized as \"describing the behavior of an implementation\".\nThe penny has just dropped: you are questioning the need for such a description rather than asking for more of the same! I agree with NAME response.\nI think an important aspect that this is missing is that normalized paths are contexted to the JSON data that was queried; whereas singular paths exist independently, as does any other path. For example, the path (used in the text) is singular and can rightfully exist on its own. However it can't be normalized without querying JSON data. Once queried, the resulting normalized paths can be different based on the data. yields a normalized path of for the value yields a normalized path of for the value .\nThis text isn't getting into implementation behavior at all. It's specifying output and allowing for options.\nWell, unforced ambiguous behavior then. Specifying output and allowing for (unrequested) options. 
For instance, when the SomeTool processor outputs a number 42 and the options are that it represents any base in which the symbols are generally valid (per consensus), that to me would rather raise questions than answer them. I would expect these complications (extra dimensions) less inside a format but more in a \"processor\" role or protocol description. The latter describing expected behavior within the dynamics of some interchange. I used the term without caution. Sorry for that. In my reading of the sentence we are discussing, the consumer of such a result has to look elsewhere to find out what the processor provides as result, right? That is our normal task as tool users. However, I think we had better try not to bring in such ambiguities without noting the reason outside of our mandate to change. Maybe note that existing processors (or however we like to call these providers of jsonpath responses) display mixed behaviors (what you call options). Makes sense? If not, sorry for the noise.\nWell, we do say: I wouldn't go so far as to say that the Normalized Path is \"contexted\". You need the value to compute the Normalized Path from the query, but after that, it doesn't require context to apply it (well, they are meant to be applied to that value, but they are regular JSONPath queries). With IETF 115 raging and Glyn being offline for a couple of days, we'll probably not respond very fast the next few days; I think good points are being made here even if it is not always clear how to modify the text further to fully reflect them.
So I have deleted the following statements, which seemed to be causing confusion: Normalized Paths instead of, or in addition to, the values identified by these paths. For example, the JSONPath expression could select two values and an implementation could output the Normalized Paths and . (I'm not sure whether this will be acceptable to the other reviewers, so I'm going to leave the PR open until they have had a chance to review the latest version.)\nNormalizing paths? What does that mean? There is no concept of \"normalizing\" arbitrary paths, it's pointless, there is no suggestion of that in Goessner's original article, or anywhere else that I'm aware of.\nNAME Not sure if you've seen the final form of this PR, but I'm going to merge it to close off the discussion threads. We can adjust the section later if necessary.\nURL begs the question of whether nodelists should have a normalized form. This would improve the interoperation, predictability, etc. of JSONPath implementations that output nodelists. Without this, JSONPath implementations which output nodelists would likely choose multiple different ways of representing nodelists. The proposal is for a Normalized Nodelist to consist of a possibly empty, whitespace separated list of Normalized Paths as in the following ABNF: Note: this proposal does not provide a normalized form for output from JSONPath implementations that output both nodelists and values.\ngood approach ... I would prefer a comma separated list, being able to be consumed as a JSON array without further modification ...\nohh ... and the URL as JSON strings, i.e.\nI don't think we want to do this. This is not a JSONPath query, but a structure built from that. I'd like to leave structure building to JSON entirely. We are not defining the syntax of JSON, and we should avoid falling into the restatement antipattern trap. So yes, a nodelist might look like , but not because we say so, but because that is JSON syntax. 
We would simply say nodelists are canonically represented as JSON arrays of strings, where the strings are normalized paths.\nyes ... that's even better and pragmatic.\nWhy do they exist? The spec says nothing on the subject. If we don't have something convincing to say, why not remove them? I know we discussed this before, but as I was reading this with fresh eyes I was thinking \"There's a lot of stuff here but why bother?\"\nThe normalized path was to be used in the output. I don't think there was any other use for them. We wanted to ensure that the output was consistent (primarily for testing/comparison purposes), so we devised the \"normalized path\" to ensure that paths were output in a known format. After the refactor, it basically just means that the path doesn't use the shorthand notations; all of the segments are bracketed.
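The "all segments bracketed" idea described above can be sketched as a small rendering helper. The function name, the segment-list input, and the (deliberately minimal) escaping are invented here; the draft's actual escaping rules for name segments are more involved:

```python
# Sketch: render a sequence of member names / array indexes as a
# Normalized Path: every segment in bracket notation, member names in
# single quotes. Escaping here covers only backslash and single quote.
def normalized_path(segments) -> str:
    out = "$"
    for seg in segments:
        if isinstance(seg, int):
            out += "[" + str(seg) + "]"            # index segment
        else:
            escaped = seg.replace("\\", "\\\\").replace("'", "\\'")
            out += "['" + escaped + "']"           # name segment
    return out
```

So the location one might write in shorthand as `$.store.book[3].title` renders in its bracketed normalized form as `$['store']['book'][3]['title']`, and names that shorthand notation cannot express (spaces, quotes, etc.) still have a well-defined rendering.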
A benefit of doing this is that the behaviour of the software is then more predictable and more easily tested (because the tests need to accommodate fewer possibilities). If there are multiple implementations of the software, sharing a single test suite is made more tractable. Who does this benefit? Human or programmatic users of the software observe more predictable behaviour. People writing tests have less work to do. Sharing a test suite avoids people duplicating effort.\nNormalized paths are introduced in Stefan Goessner's , and are implemented in the two most important implementations, Goessner JavaScript JSONPath, and . I assume Stefan Goessner can speak to the original motivation, and if you want to know how they are used in practice, I suggest surveying users, perhaps starting on the Jayway site. Among other uses, they are helpful for diagnostics, for discovering what a perhaps non-obvious query resolves to in terms of the individual nodes selected. They have other uses for selecting specific nodes given a much larger query.
Again, I think this is most apparent in output when the implementation is returning paths to locations instead of / along with the values.\nAnother consideration is the spec's definition of nodelist as the output of applying a query: The following statement in the Normalized Paths section: steps into implementation detail. In the rest of the spec we studiously avoid prescribing implementation details. One way to reconcile these would be to put some non-normative text near the start of the spec which talks about implementation options for nodelist. This could then include language such as \"If a nodelist is represented as a list of JSONPath expressions, these SHOULD be Normalized Paths.\". (But would it make sense to use \"SHOULD\" in non-normative text??)\nI don't think that's implementation detail. It's defining what the output may be. We can talk about this in another issue if it concerns you, though.\nI would say it's a statement about the representation of the abstract nodelist. A list of Normalized Paths is a string representation. Other possible representations are lists of JSON values in string form, a programming language list of nodes, etc. The choice of representation is an implementation decision. Feel free to raise a separate issue if you'd like to change the current text.\nNormalized Paths ... are a concrete string representation of nodelists (JSON text). are already used in the spec within every example table (Result Paths). are meant as an additional / alternative output of a JSONPath query. enhance predictability and testing. might (will) be used as input for subsequent analysis / actions (removing of duplicates?). Think of piping in Unix. Especially with the last point standardizing them is essential.\nCould we include a bit of Stefan's argument somewhere in the introduction to the section?\nDone - see URL\nDefinitely the right direction.
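To make the bracket-only convention concrete, here is an illustrative sketch; `normalized_path` is a hypothetical helper of my own, not an API from the spec, and it omits the spec's escaping rules for quotes and control characters.

```python
# Hypothetical helper: render a node location using only bracket notation,
# which is the form the Normalized Paths discussion above describes.
def normalized_path(segments):
    parts = []
    for seg in segments:
        if isinstance(seg, int):
            parts.append("[%d]" % seg)      # array index segment
        else:
            # Member-name segment, single-quoted; a real implementation
            # would also escape quotes and control characters.
            parts.append("['%s']" % seg)
    return "$" + "".join(parts)

# The shorthand $.store.book[0].title and the normalized form below
# identify the same node in a given JSON value.
print(normalized_path(["store", "book", 0, "title"]))
# → $['store']['book'][0]['title']
```

Because every location can be written this way (unlike dot shorthand, which has naming restrictions), emitting this form makes output predictable and easy to compare across implementations, which is the testing benefit discussed above.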
Thank you!"} {"_id":"q-en-draft-ietf-jsonpath-base-1bfbba7af0cf81fc29dde8d14df37513a5280d669c49578891cf6e8427240ec0","text":"We wanted to add examples, but maybe we can merge this as is. Marking as ready for review.\nLet's get this in and polish it later."} {"_id":"q-en-draft-ietf-jsonpath-base-b36b7ff82a277f72d8a61702e2f64519f1b7ee7ab7ccd2223640afc52e3c8c06","text":"Add the necessary internal references.\nCurrently, the existence test description contradicts the intended use of OptionalBoolean functions. We need to be careful if we want to make a special case for OptionalBoolean functions and not open the door to any boolean value being treated similarly.\n2023-01-10 Interim: Fix text to properly handle OptionalBoolean. Make sure to indicate that an existence test from a path never gets to the actual value.\nCan this issue be elaborated? What is OptionalBoolean? How does this apply to functions? Examples would be great.\nBase 09 explains what OptionalBoolean is and uses it in the match and search functions. The current issue will also add some text to explain the way OptionalBoolean functions behave in filter expressions.\nE.g.,\nI propose a slightly more compact naming of ABNF of - and -segment, which should be more consistent between them two. is well known from LaTeX and means or there.\nabove is wrong (and so it is in the current spec). It needs to read ... to not allow triple dot ( :-)\nNAME Wouldn't dotdot-wildcard etc. be less cryptic?\nHere are some quibbles with the current ABNF (prior to PR being merged) which we can sort out after The Great Renaming:\nYes, yes. See. Fixed in I like the way this looks like now. Maybe I don't understand; can you elaborate? Fixed this (and following) in Removed in To be fixed in overall overhaul, next Yes! To be fixed in overall overhaul, next To be fixed in overall overhaul, next Fixed in Discuss this in its own issue, I'd say. 
Fixed in It certainly does not -- it is the subset that is actually needed for normalized paths. To stay normalized, we don't allow random other ones.\nOf course ... I'm quite dispassionate here.\nPlease see my proposal in\nWith merged, we still have the expression cleanup the separate issue (enhancement) whether should be allowed, to mean\nMy remaining points are covered by PRs , , and .\nShouldn't be named according to all other above ?\nYes, I agree. See URL\nNAME Are we ready to close this issue now the PRs have been merged or did you have more expression cleanup in mind?\nThe only thing that comes to my mind is standardizing on the column where the is -- this is currently rather chaotic. But that can be a new issue.\nIndentation of : number of lines such indented\n(And it doesn't need to be consistent, just a bit less chaotic.)\nLet's make the a new issue. (I haven't got a good feel for the optimal solution.)\nBefore we close this issue, URL slipped through the net in my earlier PRs.\nThen could resolve to either or . We would introduce an ambiguity, we already found with Json Pointer () ...\nOK, with the latest blank space realignments that are part of (5e2cc85), I think we are done with this.\nOk, let's refactor the ABNF later. ... or do we need two selectors here (Ch.2.61) child-longhand = \"[\" S selector 1(S \",\" S selector) S \"]\" given that '1' means 'at least one'."} {"_id":"q-en-draft-ietf-jsonpath-base-d7533354f3ec078f7a7a8fa27ae59a4208efcc48bbce8b5d614d5bf63b31ef20","text":"XPath terminology is a bit weird, because XPath has a somewhat competing standard XQuery... XPath is based on DSSSL, which had a complex expression language for building paths in the XML trees that was based on the Scheme language. (Which is the experience that makes me hesitant just jumping headlong into another expression language...)\nLGTM with that change. 
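On the repetition question raised above: in RFC 5234 ABNF, `1*(...)` means one or more repetitions, while a bare `1(...)` means exactly one, so `selector 1*(S "," S selector)` would require at least two selectors. The zero-or-more form `*(...)` admits a single selector. A rough regex analogue (an illustration of the repetition semantics only, not the spec's actual grammar; the selector pattern is a crude stand-in):

```python
import re

# Regex analogue of: "[" selector *( "," selector ) "]"
# A "selector" is crudely approximated as any run without commas,
# brackets, or whitespace -- illustration only, not the real grammar.
selector = r"[^,\[\]\s]+"
child_longhand = re.compile(r"\[\s*%s(\s*,\s*%s)*\s*\]" % (selector, selector))

print(bool(child_longhand.fullmatch("[a]")))        # single selector allowed
print(bool(child_longhand.fullmatch("[a, b, c]")))  # comma-separated list allowed
print(bool(child_longhand.fullmatch("[]")))         # empty brackets rejected
```

The `*` (zero-or-more) on the comma-prefixed group is what makes a lone selector valid while still requiring at least one selector overall.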
Thanks!"} {"_id":"q-en-draft-ietf-jsonpath-base-429fcb03096bb7e30a4e48323f1444485d15ee01aa6cee54d1903df6620176ce","text":"Sometimes the spec uses \"Singular Query\" and sometimes \"singular query\". We should be consistent. (If we decide to use lower case, it's probably best to address this issue after PR has been merged, since that PR adds some usages of \"Singular Query\".) Also, there is one instance of \"Logical context\", which I think would be better uncapitalised (\"logical context\").\nAfter merging in my changes from main, it appears some Singular Queries still need fixing."} {"_id":"q-en-draft-ietf-jsonpath-base-b181ce1b26de08150f3698e73bd0868a555b466ca512cd919464250a15b7d456","text":"Also, delete a term which is used only in the definition of the next term. Fixes URL\nI only found the two occurrences you suggested changing. If there are others, please let me know!\nSometimes the spec uses \"Singular Query\" and sometimes \"singular query\". We should be consistent. (If we decide to use lower case, it's probably best to address this issue after PR has been merged, since that PR adds some usages of \"Singular Query\".) Also, there is one instance of \"Logical context\", which I think would be better uncapitalised (\"logical context\").\nThanks."} {"_id":"q-en-draft-ietf-jsonpath-base-ec184d74fe999f23454700ef44c9eceaa3fe19ea85dbc156456d83c2b3c2579c","text":"Use parentheses as an indication that something is a function. Also, don't confuse the reader by alluding to potential future alternatives to value(). Reviewers: is this an improvement? It's a draft PR for now until we decide the change is worthwhile.\nI think we'll need to explain the () notation, which should be familiar to C programmers but not necessarily to people who grew up on other languages. 
I also think we have a few more examples that point to hypothetical functions, such as , , .\nI don't think there's a great point at which to explain that notation and, based on your observation, I'm going off the idea.\nNAME Let's see how pans out before taking the current PR any further. Once I remove the function parentheses, the two PRs are changing the same text.\nwould be the place.\nNAME As you suggested, I introduced this in 1.1, but near the start since it's notation rather than terminology. I did a regexp search to find and fix the remaining examples (your list was complete). In doing so, I found the strange markup and removed the backticks since there was no good reason for them.\nWe could do the fun() explanation and completeness search in a separate PR."} {"_id":"q-en-draft-ietf-masque-connect-ip-30df03b90d10a22ea9629753a36c08f400a0120ece06eabbe8a71865d729f326","text":"I'm going to guess that order is not significant. But imagine that an endpoint would only honor a request for N addresses but the requester asks for N+Y. The endpoint has to pick a subset somehow.\nWhat do you mean by \"is the list ... significant\"? I don't understand the comment about subsetting either --- if you ask for N+Y addresses and get back only N, you have N usable addresses (ranges)\nHow does the other end decide the which N to pick? Without declaring if the request order is significant, then there can be no expectations of behaviour and we might find interop problems if assumptions don't align.\nIf you request, say, 4 specific addresses, and the other side can satisfy 3 of the requests because they're still available, it will include those 3. It's implementation-defined it if wants to provide a 4th address that is not the one you requested, or NAK it. However at no point in time was order significant --- you asked for all of these addresses, not any of these addresses.\nFriendly amendment: it may include those 3. 
It could also choose to only assign one, or a different address entirely if the implementation was so inclined.\nYou are pedantically correct :). I was trying to give a specific worked example.\nThis design seems reasonable. It might help to state clearly somewhere that the value of the request ID has no significance other than as a unique handle, and that the ordering of and in a single capsule has no significance. This is in contrast to the ROUTE_ADVERTISEMENT where the ordering of things is very significant.\nThat's a good suggestion Lucas.\nMy intention when writing this was for the order in the lists to not carry semantics, and similarly for the request ID to only exist for unicity. We should spell that out clearly. One thing we could do however, is saying that request IDs in ADDRESS_REQUEST MUST be increasing, to make duplicate detection easier on the receiver. If we do it, it adds a constraint on the sender and means that the receiver will check, while if we don't add the constraint then receivers won't bother to check. Any thoughts on this point?\nI have no strong feeling about the potential to add such rules.\nNAME can you propose text here?"} {"_id":"q-en-draft-ietf-masque-connect-ip-7db94b9c97cc68e364e99fa6f426462ec55c733a5d327a68377d6f43a7e5eb6b","text":"Say for example that A and B have the same IP Version and they have the same or overlapping Start IP Address / End IP Address ranges. A's IP Protocol is zero (the wildcard) whereas B's is non-zero. That causes a weird kind of overlap even though all the requirements are followed. Is it worth disallowing in the spec?\nEarlier in this section, the version is defined as such: I think then this is OK, since zeros aren't allowed in route advertisements. 
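The A/B wildcard situation described above can be sketched in a few lines; the dict fields here are my own simplification of a ROUTE_ADVERTISEMENT IP range, not the wire format.

```python
import ipaddress

# Two ranges overlap when their address intervals intersect AND their
# IP Protocol values can match. Protocol 0 is the wildcard, so it
# intersects every protocol value.
def ranges_overlap(a, b):
    addr_overlap = not (a["end"] < b["start"] or b["end"] < a["start"])
    proto_overlap = (a["proto"] == 0 or b["proto"] == 0
                     or a["proto"] == b["proto"])
    return addr_overlap and proto_overlap

A = {"start": ipaddress.ip_address("192.0.2.0"),
     "end": ipaddress.ip_address("192.0.2.255"), "proto": 0}   # wildcard
B = {"start": ipaddress.ip_address("192.0.2.0"),
     "end": ipaddress.ip_address("192.0.2.255"), "proto": 17}  # UDP

# A is ordered before B (protocol 0 < 17), satisfying the ordering
# requirement, yet the two ranges still overlap.
print(ranges_overlap(A, B))  # → True
```

This is exactly the "weird kind of overlap" the comment describes: the ordering rule is followed, but the wildcard still shadows the more specific entry.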
NAME does that sound OK?\nThe IP Version cannot be zero in the ROUTE_ADVERTISEMENT, but the IP Protocol can: The following ROUTE_ADVERTISEMENT follows the requirements (the IP Protocol of A is lower than the IP Protocol of B), yet there is an overlap:\nOh, I see, I misread. I do think that it makes sense to put the wildcard first — so 0 shows up before others. And then the more specific protocol rules can potentially have different ranges. I still think this is acceptable to leave as-is."} {"_id":"q-en-draft-ietf-masque-connect-ip-8033d028d57615439370d3d4428d6f2e949e5d8b7906984234b288d394d49534","text":"From Vinny Parla on the :\nI don't think we need to make normative changes to the protocol, but adding a note for implementers that they might want to treat extension headers differently than other IP protocols sounds like a good change. Some additional review comments from an internal review that I wanted to share with you. Section 4.6 has a minor issue as the IANA registry mixes IPv6 extension headers with layer-4 headers. It should also specify the use of RFC 5952 for representing IPv6 addresses. However, this can be ignored for now. Section 4.7.1 unsure whether "IP Version" field is useful as it can be derived from "IP Address" length/notation, also not critical but perhaps this would reduce unnecessary information. There is also a minimum 68-byte MTU for IPv4 ;-) - not sure that is important, but wanted to share. The recursive DNS server info (found in RA and DHCP), is not present. Was this an intentional decision or perhaps something that was not considered for some reason? There is no discussion of link-local addresses / ULA (I would expect both ends of the 'tunnel' to have link-local addresses), or link-local multicast (e.g., ff02::1). Or perhaps the IP proxy send an IPv6 RA ? or should be prevented from sending one ? Should IP proxy reply to ff02::2 (all link-local routers), ...
-Vinny"} {"_id":"q-en-draft-ietf-masque-connect-ip-029102dbb7d81d23e9d64a1d130ba8d051fb0d5a67795f23b5d1c5ed8c602f2d","text":"From Vinny Parla on the :\nConnect IP follows the same . We should probably spell that out somewhere. Similarly, we should remind implementers that they may need to treat link-local differently to avoid forwarding it, and that they may need to ensure they properly handle link-local multicast. Using ULAs is deployment-specific so I don't think anything needs to be said here: ULA isn't different from other IPv6 address blocks and we decided early on that addressing schemes are out of scope for this document Regarding RAs, I don't think anything needs to be said in this document since some deployments might want them while others might not Some additional review comments from an internal review that I wanted to share with you. Section 4.6 has a minor issue as the IANA registry mixes IPv6 extension header with layer-4 headers. It should also specify the use of RFC 5952 for representing IPv6 addresses. However, this can be ignored for now. Section 4.7.1 unsure whether \"IP Version\" field is useful as it can be derived from \"IP Address\" length/notation, also not critical but perhaps this would reduce unnecessary information. There is also a minimum 68-byte MTU for IPv4 ;-) - not sure that is important, but wanted to share. The recursive DNS server info (found in RA and DHCP), is not present. Was this an intentional decision or perhaps something that was not considered for some reason? There is no discussion of link-local addresses / ULA (I would expect both end of the 'tunnel' to have link-local addresses), or link-local multicast (e.g., ff02::1). Or perhaps the IP proxy send an IPv6 RA ? or should be prevented from sending one ? Should IP proxy reply to ff02::2 (all link-local routers), ... -Vinny"} {"_id":"q-en-draft-ietf-masque-connect-ip-17548e51f49352ffd95214fd1b9d80f09f15bf21f3e9950a019968def7c6e790","text":"Good point, added. 
Note that I've also added an important note about how ROUTE_ADVERTISEMENT and ROUTE_WITHDRAWAL coexist, please take a look NAME and NAME .\nAs discussed in , turning a sequence of ROUTE_ADVERTISEMENT and ROUTE_WITHDRAWAL capsules into a functional routing table isn't as simple as initially expected. This is because two separate capsules can describe prefixes that are distinct but overlap. We should simplify the design or better explain the caveats.\nI think the capsules description needs to be explicit about whether multiple capsules are allowed. I think there are different considerations depending on what type of capsules are used. The authors have already discussed and concluded that multiple ROUTE_ADVERTISEMENT capsules will be needed to allow declaring reachability of multiple prefixes. I don't know if multiple ADDRESS_ASSIGN capsules are allowed. Allowing multiple allows multiple address ranges to be declared. However, that raises the question of whether one needs to withdraw source address prefixes..\nYes, multiple capsules of each type are allowed. It does seem useful to clarify this, and what it means to receive multiple addresses and multiple routes (which are additive). NAME can you help write this?\nAgreed, multiple capsules of all currently defined capsule types are allowed. For example, multiple ADDRESS_ASSIGN are needed to support a dual-stack tunnel. I'll write a PR."} {"_id":"q-en-draft-ietf-masque-connect-ip-434ad78b41a20b70f204e64524297b4b114898bdb9c99d00941cbbb54b767dcd","text":"This PR is one potential way to It simplifies route exchanges by going with NAME 's suggestion of sending the entire routing table each time something changes. It could be somewhat inefficient in some cases, but it becomes easier to reason about than what's currently in the draft when overlapping prefixes are involved.\nThis is interesting, but I'd prefer to wait to do this until after we post a -00 version.
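The "send the entire routing table each time" suggestion can be sketched in a few lines; the class and method names here are mine, not from the draft.

```python
# Minimal sketch of full-table replacement semantics: each
# ROUTE_ADVERTISEMENT carries the complete table and replaces the previous
# routing state wholesale, so stale overlapping prefixes from earlier
# capsules cannot linger.
class RouteState:
    def __init__(self):
        self.routes = []

    def on_route_advertisement(self, routes):
        # Replace rather than merge: the capsule is the whole table.
        self.routes = list(routes)

state = RouteState()
state.on_route_advertisement(["2001:db8::/32", "198.51.100.0/24"])
state.on_route_advertisement(["2001:db8::/48"])  # a new, complete table
print(state.routes)  # → ['2001:db8::/48']
```

The trade-off matches the discussion: resending everything is less efficient when only one route changes, but the receiver never has to reconcile overlapping prefixes across separate advertise/withdraw capsules.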
NAME is that OK with you?\nI'd prefer we figure this out before -00, because this would be a pretty significant non-compatible change and I'd like to implement -00\nNAME you nearly convinced me in the call that we rather want to announce ranges separately but I forgot why. Can you outline your reasoning again?\nNAME my argument was that we shouldn't have two different ways to set routes, one that gives a table and one that gives a single route. If we go to only have a table, then it's fine.\nI know saw the discussion in and support this approach. Also if we agree on this approach, I also think we should merge it before we submit -00\nNAME I think we're ready to merge this PR\nAs discussed in , turning a sequence of ROUTEADVERTISEMENT and ROUTEWITHDRAWAL capsules into a functional routing table isn't as simple as initially expected. This is because two separate capsules can describe prefixes that are distinct but overlap. We should simplify the design or better explain the caveats.\nI think this looks good enough to include in the -00."} {"_id":"q-en-draft-ietf-masque-connect-ip-f49bcbb154904b7c8d769576163869b007de480a771581a16292bed42adf5772","text":"Instead of just an address / hostname, could we ask to open a tunnel to a prefix?\nTommy, can you please clarify the reasoning why this functionality is useful. Specifying the target as a subnet would to my understanding compared to a general IP tunneling address just restrict that the client will never try to send IP packets to any IP addresses outside of this subnet. And thus potentially enable serving multiple masque clients using restricted targets to share the source IP address. Is that the intention? 
Or is the intention here to have the masque server potentially redirect to another instance based on the target in the request?\nThis came up in a discussion—if we let you ask to connect to a specific hostname or address instead of all addresses (as we do today), we wondered if there was any need to connect to a subnet of addresses. This would be for connecting to an IPv6 /64, which is unique enough for a device, and doesn't require needing to assign an IP address exclusively to the client that is issuing the request.\nI'm supportive of adding this if we can come up with an encoding that's not a pain to implement. For example slash with prefix length seems fine to me - the slash would be percent-encoded just like the IPv6 address colons are.\nIn relation to my just raised issue on source address validation if we add this functionality, I would think that the proxy would need to do destination or source address validation depending on direction of traffic to enforce the signalled aspect."} {"_id":"q-en-draft-ietf-masque-connect-ip-15aac0ad9ca8c6fe1921c82bc6a9c5e1b6c2aa94a7646f82f4929261d0b38cc4","text":"I think there are two options here, 1) close/deny the connect request, and 2) use streams instead. I think we should consider both options, especially given streams are just there to use.\nIsn't this a dup of ? That's fixed with . I guess the text could be generalized to talk about more than just IPv6 MTU issues.\nNo, this aspect on what you do when the required MTU is not present is not resolved by that PR.
I would note that closing gives the endpoints an experience that is likely easier to handle in implementation and does avoid the issue with reordering and the varying packet treatment that moving to streams would result in.\nI would suggest adding a sentence saying that if an endpoint realizes that the IPv6 tunnel MTU goes lower than 1280, then it MUST close the stream.\nIETF 114: Note that if something happens and your MTU gets smaller, you may need to destroy that link. Potentially make this option a SHOULD do PMTUD, otherwise you could end up in a situation where you're not a compliant IPv6 tunnel. Proposal:\nEven if PR was an improvement to the description of the IPv6 minimal link MTU issue, the text still leaves an open issue in that it doesn't make clear which of the endpoints is responsible for HTTP intermediaries, or if it is truly both, which has another implication. So a Connect-IP endpoint will be aware of the HTTP datagram supported MTU and whether its HTTP/3 connection can support sending at least 1280 bytes of IPv6 packets without fragmentation. However, if there are two or more HTTP intermediaries in between the client and the server then there are HTTP intermediaries that might have an HTTP/3 connection which has less than 1280 bytes MTU for HTTP datagram payloads in-between the client and server. Thus arises the issue of how to detect this. PR states: if HTTP intermediaries are in use, CONNECT-IP endpoints can enter in an out of band agreement with the intermediaries to ensure that endpoints and intermediaries pad QUIC INITIAL packets. From my perspective the big question is how a client or a server knows that there are intermediaries, or the most important aspect, that there are no unknown HTTP intermediaries. To my understanding both the client and the server can potentially bring in HTTP intermediaries unknown to the other party. Thus resulting in an HTTP intermediary to intermediary connection where neither endpoint knows if the requirement is fulfilled.
Thus my interpretation is that the only way to be sure that it works is to actually execute on the third bullet in the PR: CONNECT-IP endpoints can also send ICMPv6 echo requests with 1232 bytes of data to ascertain the link MTU and tear down the tunnel if they do not receive a response. Unless endpoints have an out of band means of guaranteeing that one of the two previous techniques is sufficient, they MUST use this method. I would also note that this only verifies from the acting endpoint towards its peer, not the reverse path. For that to be verified the peer needs to send ICMP probes in the other direction. Thus, my conclusion is that Connect-IP still doesn't ensure that the minimal MTU is fulfilled.\nIsn't the quick-fix to require that the ICMP reply also be 1232 bytes in size?\nNormally I would say no, as you want to ensure that the reply makes it back, so it shouldn't be padded. Also if the response is anything other than an echo reply you also should not pad. However, I think the implication is the same either way. Both masque client and server need to send ICMP or modify their ICMP processing. Given that, I would think having each endpoint doing independent processing of its forward path might actually be simpler and enables using a standard off the shelf ICMP implementation without having to modify it.\nOur implementation has userspace ICMP request/response handling for echo request/response, it wasn't too hard to write. I don't propose modifying anything, but the structures were ~50 lines of code to set up. I agree we shouldn't be modifying responses we did not generate.\nI think the existing text already is generic to \"endpoint\", not client or server: So this implies that both sides need to send the PING if they haven't guaranteed the MTU. This seems like this is already handled?\nIf both endpoints are required to send ping it would be resolved. However, that is not what is written in the draft currently.
What is written is that there are various techniques, and if an endpoint chooses something other than verifying its forward path using ping, it becomes unclear who is actually responsible for any HTTP intermediaries. Which results in an unclear requirement and very likely failure of the MTU verification, resulting in an MTU black hole.\nWell, it does say that endpoints MUST use this method unless they already have other coordination. I could see some clarification text to say that the guarantee for the other methods must include knowing there isn't an intermediary — which is actually pretty easy to guarantee in many cases.\nThat is what I intended when writing , but perhaps I wasn't clear. Can one of you propose a PR that would clarify this?\nAs per , the IPv6 minimum link MTU is 1280 bytes. However, states that it is possible to have a functional QUIC connection on a path that only allows 1200 bytes of UDP payloads. Such a path would only allow HTTP Datagrams with payloads up to 1180 bytes. If we only send IPv6 packets over QUIC DATAGRAM frames, such a setup would result in violating the IPv6 minimum link MTU. We should describe this problem and how to react to it. Possible solutions include failing the CONNECT-IP request, using the DATAGRAM capsule, or different solutions altogether.\nThis is only a problem for the end-2-end QUIC handshake packets, right? I was assuming that you either have a QUIC tunnel that has a larger MTU in order to use datagrams or you have to use stream-based transmission for the e2e handshake and then need a way to switch to datagram later.
In any case we need to say more in the draft!\nFrom the most recent session, I think we may have consensus to address this by noting that CONNECT-IP over H3 requires a minimum link MTU of X bytes on every HTTP hop for IPv6 compliance, leaving support for smaller links to an extension.\nI'm not sure the consensus was clear, but I think it's upon us to make a proposal and see if it flies =)\nThat sounds like the best path forward.\nI propose a simple solution that should be enough for most use-cases: if any endpoint detects that the path MTU is too low, it MUST close the tunnel if the tunnel had IPv6 addresses or routes on it. Endpoints are not required to measure the path MTU. Endpoints that wish to use IPv6 SHOULD ensure that their path MTU is sufficient. For example, that can be guaranteed by padding QUIC initial packets to 1320 bytes.\nThat seems reasonable to me as a proposal!\nSo how does the endpoint determine that the Path MTU is too low? Consider a scenario that looks like this: C - I1 - I2 - S, where it is the I1-I2 path that can't support 1280 byte datagrams. The I nodes are HTTP intermediaries. The client will not know that the datagram MTU size is too low on these nodes; to my understanding the datagram payloads will simply be dropped by the I nodes in either direction.\nIn that scenario, if the endpoint wants to check it can send an ICMPv6 echo request with 1232 bytes of data - if it receives an ICMPv6 echo reply that means that the path supports the minimum IPv6 MTU.
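As a sanity check on the 1232-byte figure used above, the arithmetic works out to exactly the IPv6 minimum link MTU:

```python
# Why 1232 bytes of echo data probes the IPv6 minimum link MTU: together
# with the fixed IPv6 and ICMPv6 echo headers, the probe is 1280 bytes.
IPV6_HEADER = 40        # fixed IPv6 header size
ICMPV6_ECHO_HEADER = 8  # type, code, checksum, identifier, sequence number
ECHO_DATA = 1232

probe_size = IPV6_HEADER + ICMPV6_ECHO_HEADER + ECHO_DATA
print(probe_size)  # → 1280, the IPv6 minimum link MTU (RFC 8200)
```

So receiving an echo reply to such a request demonstrates that the forward path carried a full 1280-byte IPv6 packet end to end.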
And maybe using ICMPv6 is the easiest method with the lowest implementation burden. The issue I see with this aspect is that many applications when establishing a Connect-IP path might not want to wait for the ICMPv6 response before sending other traffic so we have a situation where there is a small, but non-zero risk that these packets goes into a black hole, and then the ICMP response comes back and the MASQUE Connect-IP stream will be closed due to the response to indicate the failure. I think this is all manageable and not different from failures that can occur for other reasons.\nI don't see why this should be mandatory. When I plug in an Ethernet cable into my computer, I don't know whether the IP stack on the next router spuriously drops packets above 1000 bytes long, and my OS has no requirement to measure the path MTU.\nBut for the said router dropping IPv6 packets below 1280 would be a protocol violation. The issue here is the interaction with the HTTP intermediaries and what is under the control of Connect-IP function. So Connect-IP basically provides a IPv6 capable link. Per IPv6 RFC that requires us to support 1280 at least. What have been the rough consensus so far is that we are not defining an fragmentation and reassembly mechanism, or rather how to ensure that one uses QUIC streams when the transport is HTTP/3 for below 1280 IPv6 packets and instead reject the service. Thus we are in the position of having to answer how does one know when to reject the service. The alternatives I see are: 1) Force the Connect-IP endpoints to ICMP ping each other to verify that the path supports 1280 IPv6 packets 2) Have the HTTP intermediaries be Connect-IP aware to the level that they can reject the HTTP request if the path does not support 1280 packets. Where 2) have the downside that we still likely have some black holes in cases where the HTTP intermediaries are not connect-IP aware and ignores the requirements. 
Where 1) has the downside that the endpoint's request can't be rejected during the request/response sequence and instead has to be closed with an error message indicating the path's failure to support the packet.\nI'm opposed to requiring ICMP pings because the majority of use-cases will not have this problem. How about we add text like this: CONNECT-IP endpoints that enable IPv6 connectivity in the tunnel MUST ensure that the tunnel MTU is at least 1280 bytes to meet the IPv6 minimum required MTU. This can be accomplished multiple different ways, including: endpoints padding QUIC initial packets when it is known that there are no intermediaries present; endpoints sending ICMPv6 echo requests with 1232 bytes of data; or an out of band agreement between one of the endpoints and any involved intermediaries to have them pad QUIC initial packets\nSo I understand your desire to not add complexity anywhere for a likely seldom arising case. However, I do fear that not addressing this with a clear assignment of whose responsibility it is to avoid the issue will result in no one doing anything about it. Because an endpoint that uses HTTP/3 can verify its first hop. However, the complexities arise when you have HTTP intermediaries in the mix.\nI get that, but since the complexity is added by intermediaries we just need to make it clear that the intermediary needs to not be silly\nOne part is to detect if the link is capable of 1232 bytes; however, if not, we also need to provide more guidance on what to do. I think there are two options: either close the connection or use QUIC streams instead of datagrams."} {"_id":"q-en-draft-ietf-masque-connect-ip-c2a08b997896e4baae5cc9483d0193174157ab9a79bee53bb6ed0065eb2ef8e5","text":"If one is using IPPROTO to select a particular protocol to be tunneled, one may still receive ICMP in the context of this request. To my understanding there are two reasons for this.
a) ICMP generated by the Connect-IP peer endpoint b) ICMP sent back by a node beyond the Connect-IP that is directly related to forward traffic sent by this endpoint and which the tunnel endpoint maps back to the Connect-IP request. I think we could add a clarification of this in , or we could do it later.\nTotally agree. I added fb1578626de4d3073ac31f5cb89c06efae2ebc51 to address this as part of .\nBased on the discussion in issue we recommend using ICMP for error signaling. ICMP works for MTU issues or destination unreachable; however, there is no appropriate message type if the IP header of the packet provided by the client contains an IP that wasn't assigned to the client. So we need a different kind of error handling for this case?\nI think ICMP works for this, I'd send See . Would that work?\nI guess that could do it. Should we be more specific in the draft? Also, we probably would need to be more explicit in the draft that you should request a scope for your CONNECT request that allows you to receive ICMP messages.\nCan you write a PR for this please?\nIETF 114: Recommend including ICMP if providing routes to the peer, and clients should always be prepared to receive ICMP. Also add note that ICMP may have other source/destination addresses so as to not surprise endpoints.\nAlso look at RFC 4301 for some potentially helpful text.\nWhere is the ICMPv4 detail?"} {"_id":"q-en-draft-ietf-masque-connect-udp-0d1c7ef449fdf80fd58ccea70e8c27e1efdb593c4003811f9f933081cf08a367","text":"People shout at me for doing this, so I'm paying it forward.
Perhaps you could move the SHOULD requirement to a statement in Section 3, so that it covers all forms of the CONNECT-UDP request."} {"_id":"q-en-draft-ietf-masque-connect-udp-2e8029568d4b12a9562df6bd593d218dd2a8136ebb86712c7005328b8e318e03","text":"In section 5 the term \"proxied UDP packets\" is used in this sentence for the only time and might not be clear: How about just saying \"UDP packets in HTTP datagrams\"?\n+1 good improvement"} {"_id":"q-en-draft-ietf-masque-connect-udp-60799f0db06f2b323d1cdcf58f1d21bc54ef39b218fdcddab7b9c5e21f2c12d3","text":"Thx!\nShould we maybe add a recommendation to the sec considerations that proxies may limit the amount of data they send as long as they don't see any reply? I guess initially you would limit to 10 packets and then following RFC8085 limit to one datagram per 3 seconds...? Not sure if that really is a good idea but I wanted to bring it up for discussion...\nI thought about that when I first wrote the existing text, and I came to the conclusion that a limit here wouldn't protect against real attacks. If someone is attacking a closed UDP port, they're not accomplishing anything. If someone attacks an open UDP port (for example 443 because they want to attack someone's QUIC endpoint), then the target will reply with some sort of UDP packet (for example a QUIC RETRY) so this protection will disable itself. Does that make sense?\nThe challenge I see with any advice is that the proxy is not an application using UDP, it's just passing opaque data. So it will really struggle to rationalise about the various UDP use cases that clients have for the proxy. Perhaps rather than focus on advice, focus on how CONNECT UDP proxy considerations differ from CONNECT TCP proxies.
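The RFC 8085-style limit floated above (an initial burst toward a silent target, then roughly one datagram per 3 seconds) was not adopted by the draft, but it is small enough to sketch. This is a toy illustration in Python; the class and parameter names are invented for the example, and the `on_reply` path shows exactly why such a limiter disables itself against open ports:

```python
import time

class NoReplyLimiter:
    """Toy sketch of the idea floated above (not adopted by the draft):
    allow an initial burst toward a target that has never replied, then
    throttle to roughly one datagram per 3 seconds (RFC 8085 guidance)."""

    def __init__(self, burst=10, interval=3.0, clock=time.monotonic):
        self.burst = burst
        self.interval = interval
        self.clock = clock
        self.sent_without_reply = 0
        self.last_send = None
        self.got_reply = False

    def on_reply(self):
        # Any packet back from the target disables the limiter entirely,
        # which is why it cannot stop attacks on ports that answer.
        self.got_reply = True

    def may_send(self):
        if self.got_reply:
            return True
        if self.sent_without_reply < self.burst:
            return True
        now = self.clock()
        return self.last_send is None or now - self.last_send >= self.interval

    def record_send(self):
        self.sent_without_reply += 1
        self.last_send = self.clock()
```

As the follow-up comment notes, this only throttles traffic toward targets that never answer (i.e. closed ports), which is the scenario where the attack accomplishes nothing anyway.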
That may well cover, at the same time, considerations like how long a proxy should hold open requests (tunnels) that don't exchange any packets in one or both directions, i.e., a DoS attack on the proxy itself via a slow-loris kind of mechanism\nThat's discussed in the Proxy Handling section, it references BEHAVE for how long sockets should be kept open\nNAME would it be worth adding your reasoning (which makes sense to me) to the draft somewhere?\nfair point, I'd read that and forgotten about it, maybe worth a back reference from the security considerations. For instance, HTTP/2 has some special callouts to CONNECT in its security considerations URL edit: I'm not claiming the same considerations apply here, just that server resource usage as a byproduct of proxying might be different than other request loads\nThat makes sense, I've written up to address these comments.\nThank you"} {"_id":"q-en-draft-ietf-masque-connect-udp-432c9b179cc86fd521e7595f72623d62c1402a270120ccb51f1f8fb9608a58f8","text":"I also want to point out an issue due to this stance. Because we are not explicit and don't discuss the expectations and realities of different IP versions, we have interesting failure cases due to this text from Section 3.1: “However, if the target_host is a DNS name, the UDP proxy MUST perform DNS resolution before replying to the HTTP request.” Nowhere in the following paragraphs in this section does it discuss the slight issue that a DNS resolution can return only an AAAA record while the MASQUE server may only support IPv4. What is the error response? Should this be a proxy-status response? I would note there is currently no response for “IP version not supported”.\nThese concerns apply equally to TCP proxying via HTTP. We should strive to provide parity to how CONNECT treats these matters. HTTP semantics don't say anything\n+1 to Lucas' meta-comment about striving for parity with CONNECT There is no such thing as an \"unsupported IP version\" DNS failure.
Let's take an example where the client wants to CONNECT-UDP via the UDP proxy to the target . This particular host is configured such that querying AAAA will get a valid address, and querying A will yield a \"no answer no error\" DNS result. Let's assume that in this example the UDP proxy is IPv4-only. In that scenario, the UDP proxy will only query A (because it doesn't know what AAAA is) and it will get \"no answer no error\" so it will send\nNAME in light of the previous comments, can you clarify what you think is missing from the document please?\nFirst of all, I think that just answering \"let's do like CONNECT\" is a poor answer. There are benefits to alignment, but also one should ask if one is preventing useful functionality. In this case I think there is a relevant difference between what could be answered depending on how one implements stuff. Your suggested road says that for an IPv4-only proxy with a DNS name that has an AAAA record but no A record the answer is: a) There is no answer For a node that would query both, the answer could actually be: b) There is a host, but I can't reach it as I don't support the right IP version, try another proxy. Sure, limiting ourselves to a) is easy, but is it the most useful for the protocol?\nDisagree it's a poor answer. An HTTP proxy that implements CONNECT for UDP is likely going to implement CONNECT for TCP. How to treat name resolution for both of these has to be consistent. The Proxy-Status header field is the appropriate way to signal such failures. Any text more complicated than what we have now should be pursued in a separate document IMO. Not added at this late stage. It would be straightforward to form up a document like \"DNS considerations for HTTP proxying\" that is additive to the specifications we have.\nI agree: Proxy-Status is the right place for this discussion, and CONNECT-UDP already recommends Proxy-Status. Also, as a technical matter, this is not how dual-stack connectivity works.
As noted above, an IPv4-only UDP client will never issue a AAAA query for a destination hostname.\nI agree with what Lucas and Ben said above. Let's move this discussion to an extension draft.\nI agree that this discussion probably deserves to be in an extension draft. As it stands today, a DNS failure as described in this thread can just as easily result in a destination-unreachable / connection-closed error for the CONNECT-UDP stream, and an extension to provide detailed information as to why seems like the natural path forward.\nProxy status is absolutely the right thing to do here. The proxy-status is what we are already doing in production for CONNECT and CONNECT-UDP. The current text seems entirely appropriate for this: The only thing I could see saying further is to mention the error parameter, like this: I do not think we need any extension draft for this. This is very straightforward, and doesn't need to say much.\nThanks Tommy, I think that's a useful clarification. I wrote up PR based on your proposal to make this clearer\nOk, I am fine with this resolution.\nThank you for confirming Magnus!"} {"_id":"q-en-draft-ietf-masque-connect-udp-17a9895709ff7bd541e1ec5ab3726ce5dbe472f49114295c4c726bac217cc80e","text":"This was prompted by .\nLGTM. Nice and tight. URL Erik Kline has entered the following ballot position for draft-ietf-masque-connect-udp-14: No Objection When responding, please keep the subject line intact and reply to all email addresses included in the To and CC lines. (Feel free to cut this introductory paragraph, however.) Please refer to URL for more information about how to handle DISCUSS and COMMENT positions. The document, along with other ballot positions, can be found here: URL COMMENT: CC NAME Should the 2xx discussion here include additional reference to the extended commentary in masque-h3-datagram#section-3.2? I really wish there could be more said about tunnel security.
I was tempted to try to formulate some DISCUSS on the matter, but I cannot seem to see where any more extensive text was ever written for the CONNECT method. Nevertheless, there are plenty of concerns that could be mentioned, like what should a proxy do if the target_host is one of its own IP addresses, or a link-local address, and so on. I'm not entirely convinced that it suffices to claim that because the tunnel is not an IP-in-IP tunnel certain considerations don't apply. Anything and everything can be tunneled over UDP, especially if RFC 8086 GRE-in-UDP is what the payloads carry (or ESP, of course). It seems like most of the points raised in RFC 6169 apply; perhaps worth a mention. But as I said, I think all these concerns apply equally to CONNECT (in principle, anyway), and the text here is commensurate with the text for its TCP cousin (that I could find, anyway)."} {"_id":"q-en-draft-ietf-masque-connect-udp-b8e27560cb26f1ac638096636cfdbe665920c03bde3f02c7e37819776d534494","text":"Should the upgrade token be \"connect-udp\" or \"masque-udp\" ? (issue based on by NAME\nI vote for or something along those lines. \"MASQUE\" is good as a working group name, but I don't think it's helpful in the protocol to help people understand what the variant of CONNECT does."} {"_id":"q-en-draft-ietf-masque-connect-udp-f1fa4c1702f6775f5e2278ea58b723eca37f0af9f7a5c2430f7c02922b34b506","text":"The default template's purpose is to enable compatibility with origin-only proxy configuration schemes. This is a worthwhile goal, but the default template is not a suitable solution: It violates RFC 8615's definition of .well-known/ as the location for metadata about an origin. A MASQUE service is not metadata. It is not representable as a .well-known \"URI suffix\" as required by the .well-known registry (unless we register all (2^128 + 2^32) * 2^16 distinct paths). It conflicts with better discovery mechanisms, like the one in the MASQUE development process.
It is not compatible with the use of HTTPS records. (In the targeted legacy context, the client has no other DNS server that can be used when a proxy is in use.) This is somewhat ironic, as HTTPS records allow bootstrapping QUIC connections, and proxying for QUIC is the primary motivation for CONNECT-UDP. We should defer this problem to a subsequent draft, and provide a cohesive proxy setup flow, presumably with a standardized configuration object that could be served from .well-known/ and extended to cover CONNECT-IP, DoH and other relevant capabilities.\nFundamentally I disagree with your assessment that this solution isn't good and should be removed. This serves a real purpose. Let's go into the issues individually: Those points are not an argument to remove the default template, they are an argument to revert PR and switch back to a non-well-known prefix. Let's set those aside for now. I don't see a conflict here: the default template doesn't preclude using other templates that have been discovered through other means. Can you clarify what the conflict is? Can you clarify what the incompatibility is here? I don't understand what the relationship between the default template and HTTPS records is. Clients are welcome to query their own HTTPS records before using CONNECT or CONNECT-UDP. The way that the client picks its template has no impact on its DNS settings.\n... This is called \"squatting on the HTTP namespace\", and it would violate the in RFC 7320. It's also messy to implement, as it would require mixing MASQUE-specific \"routing\" rules with the user's application-specific routing rules. It doesn't prevent better discovery mechanisms, but it doesn't encourage them either. From the server side, there would be no way to \"opt out\" of the default path, due to the potential for clients that have implemented this specification but not a later, better discovery mechanism. 
Servers that would prefer a different path would still have to operate the default path for backwards-compatibility. The relationship is that the default template cannot express the existence of a DNS server affiliated with the MASQUE server. No, they are not, as this would violate the privacy guarantees implicit in the legacy proxy settings UI that this feature is trying to preserve.\nYou're conflating two types of proxies here: traditional enterprise proxies that speak CONNECT today and can be updated to speak CONNECT-UDP privacy proxies designed to protect a user's privacy from the network Folks won't ever be deploying both of these use cases on the same proxy. Your concerns are valid but they only apply to (2). (2) would never use the default template. The default template only applies to (1).\nClient software generally can't tell why a proxy is being used. It just knows that an HTTP CONNECT proxy has been configured with a particular protocol, domain, and port. Its behavior has to be correct for any existing use case. I don't see why privacy proxies wouldn't use the default template. Privacy proxies that are configured via the existing Secure Web Proxy configuration mechanisms (e.g. browser extensions, system settings) would have no other way to use MASQUE. Also, I think this distinction is false. Enterprises are free to use HTTP CONNECT proxies to conceal certain information from the network (e.g. using an employees-only browser extension), and enterprises are often opinionated about which DNS server ought to be used (not controllable via a browser extension).\nFor completeness, here's how I think this ought to look: URL\nYour discovery draft sounds great - however the default template doesn't get in the way of that. 
I don't think it's reasonable to remove a useful feature because we might have a more complete solution later when the useful feature doesn't get in the way of the future complete solution.\nShould the default URI template have a /.well-known prefix? For example, ? (issue based on by NAME\nWhile you are at it..., how about something simpler? would put the host and port in query parameters AND it would have smaller, shorter names.\nHaving implemented this, placing the variables in the path is easier to parse on the proxy than placing them in query parameters\nFrom the discussion in the room at IETF 113, it appeared folks were OK with . I'm going to pick that to close this issue, and we can open another issue during WGLC if we really want to bikeshed this further\nIt sounds like you're writing your own URL parser? Any reasonable URL library will handle this for you.\nI don't think placing these things in path components by default is reasonable."} {"_id":"q-en-draft-ietf-masque-h3-datagram-84e22876b3374b92f7ee591218a325a0841cde9d80aadd58c24a9ef7534ea512","text":"In Section 2: When running over HTTP/1, requests are strictly serialized in the connection, therefore the first layer of demultiplexing is not needed. Is there any actual possibility to run more than a single request with HTTP datagram support over a transport connection? When one runs an HTTP request that enables datagrams I don't see how it would be possible to close that request in any other way than closing the underlying transport connection. Or is there a way that I am missing that would enable one to close this request, and then issue another pipelined HTTP request that also uses HTTP datagrams? Is there a need to clarify this, or make it clear that only a single HTTP request with datagrams can be sent?\nIn section 2, we've yet to mention anything about how the rest of the spec operates. So the comment in isolation is correct when talking about the general concept of multiplexing.
I see your point but I'm not sure saying anything more here will help. Section 4.1 provides the details that implementers need to do the right thing.\nOkay, but let's include Section 4.1 also into this. I assume you are referring to this sentence: In HTTP/1.x, the data stream consists of all bytes on the connection that follow the blank line that concludes either the request header section, or the 2xx (Successful) response header section. But, per URL these body-length rules would be the ones that could govern a closure of an HTTP request enabling datagrams. Would it be realistic to make it clear that if one enables HTTP Datagrams, transport close is the only way, or is that just creating additional rules making implementation even more complex? Otherwise, do we need additional analysis of whether any of these rules could actually result in closing an HTTP request?\nThere is no HTTP content on upgrade requests, we should link to the definition of Upgrade URL and make things slightly more clear in Section 4.1.\nOk, I guess it becomes a question of which method for enabling datagrams one uses, if this is an upgrade or a new method. And I would note that the CONNECT method appears to have some interesting exceptions compared to other methods. Which I think makes this a requirement on the method/upgrade that uses HTTP datagrams to actually specify this aspect.\nI've written up to clarify that pipelining is not supported."} {"_id":"q-en-draft-ietf-masque-h3-datagram-73d18062665e17e45f31526d4ffd24943b771444a9715b10793b423611982fde","text":"When reviewing the latest editors' copy, the document structure leaps out as a bit odd by jumping from section 3 to section 5. That indicates to me these concepts are linked and also independent of the capsule protocol. So here I propose a minor restructure by moving the H3_DATAGRAM setting stuff to be a subsection of the HTTP/3 datagram format.
This makes all the top-layer sections loosely coupled, which is nicer in my opinion."} {"_id":"q-en-draft-ietf-masque-h3-datagram-9377fb75b6b8dc453f586db481c5672b53409545de26f60f703db48a8e5f557d","text":"The draft currently states: However in PR , NAME points out that this \"state\" is not well defined. I agree, we should fix it. Thinking about this more, I don't think we need the concept of related state. We can instead just say that received datagrams MUST be silently dropped if the stream receive side is closed.\nI agree that removing the concept of state is better. I don't want to be overly nitty and the proposed sentence is better, but there is also nothing like \"the stream receive side\", as there can be multiple streams (open and closed), that's why I proposed Or if we don't want state, then it would be:\nThat's not quite right: there cannot be multiple streams, as datagrams are strongly associated with a client-initiated bidirectional stream. I'd phrase the paragraph as: open. If a datagram is received after the receive side of the corresponding stream is closed, the received datagrams MUST be silently dropped.\nThat works. Thanks!\nThis answer is necessary because QUIC doesn't guarantee ordering, but is this necessarily true indefinitely? But can I regard a datagram that arrives an hour after the stream closes as an error?\nIf you keep track of QUIC packet numbers, you can see that the peer closed the stream before it sent the datagram, which means they violated a MUST. You're welcome to error if you detect that, but I don't think we should mandate this level of tracking.\nI'm not sure that that is a good strategy given how some deployments manage their packet numbers. I'm talking about maybe running a really long timer.\nThat introduces a way for an attacker to cause connection closure by delaying a specific packet. What problem are you trying to solve?\nGood point. I thought that we did something similar in a few places in QUIC.
As you say, it's possible to enforce, but tricky. The concern I'd have is that this effectively forbids enforcement, but maybe that's OK."} {"_id":"q-en-draft-ietf-masque-h3-datagram-4fcf79eab3d9a5234fc02d442a9bde8861e6afbf6aa8855739e6adf1b46c711b","text":"End of section 3.5 () says: First of all, this paragraph seems to be in the wrong section because it's more generally about the capsule protocol and not the datagram capsule. However, it's also confusing. I believe if you negotiate http datagrams with the H3DATAGRAM SETTINGS parameter, you always need to also support capsules even if QUIC datagrams are supported, as the other end (e.g. an intermediary) could send you a capsule anyway. Or is this text meant to say that new extensions that use http datagrams but have a new settings parameter for negotiation may not need to support the capsule protocol? I think it would be good to clarify both: what this sentence/paragraph means and what is required if you negotiate http datagrams using the H3DATAGRAM SETTINGS parameter.\nI want to extend on this issue a bit more. A lot of the implementation requirements on different nodes are implied rather than explicit. \"Secondly, I will note that the above text combined with what is written in Section 3.4: Intermediaries MAY use this header field to allow processing of HTTP Datagrams for unknown HTTP Upgrade Tokens; note that this is only possible for HTTP Upgrade or Extended CONNECT.\" If an intermediary is doing this unaware re-encoding and the HTTP extension is an extended CONNECT version, it can result in an interoperability failure when a Datagram capsule arrives at the endpoint that only supports HTTP/3 Datagrams over QUIC Datagrams.\nIf we build off the notion that the H3DATAGRAM setting communicates a willingness to receive HTTP/3 datagrams (in QUIC datagrams), they still require some request to bootstrap the association, otherwise those datagrams would be ignored.
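The re-encoding debated in this thread only requires the shared wire syntax, not any knowledge of the payload. A minimal sketch (Python is assumed; the capsule framing matches RFC 9297 and the varints RFC 9000) of the type/length/value capsule envelope an intermediary would produce when moving an HTTP Datagram payload into a DATAGRAM capsule:

```python
def encode_varint(v):
    """QUIC variable-length integer (RFC 9000, Section 16)."""
    if v < 0x40:
        return v.to_bytes(1, "big")
    if v < 0x4000:
        return (v | 0x4000).to_bytes(2, "big")
    if v < 0x40000000:
        return (v | 0x80000000).to_bytes(4, "big")
    if v < 0x4000000000000000:
        return (v | 0xC000000000000000).to_bytes(8, "big")
    raise ValueError("value too large for a varint")

def decode_varint(buf, offset=0):
    """Return (value, next_offset); the length is in the top 2 bits."""
    length = 1 << (buf[offset] >> 6)
    value = int.from_bytes(buf[offset:offset + length], "big")
    value &= (1 << (8 * length - 2)) - 1
    return value, offset + length

DATAGRAM_CAPSULE_TYPE = 0x00  # DATAGRAM capsule type registered by RFC 9297

def encode_capsule(capsule_type, value):
    """Capsule = Type (varint), Length (varint), Value (opaque bytes)."""
    return encode_varint(capsule_type) + encode_varint(len(value)) + value

def decode_capsule(buf):
    capsule_type, off = decode_varint(buf)
    length, off = decode_varint(buf, off)
    return capsule_type, buf[off:off + length]
```

Note that the HTTP Datagram payload rides in the capsule as an opaque Value: nothing in this envelope requires the re-encoder to parse a context ID, which is the point being made about intermediaries above.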
In theory, I could define a new method called \"ALWAYSUNRELIABLE\" that can be used to bootstrap a request stream and have it be defined to never use a capsule protocol. If a server sends me data on that stream, I could ignore it or throw an error. I don't need a new setting to caveat this behaviour.\nNAME but then you can't have \"blind\" re-encoding of HTTP/3 Datagrams to Datagram capsules, I think it is that simple. So we need to choose if HTTP intermediaries can re-encode when they don't know the Upgrade Token, or if they can't and must be aware and thus enable cases where some have different behaviors. See\nThere is no such thing as \"blind re-encoding\". Intermediaries can only perform re-encoding when they know that the capsule protocol is in use. Intermediaries can learn that by knowing the HTTP Upgrade Token, or by watching for the Capsule-Protocol header. But I think we can make that clearer by spelling out in the \"DATAGRAM Capsule\" section that intermediaries MUST NOT re-encode to capsules unless they know for sure that the capsule protocol is in use.\nPR did not address the editorial point that this paragraph is in the wrong section.\nI don't agree with that editorial point. The paragraph is about the interaction between HTTP datagrams and the capsule protocol, so having it in the DATAGRAM capsule section makes sense to me.\nIt's about the interaction between HTTP datagrams (section 2) and the capsule protocol (section 3). There is nothing related to datagram capsules (section 3.5) in this paragraph. I guess you could just add a new subsection 3.6 if you prefer to have it at the end.\nLGTM"} {"_id":"q-en-draft-ietf-masque-h3-datagram-40eae41d74d2051e475083309a2a95791fcd23ce97c8dde18646a43a1d74a532","text":"Written in response to\nFrom in reference to the text in this PR: Issues ietf-wg-masque/draft-ietf-masque-connect-udp-listen (+0/-2/0) 2 issues closed: Improve document title URL [editorial] Should \"connect-udp-listen\" be registered as \"Connect-UDP-Listen\"?
URL ietf-wg-masque/draft-ietf-masque-quic-proxy (+0/-0/2) 1 issues received 2 new comments: Virtual connection IDs in multi-hop settings (2 by DavidSchinazi, mirjak) URL Pull requests ietf-wg-masque/draft-ietf-masque-connect-udp-listen (+2/-2/1) 2 pull requests submitted: Update Listener to bind (by asingh-g) URL update connect-udp-listen to use H1 style (by asingh-g) URL 1 pull requests received 1 new comments: Compress away IP and Port using Context IDs (1 by asingh-g) URL 2 pull requests merged: Update Listener to bind ( URL update connect-udp-listen to use H1 style, URL ietf-wg-masque/draft-ietf-masque-quic-proxy (+0/-0/3) 1 pull requests received 3 new comments: Proxy-Chosen Virtual Client Connection ID (3 by DavidSchinazi, ehaydenr) URL Repositories tracked by this digest: URL URL URL URL URL URL URL\nRoman Danyliw has entered the following ballot position for draft-ietf-masque-h3-datagram-10: No Objection COMMENT: Thank you to David Mandelberg for the SECDIR review. ** Section 4. Consider adding the following clarification: NEW HTTP Datagrams and the Capsule Protocol are building blocks for HTTP extensions to define new behaviors or features and do not constitute an independent protocol. Any extension adopting them will need to perform an appropriate security analysis which considers the impact of these features in the context of a complete protocol."} {"_id":"q-en-draft-ietf-masque-h3-datagram-0f240ea7e78ea93b681426458668d2e355f32d72e9eb6c22302dbdfdfc75e156","text":"This PR aims to take the design discussions we had at the last interim, and propose a potential solution.
This PR should in theory contain enough information to allow folks to implement this new design. That will allow us to perform interop testing and confirm whether the design is sound.\nI'm confused by this pull request (probably because I was out of the loop for a while). My understanding, at least when I first read the proposal a month ago, was that we would separate multiplexing streams and multiplexing contexts into different layers, so that methods that need context would get it (CONNECT-UDP), and features that don't (WebTransport/extended CONNECT) could use stream-associated datagrams directly.\nNAME you're referring to which this PR isn't trying to address. That'll happen in a subsequent PR.\nIn , capsules, contexts and context IDs are explicitly defined as end-to-end and there are requirements for intermediaries to not modify them, and to not even parse them. The motivation for this was to ensure that we retain the ability to deploy extensions end-to-end without having to modify intermediaries. NAME mentioned that these requirements might be too strict, as he would like to have his intermediary be able to parse the context ID, for logging purposes.\nPart of the comment from the PR is also about consistency of interface for generic HTTP libraries that can be used to build either endpoints or intermediaries. If the context ID is always present, it's overly prescriptive to say that an intermediary can't parse it, and such a requirement couldn't be enforced anyways. If context IDs are negotiated on/off, whether this is possible depends on the negotiation mechanism.\nI think it's fine for a component that has bytes passing through it to look at those bytes. The important thing to do is set expectations on what that means for the protocol behaviour we are defining. We don't expect intermediaries to do anything with a context ID; they should not enforce the protocol semantic rules (odd, even, reuse etc).
H2 and H3 somewhat solve this problem by decoupling frame parsing from frame payload interpretation. E.g. if the form of a request or response is invalid, an intermediary shouldn't pass it on. DATAGRAM is different because it isn't defined as an actual HTTP/3 frame with frame headers and frame payload. Something that might work is to formalise that the quarter stream ID is part of the envelope that intermediaries have to be able to handle the syntax of, but the context ID is part of the content and separate.\nUnless you encrypt capsules end-to-end, I don't think any text in the spec can in practice prevent intermediaries from inspecting and modifying capsules. You can send something like GREASE capsules, though.\nIn , we introduce the CLOSEDATAGRAMCONTEXT capsule which allows endpoints to close a context. That message currently only contains the context ID to close. NAME suggests that we may want to add more information there. Should we differentiate between \"context close\" and \"context registration rejected\"? Should we add an error code? In particular, do folks have use cases for these features?\nMy first concern is with infinite loops. If the context was \"garbage collected\" by the recipient, the sender can simply re-create it if they still want it. However, if it was closed due to incompatibility (i.e. rejected) or hitting a resource limit (e.g. max # of contexts), reopening it in response will produce a tight infinite loop. This could maybe be avoided by some kind of heuristic, but an explicit indication of what's going on seems better.\nThis issue assumes that we decide to go with the two-layer design described in . Given that design, some applications might not need the multiplexing provided by context IDs.
We have multiple options here: 1. Make context ID mandatory: applications that don't need it waste a byte per datagram. 2. Negotiate the presence of context IDs using an HTTP header: context IDs would still be mandatory to implement on servers because the client might send that header. 3. Have the method determine whether context IDs are present or not: this would prevent extensibility on those methods. 4. Create a REGISTERNOCONTEXT message: this assumes we use the Message design from ; we add a register message that means that this stream does not use context IDs, which allows avoiding the byte overhead without sacrificing extensibility.\nMaking the context ID mandatory is simple, although slightly wasteful. For the variations where it is optional, I prefer the one that uses a header. It could work with a variant of where we do Header + Message, and the Header determines whether or not you have contexts, and whether or not they can be dynamic.\nWhat are the benefits of message over header to make it optional? To me it seems like header is sufficient, because the context IDs are end-to-end, and should not affect intermediaries, making the header a natural place to put it. I think having the context ID be the default behavior and the header be the opt-out is the right decision if we were to allow removing it.\nThis is a bit of a bike-shed, but why would it default to having context-IDs instead of not having them?\nI think the second option \"Negotiate the presence of context IDs using an HTTP header\" is the best personally. In terms of: \"Context IDs would still be mandatory to implement on servers because the client might send that header\", I assume server in this context is the final application server, not a proxy?
Even so, I'd argue it's applications which decide if they need this functionality, so if a server only implemented one application and it didn't use Context-IDs, then I don't see why it would need to implement it.\nIan, in your comment, are you talking about an intermediary when you say \"proxy\"? If so, then an intermediary need not implement anything based on the payload of an H3 DATAGRAM, it only has to pass the data along. That may be worth a clarification.\nHaving the context ID optional has a nice future-proof property - other uses of datagrams in HTTP/3 can decide if they want the second layer or not. I wouldn't even state it as an application needing multiplexing or not. Other applications might find it very useful to include a second (third, fourth, fifth) layer of frame header. I think it's fine for us to state as much in the spec, while also defining an explicit second layer for multiplexing.\nYes, by proxy I meant intermediary. And I agree about what intermediaries need to do, I just wanted to ensure I understood what server meant when David used it above. Yup, whenever all this gets written up, we can make it clear that intermediaries can ignore much of this.\nThe method defines the semantics of datagrams for a particular request, thus I think the third option makes most sense. I don't see a point in using explicit negotiation, since in the two cases I can think of off the top of my head, the answer is always clear from the method (CONNECT-IP seems to always want context IDs, WebTransport does not ever want context IDs). If some method starts out without context, but wants to add context later, it can add its own negotiation.\nOne set of use cases for which the method approach (3) is best are those where there are multiple context IDs but they're predefined by the method and don't need negotiation. My strawman is WebRTC.
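The "waste a byte per datagram" in the mandatory-context-ID option comes straight from the varint encoding: any context ID below 64 costs exactly one byte. A toy illustration (Python is assumed; the function name is made up, and the framing follows the two-layer draft design rather than a final wire format):

```python
def encode_varint(v):
    """QUIC varint (RFC 9000): values below 64 need only one byte."""
    if v < 0x40:
        return v.to_bytes(1, "big")
    if v < 0x4000:
        return (v | 0x4000).to_bytes(2, "big")
    raise ValueError("larger encodings omitted from this sketch")

def http_datagram_payload(data, context_id=None):
    """HTTP Datagram payload with the multiplexing layer made optional,
    as in the header-negotiation option; the name is illustrative."""
    if context_id is None:
        return data                      # no context ID layer at all
    return encode_varint(context_id) + data

without = http_datagram_payload(b"\x00" * 1200)
with_ctx = http_datagram_payload(b"\x00" * 1200, context_id=0)
# The per-datagram cost of a mandatory context ID below 64: one byte.
overhead = len(with_ctx) - len(without)
```

Context IDs of 64 and above grow to two varint bytes, so a method that predefines many contexts (the WebRTC-style strawman) pays slightly more only once it exceeds 63 of them.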
Without going into the specifics of the encoding, there are two broad classes of solutions:\n- "Header" design: once-at-setup registration. This happens during the HTTP request/response. The simplest solution is to use an HTTP header such as (encoding TBD). Example use-case: CONNECT-UDP without extensions.\n- "Message" design: mid-stream registration. This can happen at any point during the lifetime of the request stream, and would involve some sort of "register" message (encoding TBD, see separate issue). Example use-case: CONNECT-IP on-the-fly compression (this can't be done at tunnel setup time because the bits to be compressed aren't always known at that time).\nIt's possible to implement once-at-setup registration using the Message design, but it isn't possible to implement mid-stream registration using the Header design. Therefore I think we should go with the Message design. It would also be possible to implement both, but I don't think that provides much value.\nI'd like to hear some more from the WebTransport use case on the issue. My understanding is that the Requests are used to create WebTransport sessions and that creation of datagram flows (contexts) could be a lot more ad-hoc / server generated after the request phase, when compared to the CONNECT-UDP case. Cc NAME\nPersonally, I'd vote for either "message" or "message + header" (a la priorities), but not just "header". Mid-stream annotation of context/flow/etc is quite useful.\nI prefer "Message" only, since the dynamic use case seems like a fairly compelling use of sub-flows (contexts?). I'm not sure there are many cases when one allocates all sub-flows up front AND needs to use a header to do that in a flexible way. 
If an application needs multiple sub-flows, but they're fixed in number/format (i.e. data and control), I don't think there's a need for a header or a "Message", since the application can consider that part of its internal format.\nDuring the 2021-04 MASQUE Interim, we discussed having the ability to accept or reject registration of context IDs. An important motivation for this is the fact that context ID registrations take up memory (for example it could be a compression context which contains the data that is elided) and therefore we need to prevent endpoints from arbitrarily making their peer allocate memory. While we could flow control context ID registration, a much simpler solution would be to have a way for endpoints to close/shutdown a registration.\nIIUC there's an opposing view that the constraints can be negotiated a priori, either statically (such as defining an extension which states that there can only be N contexts of a given type) or dynamically using something like an HTTP header. Failure to meet these constraints could lead to an HTTP/3 stream or connection error rather than a flow rejection.\nNAME I've filed to discuss whether to register/negotiate at request start or mid-stream.\nI think that if there is the ability to register mid-stream (), we need the ability to close/de-register/reject mid-stream as well. If we don't have mid-stream changes, then there's no need to negotiate. However, someone trying to register something needs to be able to know whether or not the thing they registered will be accepted.\nI agree that notification of the registration status is particularly important if we allow mid-stream registration.\nIn draft-ietf-masque-h3-datagram-00, flow IDs are bidirectional. During the 2021-04 MASQUE Interim, we discussed the possibility of making them unidirectional. 
Here are the differences:\n- Bidirectional: a single shared namespace for both endpoints (even for client, odd for server); it's easier to refer to the peer's IDs, which makes negotiation easier.\n- Unidirectional: one namespace per direction; every bidirectional use needs to associate both directions somehow.\nI'm personally leaning towards keeping them bidirectional. Unlike QUIC streams, these do not have a state machine or flow control credits, so there is no cost to only using a single direction of a bidirectional flow for unidirectional use-cases.\nThere's another option where flows are unidirectional but they share a namespace (client even/server odd). I'm undecided. And I also don't know how important it is to choose. The 2-layer design means that DATAGRAMs always have a bidirectional tie via the stream ID. On balance, I think bidirectional would be good enough today but I would be receptive to extension use cases that make a strong point of unidirectional.\nBidirectional is my personal preference, but either way could work ultimately. It seems to me that any extension that wants unidirectional can just only use this context/flow/etc ID for one direction if it wants.\nI think bidirectional flows are likely overcomplicated and unnecessary. For example, if either peer can "close" a flow, this requires a much more complex state machine. In my experience, half-closed states are hard to get right. Closing unidirectional flows seems much easier to reason about. Many, perhaps most, of our use cases for flows are really unidirectional. For example, DSCP marking likely only makes sense from client to server, while timestamps for congestion control tuning likely only make sense from server to client. QUIC connection IDs are different in each direction, so QUIC header compression may be easiest to understand with unidirectional flows.\nBidirectional doesn't imply half-close states. I'm thinking of a bidirectional flow that can only be closed atomically as a whole. 
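The even/odd namespace split described above lets both endpoints allocate IDs without coordination or collisions. A toy allocator (the names here are hypothetical, not from the draft) makes the invariant concrete:

```python
class FlowIdAllocator:
    """Sketch of the shared even/odd flow ID namespace discussed above:
    clients allocate even IDs, servers allocate odd IDs, so the two
    endpoints can never collide without any negotiation."""

    def __init__(self, is_client: bool):
        # Clients start at 0 (even), servers at 1 (odd).
        self.next_id = 0 if is_client else 1

    def allocate(self) -> int:
        flow_id = self.next_id
        self.next_id += 2  # stay within this endpoint's half of the namespace
        return flow_id

client = FlowIdAllocator(is_client=True)
server = FlowIdAllocator(is_client=False)
assert [client.allocate() for _ in range(3)] == [0, 2, 4]
assert [server.allocate() for _ in range(3)] == [1, 3, 5]
```

The design choice being debated is only whether an ID allocated this way names one direction or both; the allocation rule itself works either way.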
I don't think that's true. The default extension-less CONNECT-UDP is bidirectional. So is ECN, and so would be a timestamp extension.\nI would prefer bidirectional without support for half-close. I feel that having unidirectional flow/context IDs adds overhead we don't need; peers now need to track those associations and it's unclear what the benefits of having them separated out are.\nThis issue is conflating consent (unilateral/bilateral) with scope (unidirectional/bidirectional). IIUC there are two questions here: 1) Can I send a flow-id forever once declared, or can the endpoint tell me to stop without killing the whole stream (consent)? 2) Can the receiver of a flow-id assignment use that same flow-id when sending datagrams, or could this mean something completely different? I think the answer to (1) is "yes, it can tell me to stop" (i.e. it is negotiated while allowing speculative sending). For (2), I lean toward bidirectional, but not strongly. There are plenty of codepoints, so it seems wasteful for each endpoint to define the same semantic explicitly.\nNAME the intent of this issue wasn't to conflate those two things. This issue is about scope. For consent, see issue .\nConceptually, I like unidirectional better. I tell the peer that when I send with context-ID=X, it has a given semantic. For bidirectional, the sender is saying both that and that if the peer sends on that ID, the original sender will interpret it with the same semantic. That might not even make sense. Is it possible to associate datagrams with a unidirectional stream (push?). If so, does allowing for bidirectional context imply that even though the receiver cannot send frames on the stream, it can send datagrams? Every negotiation runs the risk of loss/reordering resulting in a drop or buffering event at the receiver. 
The appeal of bidirectional is that it removes half of this risk in the cases where the bidirectionality is used.\nTo second NAME I would also prefer bidirectional without support for half-close. Given you can always not send something in one direction, and there's none of the overhead of streams (i.e. flow control, etc.), I can't see any benefits of unidirectional over bidirectional in this case. Associating h3-datagrams with a server push seems like unnecessary complexity to me, but you're right that it should be clarified.\nDuring the 2021-04 MASQUE Interim, we discussed a design proposal from NAME to replace the one-layer flow ID design with a two-layer design. This design replaces the flow ID with two numbers: a stream ID and a context ID. The contents of the QUIC DATAGRAM frame would now look like: While the flow ID in the draft was per-hop, the proposal has better separation:\n- the stream ID is per-hop; it maps to an HTTP request\n- the context ID is end-to-end; it maps to context information inside that request\nIntermediaries now only look at stream IDs and can be blissfully ignorant of context IDs. In the room at the 2021-04 MASQUE Interim, multiple participants spoke in support of this design and no one raised objections. This issue exists to ensure that we have consensus on this change - please speak up if you disagree.\nI am fine with the 2-layer design for CONNECT-UDP, but I am not sure if the second identifier should be in the H3 DATAGRAM or should be specific to CONNECT-UDP. As I indicated to the list, webtrans and httpbis might weigh in on this with broader consideration of other use cases.\nHi NAME the topic of the optionality of context IDs is discussed in issue .\nI think this is a good design, because it minimizes the core functionality of the draft, but avoids every application that needs multiple dynamically created sub-flows from re-inventing a different mechanism. 
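To make the per-hop/end-to-end split concrete, here is a rough sketch of the proposed two-varint layout, assuming QUIC varint encoding; the encoding details are my illustration, not the draft's normative format. It also shows why an intermediary can rewrite the per-hop stream ID while staying ignorant of the context ID:

```python
def varint_encode(n: int) -> bytes:
    """QUIC variable-length integer (RFC 9000, Section 16)."""
    if n < 2**6:
        return n.to_bytes(1, "big")
    if n < 2**14:
        return (n | (0b01 << 14)).to_bytes(2, "big")
    if n < 2**30:
        return (n | (0b10 << 30)).to_bytes(4, "big")
    return (n | (0b11 << 62)).to_bytes(8, "big")

def varint_decode(buf: bytes) -> tuple[int, int]:
    """Return (value, number of bytes consumed)."""
    length = 1 << (buf[0] >> 6)
    value = int.from_bytes(buf[:length], "big") & ((1 << (8 * length - 2)) - 1)
    return value, length

def encode_datagram(stream_id: int, context_id: int, payload: bytes) -> bytes:
    # Per-hop stream ID first, end-to-end context ID second, then payload.
    return varint_encode(stream_id) + varint_encode(context_id) + payload

def reroute(datagram: bytes, new_stream_id: int) -> bytes:
    # An intermediary rewrites only the per-hop stream ID and forwards the
    # rest (context ID + payload) untouched - it never parses the context ID.
    _, consumed = varint_decode(datagram)
    return varint_encode(new_stream_id) + datagram[consumed:]

d = encode_datagram(stream_id=0, context_id=1, payload=b"hello")
assert d == b"\x00\x01hello"
assert reroute(d, new_stream_id=8) == b"\x08\x01hello"
```

The `reroute` helper is the whole argument for the two-layer design in miniature: the first varint is the only field an intermediary touches.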
Nit: I would not call this a 'Frame', since it's not an HTTP(TLV) frame.\nI think this is a good change. Thanks! There are some things that I think we should still discuss, but we shouldn't hold up this change on those details."} {"_id":"q-en-draft-ietf-masque-h3-datagram-caa4d1ec37e10b968165adb90a21294cb24e107ce3e6931ff5c7b082860ae808","text":"NAME points out that in order to correctly forward DATAGRAMs between transports where the QUIC DATAGRAM frame isn't always available, intermediaries need the ability to parse some capsules.\nCAPSULE was easier to understand when it was fully opaque. When should I choose to define a semi-opaque CAPSULE vs defining a new frame? Aren't both constructs effectively hop-by-hop?\nNAME you're the one who asked for this :-) The benefit of a semi-opaque capsule is that it allows defining new capsules without having to update every single intermediary in the chain, because intermediaries forward all unknown capsules."} {"_id":"q-en-draft-ietf-masque-h3-datagram-c5d4242c4ee65517c79c7abd6daad09c280c95a42a229bbb2f36fe9028e6c171","text":"Had offline conversation with Lucas, conclusion was that this is good enough to merge to allow interop - we'll have plenty of time to figure out details in subsequent issues and PRs."} {"_id":"q-en-draft-ietf-mops-streaming-opcons-3f831b38617c92f4702dca03b19f1ba572e09dd0e5c7d2e9a1e97d70a2af3472","text":"LGTM\nFrom NAME The very first sentence is mostly unparseable due to its length (7 lines). Please consider splitting it.\nI added this, from NAME review: This section and the next one should be a list with 2 bullets rather than 2 sections to make the text more readable."} {"_id":"q-en-draft-ietf-mops-streaming-opcons-6f6fa08840d6bed8b6d9ef7446c10a3ef40eb24ad906af69a2ddffa2d7b024a3","text":"From NAME Please consider using the same wording for the title as for 6.2 and 6.1 (nit) Who is \"we\" in \" We do have experience\" ? 
While I share some concerns about deploying a new transport in user space as opposed to more stable and controlled kernel space, I wonder whether this paragraph is relevant in this document. Does it bring any value ? If no, then please remove it.\nWe'll look at the date for CUBIC being enabled by default in the Linux kernel as \"experience\". We'll delete the paragraph as suggested.\nwas fixed elsewhere.\nLGTM. I'm almost sad to lose the ref to \"Congestion Defense in Depth\", but I guess it makes good sense, that's not a consensus position."} {"_id":"q-en-draft-ietf-mops-streaming-opcons-47941fc5f599ded465ee51781d15d391b501f493163c75db224bc85dbb77feb6","text":"Fixed a few typos, otherwise ready to merge.\nMerging on Ali's review.\nFrom NAME (URL) (In Section 3.6. Unpredictable Usage Profiles / Section 3.7. Extremely) Unpredictable Usage Profiles I am not fully convinced by the distinction between \"unpredictable\" and \"extremely unpredictable\". To me, these two sections could be merged and maybe also be shortened. More specifically, Section 3.7 lists a lot of statistics from the past. What is IMHO a bit missing in Section 3.7 are actual operational considerations. For instance, are there any lessons learnt for the future? Note that Section 3.6 has a reasonable conclusion at the end. Reviewer: Michael Scharf Review result: Ready with Issues This document has been reviewed as part of the transport area review team's ongoing effort to review key IETF documents. These comments were written primarily for the transport area directors, but are copied to the document's authors and WG to allow them to address any issues raised and also to the IETF discussion list for information. When done at the time of IETF Last Call, the authors should consider this review as part of the last-call comments they receive. Please always CC EMAIL if you reply to or forward this review. 
This informational document surveys network and transport protocol issues that affect quality of experience for streaming applications, in particular video. The overview covers many topics across different technologies. Readers may only be familiar with a subset of these topics and could therefore learn quite a bit from this kind of tutorial. Nonetheless, it remains somewhat unclear what the actual objective of some of the text is. For some topics, quite a bit of technical background is provided, but the discussion is not really comprehensive. In many sections, the document neither derives technical challenges, nor system/protocol requirements, nor practical usage guidance. It is just a kind of tutorial. Quite a bit of text also presents ongoing IETF work, and an RFC with this scope may thus get outdated soon. Section 6.2 deals with topics owned by TCPM. In similar past documents, I have asked for a TCPM presentation prior to an IETF last call in order to ensure that owners of running code are in the loop. I believe this strategy has worked well in the past. Having said this, as TSV-ART reviewer I don't strongly disagree with a publication. All these issues may just be OK for an informational RFC. Some more specific comments: Section 3.2.1. Recognizing Changes from an Expected Baseline This section apparently assumes relatively static path properties, e.g., fixed network connectivity. It would be more useful to analyze the impact of Wifi and 4G/5G networks in one place. Some of this follows later in sections 3.2.1 and Yet, the split between these three sections is unclear. Having a discussion about today's Internet path characteristics and the dynamics at one place could be more useful. The text could also better distinguish between what matters to endpoints and considerations relevant only for middleboxes (such as passive monitors). 
Section 3.3 Path Requirements The statement \"to find the bandwidth requirements for a router on the delivery path\" may assume that the bottleneck on the path will be routers. Yet, on a path the bottlenecks could also be a link layer device (e.g., an Ethernet Switch, a Wifi access point, etc.). RFC 6077 (specifically Section 3.1.3) also explains that issue. A better wording may be \"to find the bandwidth requirements for a delivery path\". Section 3.6. Unpredictable Usage Profiles / Section 3.7. Extremely Unpredictable Usage Profiles I am not fully convinced by the distinction between \"unpredictable\" and \"extremely unpredictable\". To me, these two sections could be merged and maybe also be shortened. More specifically, Section 3.7 lists a lot of statistics from the past. What is IMHO a bit missing in Section 3.7 are actual operational considerations. For instance, are there any lessons learnt for the future? Note that Section 3.6 has a reasonable conclusion at the end. Section 4. Latency Considerations and Section 4.1. Ultra Low-Latency I am surprised by the definition \"ultra low-latency (less than 1 second)\", as well as some of the other numbers. For other real-time communication use cases, \"ultra low-latency\" would probably imply a latency requirement of the order of one millisecond. For instance, isochronous traffic for motion control in industrial networks may require a latency of 1 ms, and such \"ultra-low latency\" requirements are discussed elsewhere, e.g., in the DetNet WG. The terminology in this section should be better explained to readers dealing with networked systems with much harder latency requirements. And a reference should be added for these definitions in the context of video streaming. Section 5.5.2 Head-of-Line Blocking If Head-of-Line Blocking is indeed a relevant operational problem, it would be useful to add a corresponding reference (e.g., with measurements). Section 5.5.3. 
Wide and Rapid Variation in Path Capacity The statement "As many end devices have moved to wireless connectivity for the final hop (Wi-Fi, 5G, or LTE), new problems in bandwidth detection have emerged from radio interference and signal strength effects." could be moved to earlier parts of the document. Quite a few new computers apparently only have Wifi connectivity and no Ethernet port, i.e., Wifi may just be their default. Also, wireless may not only be used on the last hop, for instance in meshed setups. This may make the problem even harder. Section 6. Evolution of Transport Protocols and Transport Protocol Behaviors I really wonder whether UDP vs. TCP vs. QUIC is actually the relevant distinction. What may actually matter are the transport services provided by the protocol stack (e.g., RFC 8095). I fully agree to a related comment in the INTDIR review. Section 6.1. UDP and Its Behavior UDP is also used for encrypted tunnels (OpenVPN, Wireguard, etc.). Does encrypted tunneling really have no operational impact on streaming apps other than circuit breakers? (I am thinking about 4K streaming over OpenVPN and the like.) Section 6.2. TCP and Its Behavior This text should IMHO be presented and be discussed in TCPM. Personally, I am not convinced that this document is a good place for discussing the long history of TCP congestion control. If the objective of the document is to provide a tutorial, IMHO it would be more useful to briefly explain the state-of-the-art in year 2022. Some further specific comments on TCP congestion control:\n- It is surprising that RFC 5681 is not referenced.\n- 793bis has normative statements regarding congestion control, and most stacks are compatible with what is written in 793bis.\n- Several operating systems use CUBIC by default and support standard Reno.\n- Another generally good reference for TCP standards is RFC 7414.\n- Does ECN, DCTCP, etc. really not matter at all, for instance, if a data center hosts video playout servers?\nSection 6.3. 
QUIC and Its Behavior As already noted, the discussion about head-of-line-blocking would really benefit from backing by a reference. Section 7. Streaming Encrypted Media As far as I know, streaming users sometimes use encrypted tunnels such as OpenVPN or WireGuard (or IPsec) to access video content. I may miss something, but it is unclear to me how that fits into the categories presented in Section"} {"_id":"q-en-draft-ietf-mops-streaming-opcons-ae7aac4852094283576ec1bfa141dd2791b5e4667b8bcafb9644a1bc7008ad2f","text":"NAME - several things here. we probably don't need all of the descriptive text - the idea is that we'd say \"assume roughly this protocol stack, and here's what's not included\" if distinguishing between \"transport protocol\" and \"media transport protocol\" makes sense, that probably needs to be reflected elsewhere in the document (like, a lot of elsewheres) If this approach works, it could either be in the transport protocol section, or in the one you're thinking of for\nI don't think there will be a lot of places. We need to pay attention to only the places where we are explicitly referring to the media transport protocol. The rest can stay the same IMO.\nNAME - I couldn't reply to this inline, but I looked through the document (searching for \"transport protocol\", and you're right. I found one place in Section 4.1, and changed the description of RTP as a transport protocol to RTP as a media transport protocol. NAME - Section 4.1 is before where my explanation is now - if this PR makes sense to you, the explanation probably DOES belong much earlier in the doc, perhaps in .\nI'm merging this, based on Ali's review.\nWe want a paragraph to say something about scope along the lines of: This document is for endpoints delivering streaming media traffic. This can include intermediaries like CDNs as well as content owners and end users, plus their device and player providers. 
We think this can help address a few issues from last call/IESG review, including: (more to be linked as we notice them)\nConsider adding a subsection in the introduction about things that are out of scope, and (for extra credit), why they are out of scope.\nfrom NAME 's review (URL): Section 6.1. UDP and Its Behavior: UDP is also used for encrypted tunnels (OpenVPN, Wireguard, etc.). Does encrypted tunneling really have no operational impact on streaming apps other than circuit breakers? (I am thinking about 4K streaming over OpenVPN and the like.)\nNot sure if this is really about scope clarification? I think this section is intending to talk about usage as part of a transport protocol, but tunnels are sort of more like a path overlay issue or something (might affect MTU, extra hops with extra things that can go wrong, etc.).\nNAME - I'm looking more closely at this issue, and I think text that I put in another (individual) draft may be useful here as well. I'll add it in this section (making the obvious substitutions), and I'll see what NAME and NAME think (after the holiday weekend), and then check back with you. If it works for people, the result will likely be clearer to many readers. 
"} {"_id":"q-en-draft-ietf-mops-streaming-opcons-ac9793005c29888dce20611da12126d71fc04b94beb51365bbe36a0852caf438","text":"…transport protocols Also tighten up the description of the assumed protocol stack in (what is now) Section 6.1. Note that the UDP and TCP sections have been flipped in order, to improve section flow and clarity.\nNAME - If you have a moment before our call on Friday to take a look at this PR, I'd appreciate it. I think it's important to get that right, in order to address both the INTDIR and TSVART reviews.\nFrom NAME As a bit of transport commentary, I don't find the setup of Section 6 compelling. While UDP is a transport in a technical sense, for the purpose of media, it is almost always a layer upon which the protocol doing the congestion control work runs. To that end, QUIC is just another, more standardized case of this, just like SCTP over UDP. Rather than talking about "UDP's behavior" vs "TCP's behavior", I suggest talking about how applications over UDP for media behave vs how applications over TCP for media behave.\nNAME - I need to look at this more closely. I understand your point about UDP, but (because UDP is not a real transport layer, while TCP and QUIC are) I suspect (without looking closely yet) I'm going to be making sure that I explain this more clearly, without trying to explain the behavior(s) of applications running over TCP and QUIC. As an aside, I'm trying to make this clearer in drafts I'm targeting towards Media Over QUIC/MOQ, so I actually DO believe you ... Please await results.\nA +1 from NAME 's review (URL): Section 6. 
Evolution of Transport Protocols and Transport Protocol Behaviors: I really wonder whether UDP vs. TCP vs. QUIC is actually the relevant distinction. What may actually matter are the transport services provided by the protocol stack (e.g., RFC 8095). I fully agree to a related comment in the INTDIR review.\nAfter looking at this issue, I agree with both NAME and NAME that the emphasis should be on the interaction between media transport protocols and the transport services provided by underlying transport protocols (see new section describing this). I'm moving the formerly-UDP section below the formerly-TCP section, because it's easier to describe what UDP is NOT doing after describing what TCP IS doing. I'm also moving one paragraph into the formerly-TCP section from Section 6, because it is specific to ABR strategies that assume a reliable transport. More news to come, when I create a PR for this issue ...\nWe think is now mergeable. 
For some topics, quite a bit of technical background is provided, but the discussion is not really comprehensive. In many sections, the document neither derives technical challenges, nor system/protocol requirements, nor practical usage guidance. It is just a kind of tutorial. Quite a bit of text also presents ongoing IETF work, and an RFC with this scope may thus get outdated soon. Section 6.2 deals with topics owned by TCPM. In similar past documents, I have asked for a TCPM presentation prior to an IETF last call in order to ensure that owners of running code are in the loop. I believe this strategy has worked well in the past. Having said this, as TSV-ART reviewer I don't strongly disagree with a publication. All these issues may just be OK for an informational RFC. Some more specific comments: Section 3.2.1. Recognizing Changes from an Expected Baseline This section apparently assumes relatively static path properties, e.g., fixed network connectivity. It would be more useful to analyze the impact of Wifi and 4G/5G networks in one place. Some of this follows later in sections 3.2.1 and Yet, the split between these three sections is unclear. Having a discussion about today's Internet path characteristics and the dynamics at one place could be more useful. The text could also better distinguish between what matters to endpoints and considerations relevant only for middleboxes (such as passive monitors). Section 3.3 Path Requirements The statement \"to find the bandwidth requirements for a router on the delivery path\" may assume that the bottleneck on the path will be routers. Yet, on a path the bottlenecks could also be a link layer device (e.g., an Ethernet Switch, a Wifi access point, etc.). RFC 6077 (specifically Section 3.1.3) also explains that issue. A better wording may be \"to find the bandwidth requirements for a delivery path\". Section 3.6. Unpredictable Usage Profiles / Section 3.7. 
Extremely Unpredictable Usage Profiles I am not fully convinced by the distinction between \"unpredictable\" and \"extremely unpredictable\". To me, these two sections could be merged and maybe also be shortened. More specifically, Section 3.7 lists a lot of statistics from the past. What is IMHO a bit missing in Section 3.7 are actual operational considerations. For instance, are there any lessons learnt for the future? Note that Section 3.6 has a reasonable conclusion at the end. Section 4. Latency Considerations and Section 4.1. Ultra Low-Latency I am surprised by the definition \"ultra low-latency (less than 1 second)\", as well as some of the other numbers. For other real-time communication use cases, \"ultra low-latency\" would probably imply a latency requirement of the order of one millisecond. For instance, isochronous traffic for motion control in industrial networks may require a latency of 1 ms, and such \"ultra-low latency\" requirements are discussed elsewhere, e.g., in the DetNet WG. The terminology in this section should be better explained to readers dealing with networked systems with much harder latency requirements. And a reference should be added for these definitions in the context of video streaming. Section 5.5.2 Head-of-Line Blocking If Head-of-Line Blocking is indeed a relevant operational problem, it would be useful to add a corresponding reference (e.g., with measurements). Section 5.5.3. Wide and Rapid Variation in Path Capacity The statement \"As many end devices have moved to wireless connectivity for the final hop (Wi-Fi, 5G, or LTE), new problems in bandwidth detction have emerged from radio interference and signal strength effects.\" could be moved to earlier parts of the document. Quite a bit of new computers apparently only have Wifi connectivity and no Ethernet port, i.e., Wifi may just be their default. Also, wireless may not only be used on the last hop, for instance in meshed setups. This may make the problem even harder. 
Section 6. Evolution of Transport Protocols and Transport Protocol Behaviors I really wonder whether UDP vs. TCP vs. QUIC is actually the relevant distinction. What may actually matter are the transport services provided by the protocol stack (e.g., RFC 8095). I fully agree to a related comment in the INTDIR review. Section 6.1. UDP and Its Behavior UDP is also used for encrypted tunnels (OpenVPN, Wireguard, etc.). Does encrypted tunneling really have no operational impact on streaming apps other than circuit breakers? (I am thinking about 4K streaming over OpenVPN and the like.) Section 6.2. TCP and Its Behavior This text should IMHO be presented and be discussed in TCPM. Personally, I am not convinced that this document is a good place for discussing the long history of TCP congestion control. If the objective of the document is to provide a tutorial, IMHO it would be more useful to briefly explain the state-of-the-art in year 2022. Some further specific comments on TCP congestion control: It is surprising that RFC 5681 is not referenced. 793bis has normative statements regarding congestion control, and most stacks are compatible to what is written in 793bis. Several operating systems use CUBIC by default and support standard Reno. Another general good reference for TCP standards is RFC 7414. Does ECN, DCTCP, etc. really not matter at all, for instance, if a data center hosts video playout servers? Section 6.3. QUIC and Its Behavior As already noted, the discussion about head-of-line-blocking would really benefit from backing by a reference. Section 7. Streaming Encrypted Media As far as I know, streaming users sometimes use encrypted tunnels such as OpenVPN or WireGuard (or IPsec) to access video content.
I may miss something, but it is unclear to me how that fits into the categories presented in Section"} {"_id":"q-en-draft-ietf-mops-streaming-opcons-252d796f6af8ff404534dba318b92520acd314811f98260d06a21aad660d157b","text":"I actually used the URL in the draft to clone a copy, so I think this is good to go."} {"_id":"q-en-draft-ietf-mops-streaming-opcons-f1c9c5fa2cdfb521b36b9c9d7d337b69a50f8b4eeb54ea7b877882cf66ea9b26","text":"Discussion from editor conference call: NAME will update this PR with NAME comments\nFrom NAME (URL) Section 7. Streaming Encrypted Media As far as I know, streaming users sometimes use encrypted tunnels such as OpenVPN or WireGuard (or IPsec) to access video content. I may miss something, but it is unclear to me how that fits into the categories presented in Section (Seems closely related to URL )\nThe purpose of such tunneling is to circumvent geoblocks and access content from other regions - not to protect the media content. Encrypting the content is done by the content owner for its protection.\nSo, looking at , I'm seeing this text: I think this means that the effect is that the \"transport-level information is opaque to intermediaries\" effect described in the document applies, with the additional consideration that intermediaries can't see the IP addresses used inside the tunnel, either. I think the same applies to any IPsec-encrypted tunnel. Do we need to say that? 
And, if so - can we describe this without using product names and descriptions which may age badly in an RFC?\nisn't as easy to read, but I think this is also the rough equivalent of IPsec, from a media operator perspective - all of the transport information and the IP addresses used inside the tunnel are obscured.\nJust for the record, definitely worth checking as it is related to this discussion: URL"} {"_id":"q-en-draft-ietf-mops-streaming-opcons-d2797a38ced29b1128e95418a408b6619a0cba6da7e8a35dfe9aa8d8178cfef2","text":"looks good.\nFrom NAME\nNAME and NAME - I'm not sure who wrote this text, except I don't think it was me. >And as with other parts of the ecosystem, new technology brings new challenges. For example, with the emergence of ultra-low-latency streaming, responses have to start streaming to the end user while still being transmitted to the cache, and while the cache does not yet know the size of the object.
Some of the popular caching systems were designed around cache footprint and had deeply ingrained assumptions about knowing the size of objects that are being stored, so the change in design requirements in long-established systems caused some errors in production. Incidents occurred where a transmission error in the connection from the upstream source to the cache could result in the cache holding a truncated segment and transmitting it to the end user's device. In this case, players rendering the stream often had the video freeze until the player was reset. In some cases the truncated object was even cached that way and served later to other players as well, causing continued stalls at the same spot in the video for all players playing the segment delivered from that cache node. NAME was curious if there's a reasonable reference that we could provide for this.\nYeah, that was me. There was not a public reference for this, alas, just a word of mouth anecdote.\nNAME - I agree with you about your comment on Section 3.6. It happens that I got comments that Section 3.6 and 3.7 didn't need to be nearly as detailed as they were, so they were heavily rewritten. The text where \"throttled\" was used is now in Section 3.6.1, and the emphasis is now on the cause of unexpected traffic profile changes, rather than the ISP response to the unexpected traffic profile changes in this specific incident, so the text that included \"throttled\" has been removed.\nNAME - I agree with your comment on Section 4.4, that some description or reference should be provided for the word \"manifest\". The next occurrence of \"manifest\" is in Section 5.2, which does provide a description. I don't think that's quite right - I'd say So I'm changing this definition, and moving it to the first occurrence of \"manifest\" in Section 4.4. Thanks for helping us fix problems you didn't even tell us about. 
:upsidedownface:\nlgtm"} {"_id":"q-en-draft-ietf-mptcp-rfc6824bis-0fa51282ede46a8dd9b38fbebaa26c2dc20645d4539b6cbb064e6643c87d3aed","text":"Adding Olivier's proposed text about the Fast-Close (cfr., URL) and integrated a shorter version of URL Hello, Sorry for being late with this, but the beginning of the fall semester is always too busy for me. During the summer, there was a discussion on the FASTCLOSE option that was triggered by Francois. The current text of RFC6824bis uses the FASTCLOSE option inside ACK packets, but deployment experience has shown that this results in additional complexity since the ACK must be retransmitted to ensure its reliable delivery. Here is a proposed update for section 3.5 that allows a sender to either send FASTCLOSE inside ACK or RST while a receiver must be ready to process both. Changes are marked with Fast Close Regular TCP has the means of sending a reset (RST) signal to abruptly close a connection. With MPTCP, a regular RST only has the scope of the subflow and will only close the concerned subflow but not affect the remaining subflows. MPTCP's connection will stay alive at the data level, in order to permit break-before-make handover between subflows. It is therefore necessary to provide an MPTCP-level \"reset\" to allow the abrupt closure of the whole MPTCP connection, and this is the MP_FASTCLOSE option. MP_FASTCLOSE is used to indicate to the peer that the connection will be abruptly closed and no data will be accepted anymore. The reasons for triggering an MP_FASTCLOSE are implementation specific. Regular TCP does not allow sending a RST while the connection is in a synchronized state [RFC0793]. Nevertheless, implementations allow the sending of a RST in this state, if, for example, the operating system is running out of resources. In these cases, MPTCP should send the MP_FASTCLOSE. This option is illustrated in Figure 14.
                        1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +---------------+---------------+-------+-----------------------+
   |     Kind      |    Length     |Subtype|      (reserved)       |
   +---------------+---------------+-------+-----------------------+
   |                      Option Receiver's Key                    |
   |                            (64 bits)                          |
   |                                                               |
   +---------------------------------------------------------------+

             Figure 14: Fast Close (MP_FASTCLOSE) Option

If Host A wants to force the closure of an MPTCP connection, it has two different options: Option A (ACK): Host A sends an ACK containing the MP_FASTCLOSE option on one subflow, containing the key of Host B as declared in the initial connection handshake. On all the other subflows, Host A sends a regular TCP RST to close these subflows, and tears them down. Host A now enters FASTCLOSE_WAIT state. Option R (RST): Host A sends a RST containing the MP_FASTCLOSE option on all subflows, containing the key of Host B as declared in the initial connection handshake. Host A can tear the subflows and the connection down immediately. If a host receives a packet with a valid MP_FASTCLOSE option, it shall process it as follows: o Upon receipt of an ACK with MP_FASTCLOSE, containing the valid key, Host B answers on the same subflow with a TCP RST and tears down all subflows. Host B can now close the whole MPTCP connection (it transitions directly to CLOSED state). o As soon as Host A has received the TCP RST on the remaining subflow, it can close this subflow and tear down the whole connection (transition from FASTCLOSE_WAIT to CLOSED states). If Host A receives an MP_FASTCLOSE instead of a TCP RST, both hosts attempted fast closure simultaneously. Host A should reply with a TCP RST and tear down the connection. o If Host A does not receive a TCP RST in reply to its MP_FASTCLOSE after one retransmission timeout (RTO) (the RTO of the subflow where the MP_FASTCLOSE has been sent), it SHOULD retransmit the MP_FASTCLOSE. The number of retransmissions SHOULD be limited to avoid this connection from being retained for a long time, but this limit is implementation specific.
A RECOMMENDED number is 3. If no TCP RST is received in response, Host A SHOULD send a TCP RST with the MP_FASTCLOSE option itself when it releases state in order to clear any remaining state at middleboxes. o Upon receipt of a RST with MP_FASTCLOSE, containing the valid key, Host B tears down all subflows. Host B can now close the whole MPTCP connection (it transitions directly to CLOSED state). The burden of the MP_FASTCLOSE is on the host that decides to close the connection (typically the server, but not necessarily). The text above allows it to opt for either ACK or RST without adding complexity to the other side. Another point that might be discussed in RFC6824bis is how a host should react when all the subflows associated with a Multipath TCP connection are in the CLOSED state, but the connection is still alive. At this point, it is impossible to send or receive data over the connection. The host should attempt to reestablish one subflow to keep the connection alive. Olivier"} {"_id":"q-en-draft-ietf-ppm-dap-3dbd11cbcb0f60c4d14798aae39fdcd30d9715e6a9c39b4be659c93deceb900a","text":"All protocol request messages contain a field, which allows aggregators to service multiple tasks from a single set of endpoints. The exception is : since it is an HTTP GET request, there is no body, and thus no place for the client to indicate the desired to the aggregator. We don't want to introduce a request body to , because then it would have to be a POST instead of a GET, and that would make it harder for implementations to cache HPKE config values. So instead we have clients encode the task ID in RFC 4648 Base16 (a.k.a. hexadecimal) and provide it as a query parameter. See issue\nMy goal here is to enable our aggregator implementations in the interop target to use one HPKE config per task.
The alternative to this would be for us to use aggregator endpoints that somehow incorporate the task ID, as I described , which is already permitted by the spec as written.\nThe Base64 alphabet includes +, /, and =, which means you have to URL encode (a.k.a. percent encode) Base64 strings before you can use them in a URL, which is a chore and yields less readable URLs (not that 64 character hex strings is especially user-friendly). An alternative would be to use RFC 4648's base64url encoding, omitting the padding characters as suggested in since task IDs are of a known length.\nGotcha. I think the main thing is to be consistent. I would either: Change this PR to use base64url; or Tack on a commit that makes all encodings of the taskID (except those that appear in the body, of course) the same (i.e., hexadecimal). I think (2.) would be my preference.\nI agree with the consistency argument. I'm leaning towards using base64url everywhere because it's more compact than Base16/hex, and also I think implementations are less likely to be surprised by case requirements in base64url (i.e., it's obvious that base64 is case sensitive, but you have to read RFC 4648 attentively to notice that it mandates uppercase hex strings).\nBase64Url sounds good to me. I took a quick look and it appears that most standard base64 libraries also support the URL encoding.\nI will clean this up and merge it once I have implemented it in Janus and convinced myself it's sound.\nSGTM, but why not base64 encode the taskID for consistency with URL"} {"_id":"q-en-draft-ietf-ppm-dap-ba6fe7cb322f785ae38eeb10f49836860c8d9f6b92997e557a8553f75932bf07","text":"Justify text after bullets consistently Make sure all lines break at 80 characters\nLGTM, modulo conflicts.
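The unpadded base64url encoding settled on in the task-ID thread above (RFC 4648 Section 5, padding omitted because task IDs have a fixed length) can be sketched as follows. The `/hpke_config` path and the 32-byte ID are assumptions for illustration, not the draft's exact endpoint definition:

```python
import base64


def encode_task_id(task_id: bytes) -> str:
    """Unpadded base64url (RFC 4648 Section 5): URL-safe without
    percent-encoding, and no case-sensitivity surprises."""
    return base64.urlsafe_b64encode(task_id).rstrip(b"=").decode("ascii")


def decode_task_id(text: str) -> bytes:
    # Restore the padding stripped by the encoder; since task IDs have a
    # known length, the padding is always recoverable.
    return base64.urlsafe_b64decode(text + "=" * (-len(text) % 4))


task_id = bytes(range(32))  # stand-in for a real 32-byte task ID
query = f"/hpke_config?task_id={encode_task_id(task_id)}"  # hypothetical path
assert decode_task_id(encode_task_id(task_id)) == task_id
```

A 32-byte ID becomes 43 characters this way, versus 64 for hex, and never needs percent-encoding since the base64url alphabet replaces `+` and `/` with `-` and `_`.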
It'd be nice if we had a linter that blocked PRs until their formatting was right, since changes like this taint git history."} {"_id":"q-en-draft-ietf-ppm-dap-78b8d166ec82e028aca31c9ef93e73acf6e585f0e77830eda2d549677924e99c","text":"As of draft-irtf-cfrg-vdaf-05 (not yet published), the random coins used for sharding are an explicit parameter of . Add this parameter and explain how it is generated.\nNAME I believe this is the only API change we need to account for, correct?"} {"_id":"q-en-draft-ietf-ppm-dap-d63ddb0d0f4ffde1f874c218be357219786e0bfcfc3cd307efd08c805c8691fb","text":"All references to \"metadata\" are now to \"report_metadata\". References to \"CollectReq\" are now \"CollectionReq\". References to \"CollectResp\" are now \"Collection\".\nThe current version of the draft renames field \"metadata\" to \"report_metadata\" in the Report structure, but does not do the same renaming in the ReportShare and InputShareAad structures. I think this naming should be consistent, though I don't have a preference between the alternatives.\nThanks for pointing this out! Can you prepare a PR?\nFixed in\nThanks a bunch NAME"} {"_id":"q-en-draft-ietf-ppm-dap-9dac498d4501415d9e8596627a45c3d454556df410d276e542240de9dd1787b1","text":"Update the aggregate request text to reflect recent protocol changes. Relax the requirement that all sub-requests use the same key config. Do URL, which was previously merged into the wrong branch.\nNAME NAME please have a look when you can.\nPer , the first step in the upload protocol is a POST to the leader server which returns information about helpers. In some yet to be specified cases, the response could include a challenge to be incorporated into reports.
However I suspect that in most cases, the response will be static and so an idempotent, cacheable GET would be a better fit, especially since it would save clients a roundtrip to the helper on the hot path of submitting reports, and would make it possible for helpers to implement upload_start by putting some static content in a CDN. HTTP GET requests don't have bodies, so we would have to figure out how to encode all the data in the current into the URI, which could be done as either path fragments in the URI (i.e., ) or as query parameters (i.e., ). We would of course need to make sure that we don't preclude proof systems like the one alleged to exist in section 5.2 of \"BBCp19\".\nAlso, because the current protocol uses a POST for upload_start, the implication is that for every report they submit, clients need to hit and then . If we make idempotent and cacheable, then that request can be skipped in the majority of cases. I suppose the significance of this depends on how we expect clients to use PDA: are they going to be continuously streaming hundreds or thousands of reports into the leader as events of interest occur (in which case I think eliminating requests is interesting) or are they expected to accumulate several reports locally before submitting them all in one or a few requests?
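The round-trip saving argued for above can be sketched with a toy client-side cache for an idempotent GET endpoint; the `fetch` callback, the endpoint path, and the max-age handling are all hypothetical stand-ins for a real HTTP client honoring Cache-Control:

```python
import time


class CachedGet:
    """Toy client-side cache for an idempotent GET endpoint, showing why
    a static upload-start response lets clients skip a network request
    for most report submissions. `fetch(url)` is a hypothetical callback
    performing the real HTTP GET; it returns (body, max_age_seconds)."""

    def __init__(self, fetch):
        self.fetch = fetch
        self.cache = {}  # url -> (body, expiry timestamp)

    def get(self, url):
        entry = self.cache.get(url)
        if entry and entry[1] > time.monotonic():
            return entry[0]  # served locally; no round trip
        body, max_age = self.fetch(url)
        self.cache[url] = (body, time.monotonic() + max_age)
        return body
```

With a static response and a generous max-age, only the first report submission in each cache window pays for the extra request.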
Another possibility might be to use some external source of randomness that the leader trusts. comes to mind, though it is somewhat problematic because clients would effectively use the same randomness for some period of time. The security model of BBCp19 would need to be extended to account for this. (I believe this could be done.) Some other solution I'm missing? _ This data type is used to compute the mean and variance of each integer in a sequence of integers. To do so, the client encodes each integer of its input vector as a pair , where is the representation of as a bit vector and is equal to . To prove validity of its input, the proof needs to do two things: verify that (a) is indeed a bit vector, and (b) the integer encoded by is the square root of . In order to check these things efficiently, the proof system uses a random challenge generated by the leader. At a very high level, the client wants to prove that the following boolean conjunction holds: is 1 or 0 AND is 1 or 0 AND ... AND encodes the square root of . In Prio, we need to represent this conjunction as an arithmetic circuit, which the random challenge helps us do. In a bit more detail now: Let's rewrite this expression as w[0] AND w[1] AND ... AND w[s]. As an arithmetic circuit, this expression is actually , where if and only if w[i] is true. If the input is valid, then the expression should evaluate to . But because we're talking about field elements, a malicious client may try to play games like setting and so that the sum is still , but the statement is false. To detect this, we actually compute the following expression: , where is a randomly chosen field element. If the statement is true, then the expression evaluates to . If the statement is false, then with high probability, the expression will evaluate to something other than . 
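The random-linear-combination idea just described can be illustrated with a toy example. The field modulus and clause values below are made up for illustration (Prio uses its own field), and the check shown is only the high-level trick, not the full FLPCP:

```python
import secrets

P = 2**61 - 1  # a Mersenne prime standing in for the Prio field modulus


def randomized_check(clause_values, r):
    """Accept only if every clause value is zero, by testing the random
    linear combination sum_i r^i * clause_values[i] == 0 (mod P)."""
    acc, power = 0, r
    for v in clause_values:
        acc = (acc + power * v) % P
        power = (power * r) % P
    return acc == 0


honest = [0, 0, 0]       # every clause of the conjunction holds
cheater = [2, P - 2, 0]  # clauses fail, but their plain sum is 0 mod P

r = secrets.randbelow(P - 2) + 2  # avoid the degenerate challenges 0 and 1
assert randomized_check(honest, r)
assert sum(cheater) % P == 0            # an unrandomized sum check is fooled
assert not randomized_check(cheater, r)
```

The cheating vector defeats a plain sum check, but under a random challenge the combination 2r(1 - r) is nonzero for every r outside {0, 1}, so the randomized check rejects it.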
(This idea comes from Theorem 5.3)\nThank you for the detailed explanation NAME Having thought about it more, I suspect an HTTP GET wouldn't be the right way to support proof systems that need a challenge. Assuming that a unique challenge is needed for every report[^1], then intuitively the leader would have to maintain a mapping of reports to issued challenges so that the leader and helper(s) can later use the challenge value when evaluating the client's proof. But now that the leader has to maintain state per-report, I think the upload protocol is missing a notion of a report identifier. The contains a , but IIUC the will be the same across many reports[^2] (e.g., if a browser is sending daily reports on how often users click a button, they use the same every time they submit). So needs to include a report ID, which must be echoed back to the leader in so that the leader can look up the challenge for the report. [^1] Is this true? Or would it suffice for each to use the same challenge across multiple reports? [^2] I notice that the design doc lacks a strong definition of a report and how it relates to tasks, inputs, measurements: because GET should be \"safe\" in the sense of not changing any server state, and it seems like the server would have to maintain a mapping of reports to issued challenges so that the leader and helper can later use the challenge value when evaluating the client's proof.
It would could choose a random nonce N, compute R = PRF(K, N) as the challenge, and hand (N, R) to the client in its upload start response. The client would then use R to generate the proof and send N to the leader in its upload finish request. The leader could then re-derive R. NAME and I kicked around the idea of having the client generate a report ID. I think this may end up being useful, but we weren't sure if it was better to have the leader choose it or the client. To conform to the security model of BBCp19, each report would need its own challenge. It may be possible to relax this, but this would be risky.\nI like this a lot! I'm a big fan of deterministic derivation schemes like this -- besides reintroducing the possibility of an idempotent GET, eliminating storage requirements for challenges makes it much easier to scale up leaders. If the report IDs are big enough (UUIDs?) we could even use them as . I would say the leader, because if you let the client do it, then the leader would have to check for things like collisions with existing report IDs or other malicious ID choices. If the leader chooses it, then all it has to do is generate a few bytes of randomness. I agree.\nNAME we'd appreciate your feedback on this issue because it involves security considerations for ZKP systems on secret-shared data [1]. I'll quickly sum up the problem so that you don't need to worry about the conversation above. The problem at hand regards systems like [1, Theorem 5.3] that require interaction between the prover and verifier. Specifically, we're considering a special case of Theorem 5.3 in which the verifier sends the prover a random \"challenge\" that is used to generate and verify a (non-interactive) FLPCP (call it ): We refer to the prover's first and second messages as \"upload start\" and \"upload finish\" respectively. Each message corresponds to an HTTP request made by a web client to a web server. 
Ideally, the verifier would be able to handle these requests statelessly, i.e., without having to store . Consider the following change. Suppose the verifier has a long-term PRF key . In response to an upload start request, it chooses a unique \"upload id\" --- maybe this is a 16-byte integer chosen at random --- and computes and sends to the prover. In the upload finish request, the prover sends so that the verifier can re-derive instead of needing to remember it: The (potential) problem with this variant is that a malicious client is able to replay a challenge as many times as it wants. This behavior isn't accounted for in Definition 3.9, so when you try to \"compile\" this system into a ZKP on secret-shared data as described in Section 6.2 and consider its security with respect to Definition 6.4 (Setting I), you can no longer appeal to Theorem 6.6 to reason about its security. Our question is therefore: What impact do you expect this change to have on the soundness of the system? Can you think of an attack that exploits the fact that challenges can be replayed? My expectation is that the problem is merely theoretical, i.e., we ought to be able to close the gap with an appropriate refinement of Definition 3.9 and reproving Theorem 5.3. [1] URL\nThanks for looping me in. In the discussion below, I am assuming that the client sends its data vector (called \"x\" in the paper [1]) at the same time as it sends its proof to the verifier. Even if the client sends its data vector in the first message, it seems that the client can reuse an (R,N) pair from upload request 1 to later submit upload request 2, in which case the client sees (R, N) before it chooses its data vector and proof. So I think we can assume that the client's data vector and proof can depend on the verifier's random value R. (I'll call this value \"r\" to match the paper.) For soundness: the client/prover must commit to its data vector before it learns the verifier's random value r. 
If the prover can choose its data value and proof in a way that depends on the verifier's randomness r, the soundness guarantee no longer holds. There is also an efficient attack. So I suspect that the stateless-verifier optimization is unsound. The attack works as follows: given r, the prover finds a non-satisfying vector x such that sum_i r_i C(A_i(x)) = 0, again using the notation from Theorem 5.3 in the paper [1]. For the circuits C used in most applications, it will be easy to find such a vector x. Then, the prover constructs a proof that sum_i r_i C(A_i(x)) = 0. The proof is valid, so the verifier will always accept. To remove interaction, the Fiat-Shamir-like technique in Section 6.2.3 is probably the best way to go. We only sketched the idea in the paper (without formal analysis or proof) but if it's going to be important for your application, I'm happy to sanity-check the protocol you come up with. Another idea—the details of which I have not checked carefully—would be to modify your protocol as follows: The prover sends its data vector x to verifiers, split into k shares x_1, ..., x_k—one for each of the k verifiers. Each verifier holds a PRF key k_i, computes r_i = PRF(k_i, x_i), and they return r = r_1 + ... + r_k to the prover. The verifiers store no state. The prover computes the proof pi and sends (x_1, pi_1), ..., (x_k, pi_k) to the verifiers—again, one for each of the k verifiers. Each verifier computes r_i = PRF(k_i, x_i), they jointly compute r = r_1 + ... + r_k, and check the proof using randomness r. The Fiat-Shamir-like approach seems slightly cleaner to me, since it only requires a single prover-to-verifier message. Either way, please let me know if I misunderstood your question or if anything else here is unclear. [1] URL\nThanks for the nice explanation, Henry. It's perfectly clear and answers my question. 
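The second idea above (binding each verifier's randomness to the share it receives) might look roughly like this; HMAC-SHA256 stands in for the PRF and the field modulus is a toy choice, neither taken from the paper, and the small reduction bias is ignored:

```python
import hashlib
import hmac

P = 2**61 - 1  # toy field modulus, for illustration only

def verifier_rand(prf_key: bytes, share: bytes) -> int:
    # r_i = PRF(k_i, x_i), mapped into the field. Because r_i depends on
    # the submitted share, the prover cannot learn r before committing to x.
    digest = hmac.new(prf_key, share, hashlib.sha256).digest()
    return int.from_bytes(digest, "big") % P

prf_keys = [b"verifier-1-key", b"verifier-2-key"]
shares = [b"x_1 share bytes", b"x_2 share bytes"]

# First message: each verifier derives r_i from its share; the prover
# receives the joint randomness r. No verifier stores any state.
r = sum(verifier_rand(k, x) for k, x in zip(prf_keys, shares)) % P

# Second message: on receiving (x_i, pi_i), each verifier recomputes the
# same r_i, so the joint r used to check the proof is reproducible.
assert r == sum(verifier_rand(k, x) for k, x in zip(prf_keys, shares)) % P
```

The key property is that r is a deterministic function of the shares themselves, so replaying an old challenge against a new input is no longer possible.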
It also points to something that I, for one, missed the importance of: the need for the prover to "commit" to the input shares before generating the r-dependent proof. I appreciate you pointing this out. The only reason upload start is not idempotent is the need to send the randomness r to the client. It would be nice to do away with this requirement. I think the way forward is using the Fiat-Shamir technique. NAME I'll get cracking on this and let you know when I have something for you to look at.\nNAME I believe the change is something like this. Let be a hash function with range . Suppose there are verifiers. The prover proceeds as follows: Split its inputs into shares . For each , do: Choose blinding factor from at random. Let . Set . Derive field element from (e.g., seed a PRNG with and map the output to a field element by rejection sampling on chunks of the output). Generate the proof using and . Split into shares . For each send to the -th verifier.\nYes, this looks right to me!\nAs of URL, the upload start request no longer exists. When we're ready to specify Prio, we'll want to include the Fiat-Shamir approach for protocols that use joint randomness. I've noted this and am closing the issue."} {"_id":"q-en-draft-ietf-ppm-dap-3a313dfd6e549ce8c8559ab7ee7c261e1ea21f6df378ce27f3773f9bd1e68329","text":"I added a clarification that the nonce is random. IMO the question of who randomly generates the nonce goes with the question of how participants agree upon PDAParams, which is to say it's outside the protocol.\nOn the 2021/7/7 design call we decided that the ID should be derived deterministically from the serialized PAParam by, for example, hashing it. This solves a few problems simultaneously. This settles the question of whether there should be 1:1 correspondence between a task ID and the parameters used for that task. 
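The deterministic derivation mentioned above could be sketched as follows; SHA-256 and the TLS-style length-prefixed encoding are illustrative assumptions, since the thread has not yet settled on a hash or a wire format at this point:

```python
import hashlib
import struct

def task_id(serialized_params: bytes) -> bytes:
    # Deterministic 32-byte task ID: hash the canonical encoding of the
    # task parameters, so equal parameters always map to the same ID.
    return hashlib.sha256(serialized_params).digest()

# Hypothetical TLS-style encoding: an opaque<0..2^16-1> parameter blob.
params = b"example-task-parameters"
encoded = struct.pack("!H", len(params)) + params

tid = task_id(encoded)
assert len(tid) == 32
assert task_id(encoded) == tid  # same parameters, same ID
```

Two tasks with byte-identical parameters would collide under this scheme, which is exactly the uniqueness question the thread turns to next (and why a random nonce inside the structure is suggested).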
NAME brought up an interesting question, which is how to make the task ID unique when two different data collection tasks use the same parameters. Thoughts?\nThe consensus here is to do SHA-256 over the PDAParams structure and use that as the 32 byte PDATaskID. However, hashing a protocol message requires us to decide on a canonical representation of a protocol message, which I think forces us to decide how to encode messages on the wire. I filed for that question. I'll proceed with a PR for this issue on the assumption that we'll pick JSON.\nAs I mentioned there, the assumption would be that we're adopting TLS' binary encoding. If so, in TLS notation, you would write as the hash of , which itself is a serialized data structure.\nAll we require is that each PAParam have some unique value, right? A random nonce would probably suffice. Re: the encoding, yes, let's stick with TLS's wire format as NAME suggests.\nYes, my thinking was that we would just stick 16 bytes of random data in (a UUID's worth). I agree that we should stick with TLS binary encoding.\nApproved, with the following suggestion: Let's leave a [TODO: ...] comment under the definition of for clarifying how it should be chosen."} {"_id":"q-en-draft-ietf-rats-reference-interaction-models-ffee840fb1de94fa6dc2be1f75abcb5b141519628d6b4db3b3f666796198b8ab","text":"… in diagram. Also fixed "randomized" (was British English before). Signed-off-by: Michael Eckel\nA handle must be created/generated before it can be distributed. Adding a generateHandle() call would also strengthen understanding by readers.\nBased on URL In the sequence diagram, some elements in the bracket are not defined in the “Information Elements” section, such as targetEnvironment, collectedClaims, eventLog. Should these elements be defined?\nI totally agree. These elements should either be put into Information elements (preferred), or at least be mentioned in the text. 
Hi Henk, I've reviewed the reference interaction models I-D recently, I hope the comments below can help improve the draft before the IETF110 draft submission cut-off. Section 3 Disambiguation Comment > This section is talking about the disambiguation of terminology, so I suggest making it a sub-section of Section 2 Terminology. Examples of these types of co-located environments include: a Trusted Execution Environment (TEE), Baseboard Management Controllers (BMCs), as well as other physical or logical protected/isolated/shielded Computing Environments (e.g. embedded Secure Elements (eSE) or Trusted Platform Modules (TPM)). Comment > About \"these types of co-located environments\", The previous sentences are about the Verifier and Attester don't need to be remote. So \"these\" is weird here, because here is about the attesting environment and target environment. Section 5 Direct Anonymous Attestation Comment > I think it's better to move this section to the bottom of the draft. DAA doesn't introduce a new information elements, and only augments the scope/definition of Attester Identity and Authentication Secret IDs, describing it after the introduction of all the 3 basic interaction models would be better. Putting DAA in the middle makes me feel the basic interaction models rely upon DAA, but actually it's not. This document extends the duties of the Endorser role as defined by the RATS architecture with respect to the provision of these Attester Identity documents to Attesters. The existing duties of the Endorser role and the duties of a DAA Issuer are quite similar as illustrated in the following subsections. Comment > Without DAA, I think the Endorser also needs to provision the Attester Identity to Attesters. And as I understand, the DAA Issuer is a supply chain entity before the Attester being shipped, so it is the Endorser when DAA is used, right? If yes, then in the next sentence, the comparison between Endorser and DAA Issuer doesn't make sense to me. 
Section 5.1 Endorsers Comment > Does this section have difference with the definition of Endorser in architecture draft? If not, is it necessary to keep this section? Comment > This section only describes the Layered Attestation, but the scope of Endorsement is more than what's described here. Endorsement indicates the Attester's various capabilities such as Claims collection and Evidence signing. For other situations than Layered Attestation, the Endorser and Endorsement are also needed. So why only mention Layered Attestation here? I don't see any special relationship between DAA and Layered Attestation. Section 5.2 Endorsers for Direct Anonymous Attestation In order to enable the use of DAA, an Endorser role takes on the duties of a DAA Issuer in addition to its already defined duties. DAA Issuers offer zero-knowledge proofs based on public key certificates used for a group of Attesters [DAA]. Effectively, these certificates share the semantics of Endorsements, with the following exceptions: Comment > In the first sentence, I suggest saying that \"a DAA Issuer takes on the role of an Endorser\". o The associated private keys are used by the DAA Issuer to provide an Attester with a credential that it can use to convince the Verifier that its Evidence is valid. To keep their anonymity the Attester randomizes this credential each time that it is used. Comment > How to understand \"Evidence is valid\"? Does it mean the Evidence is sent from an authentic Attester and not tampered during the conveyance? Or does it mean the Evidence comes from a RoT and is trustable (although the Evidence can diverge from the Reference Values)? Comment > When saying \"Attester randomizes this credential\", how many credentials does an Attester have? 1) Can a DAA Issuer have multiple key pairs and use them for one Attester? 2) Can a DAA Issuer use one key pair to generate multiple credentials for one Attester? 
o A credential is conveyed from an Endorser to an Attester in combination with the conveyance of the public key certificates from Endorser to Verifier. Comment > Is there another way to convey the public key certificate, for example, can the public key certificates be conveyed from Endorser to Attester first and then from Attester to Verifier? The zero-knowledge proofs required cannot be created by an Attester alone - like the Endorsements of RoTs - and have to be created by a trustable third entity - like an Endorser. Due to that semantic overlap, the Endorser role is augmented via the definition of DAA duties as defined below. This augmentation enables the Endorser to convey trustable third party statements both to Verifier roles and Attester roles. Comment > For \"the definition of DAA duties as defined below\", what is the definition of DAA duties? Comment > In the last sentence, the Endorsement with its original definition is the third party statement for Verifier and Attester, isn't it? So, \"augmentation\" isn't the correct expression. Section 6 Normative Prerequisites Attester Identity: The provenance of Evidence with respect to a distinguishable Attesting Environment MUST be correct and unambiguous. Comment > The Attester Identity is to identify which Attester the Evidence comes from. But if the Attester has multiple Attesting Environments, what should be the Attester Identity? Comment > The TPM's AIK certificate is one kind of Attester Identity, right? Attestation Evidence Authenticity: Attestation Evidence MUST be correct and authentic. Comment > Is it appropriate to say \"correct Evidence\"? If saying so, I think it means that the Evidence satisfies the Attestation Policy for Evidence. Authentication Secret: An Authentication Secret MUST be available exclusively to an Attester's Attesting Environment. The Attester MUST protect Claims with that Authentication Secret, thereby proving the authenticity of the Claims included in Evidence. 
The Authentication Secret MUST be established before RATS can take place. Comment > Does the Authentication Secret represent the identity of the Attesting Environment? Comment > How to understand the Authentication Secret, and is it necessary all the time? For example, in our implementation, during the router booting up, the BIOS measures the BootLoader and records the measured hash value into the TPM, and the BootLoader measures the OS Kernel and records the measured hash value into the TPM as well. From the Layered Attestation perspective, I think (BIOS + TPM) is the Attesting Environment and (BootLoader + TPM) is also the Attesting Environment. But the Claims (measured hash values) aren't protected separately, and they finally becomes the Evidence (TPM Quote) and it's only protected by the TPM's AIK. So, what is the Authentication Secret? Section 7 Generic Information Elements Attester Identity ('attesterIdentity'): mandatory A statement about a distinguishable Attester made by an Endorser without accompanying evidence about its validity - used as proof of identity. Comment > Previous section says \"Attester Identity\" is about \"a distinguishable Attesting Environment\", and here says it's about \"a distinguishable Attester\". Comment > How to understand \"without accompanying evidence about its validity\"? Attester Identity ('attesterIdentity'): mandatory In DAA, the Attester's identity is not revealed to the verifier. The Attester is issued with a credential by the Endorser that is randomized and then used to anonymously confirm the validity of their evidence. The evidence is verified using the Endorser's public key. Comment > I think here means the DAA credential represents the Attester Identity. Comment > There is ambiguity of \"that is randomized\", does it mean randomized Endorser or randomized credential? Comment > For \"confirm the validity of their evidence\", what does \"their\" refer to? And what does \"the validity of evidence\" mean? 
Authentication Secret IDs ('authSecID'): mandatory A statement representing an identifier list that MUST be associated with corresponding Authentication Secrets used to protect Evidence. Comment > Previous section says \"Authentication Secret\" is used to protect Claims, but here says it's used to protect Evidence. Comment > As I understand, if Authentication Secret represents the identity of Attesting Environment, then it's not mandatory, at least in our implementation. Authentication Secret IDs ('authSecID'): mandatory In DAA, Authentication Secret IDs are represented by the Endorser (DAA issuer)'s public key that MUST be used to create DAA credentials for the corresponding Authentication Secrets used to protect Evidence. In DAA, an Authentication Secret ID does not identify a unique Attesting Environment but associated with a group of Attesting Environments. This is because an Attesting Environment should not be distinguishable and the DAA credential which represents the Attesting Environment is randomised each time it used. Comment > In my understanding, here says that the DAA credential identities the Attesting Environment. Compared with the description in the \"Attester Identity\" part, what does the DAA credential represent actually? Reference Claims ('refClaims') mandatory Reference Claims are components of Reference Values as defined in [I-D.ietf-rats-architecture]. [Editor's Note: Definition might become obsolete, if replaced by Reference Values. Is there a difference between Claims and Values here? Analogously, why is not named Reference Claims in the RATS arch?] Comment > I suggest using Reference Values to keep consistent with the RATS arch. Reference Claims are used to appraise the Claims received from an Attester via appraisal by direct comparison. Comment > \"Direct comparison\" isn't the correct expression, the comparison can have other rules. 
Claim Selection ('claimSelection'): optional A statement that represents a (sub-)set of Claims that can be created by an Attester. Claim Selections can act as filters that can specify the exact set of Claims to be included in Evidence. An Attester MAY decide whether or not to provide all Claims as requested via a Claim Selection. Comment > The \"all Claims\" may be ambiguous, I'd like to double check, does it refer to all Claims that the Attester can create or refer to all Claims requested in the Claim Selection? Section 8.1 Challenge/Response Remote Attestation Comment > In the sequence diagram, some elements in the bracket are not defined in the \"Information Elements\" section, such as targetEnvironment, collectedClaims, eventLog. Should these elements be defined? Section 9 Additional Application-Specific Requirements Depending on the use cases covered, there can be additional requirements. An exemplary subset is illustrated in this section. Comment > Here starts to talk about \"additional requirements\", but I wonder there is no other places in this draft talking about requirement, so what are the basic requirements? Regards & Thanks! Wei Pan"} {"_id":"q-en-draft-ietf-rats-reference-interaction-models-28a32388a09221ec70a2aa018f71814db30fe1f01843a6b2d593cd56fc1f75ad","text":"Signed-off-by: Michael Eckel\nBased on URL Previous section says “Attester Identity” is about “a distinguishable Attesting Environment”, and here says it’s about “a distinguishable Attester”.\nHow to understand “without accompanying evidence about its validity”?\nI think it means that there is no certificate conveyed along with it which proves its validity.\nI think this is related to and should be fixed.\nBased on URL The Attester Identity is to identify which Attester the Evidence comes from. 
But if the Attester has multiple Attesting Environments, what should be the Attester Identity?\nThe TPM’s AIK certificate is one kind of Attester Identity, right?\n\"The TPM’s AIK certificate is one kind of Attester Identity, right?\": Correct.\n\"The Attester Identity is to identify which Attester the Evidence comes from. But if the Attester has multiple Attesting Environments, what should be the Attester Identity?\" I see the point, too, and agree. So, perhaps it should be renamed to \"Attesting Environment identity\"!? The Attester Identity then should be something different. It may be the TPM AK/AIK of one of the Attesting Environments, but not necessarily.\nAs discussed, polish can happen later Hi Henk, I've reviewed the reference interaction models I-D recently, I hope the comments below can help improve the draft before the IETF110 draft submission cut-off. Section 3 Disambiguation Comment > This section is talking about the disambiguation of terminology, so I suggest making it a sub-section of Section 2 Terminology. Examples of these types of co-located environments include: a Trusted Execution Environment (TEE), Baseboard Management Controllers (BMCs), as well as other physical or logical protected/isolated/shielded Computing Environments (e.g. embedded Secure Elements (eSE) or Trusted Platform Modules (TPM)). Comment > About \"these types of co-located environments\", The previous sentences are about the Verifier and Attester don't need to be remote. So \"these\" is weird here, because here is about the attesting environment and target environment. Section 5 Direct Anonymous Attestation Comment > I think it's better to move this section to the bottom of the draft. DAA doesn't introduce a new information elements, and only augments the scope/definition of Attester Identity and Authentication Secret IDs, describing it after the introduction of all the 3 basic interaction models would be better. 
Putting DAA in the middle makes me feel the basic interaction models rely upon DAA, but actually it's not. This document extends the duties of the Endorser role as defined by the RATS architecture with respect to the provision of these Attester Identity documents to Attesters. The existing duties of the Endorser role and the duties of a DAA Issuer are quite similar as illustrated in the following subsections. Comment > Without DAA, I think the Endorser also needs to provision the Attester Identity to Attesters. And as I understand, the DAA Issuer is a supply chain entity before the Attester being shipped, so it is the Endorser when DAA is used, right? If yes, then in the next sentence, the comparison between Endorser and DAA Issuer doesn't make sense to me. Section 5.1 Endorsers Comment > Does this section have difference with the definition of Endorser in architecture draft? If not, is it necessary to keep this section? Comment > This section only describes the Layered Attestation, but the scope of Endorsement is more than what's described here. Endorsement indicates the Attester's various capabilities such as Claims collection and Evidence signing. For other situations than Layered Attestation, the Endorser and Endorsement are also needed. So why only mention Layered Attestation here? I don't see any special relationship between DAA and Layered Attestation. Section 5.2 Endorsers for Direct Anonymous Attestation In order to enable the use of DAA, an Endorser role takes on the duties of a DAA Issuer in addition to its already defined duties. DAA Issuers offer zero-knowledge proofs based on public key certificates used for a group of Attesters [DAA]. Effectively, these certificates share the semantics of Endorsements, with the following exceptions: Comment > In the first sentence, I suggest saying that \"a DAA Issuer takes on the role of an Endorser\". 
o The associated private keys are used by the DAA Issuer to provide an Attester with a credential that it can use to convince the Verifier that its Evidence is valid. To keep their anonymity the Attester randomizes this credential each time that it is used. Comment > How to understand \"Evidence is valid\"? Does it mean the Evidence is sent from an authentic Attester and not tampered during the conveyance? Or does it mean the Evidence comes from a RoT and is trustable (although the Evidence can diverge from the Reference Values)? Comment > When saying \"Attester randomizes this credential\", how many credentials does an Attester have? 1) Can a DAA Issuer have multiple key pairs and use them for one Attester? 2) Can a DAA Issuer use one key pair to generate multiple credentials for one Attester? o A credential is conveyed from an Endorser to an Attester in combination with the conveyance of the public key certificates from Endorser to Verifier. Comment > Is there another way to convey the public key certificate, for example, can the public key certificates be conveyed from Endorser to Attester first and then from Attester to Verifier? The zero-knowledge proofs required cannot be created by an Attester alone - like the Endorsements of RoTs - and have to be created by a trustable third entity - like an Endorser. Due to that semantic overlap, the Endorser role is augmented via the definition of DAA duties as defined below. This augmentation enables the Endorser to convey trustable third party statements both to Verifier roles and Attester roles. Comment > For \"the definition of DAA duties as defined below\", what is the definition of DAA duties? Comment > In the last sentence, the Endorsement with its original definition is the third party statement for Verifier and Attester, isn't it? So, \"augmentation\" isn't the correct expression. 
Section 6 Normative Prerequisites Attester Identity: The provenance of Evidence with respect to a distinguishable Attesting Environment MUST be correct and unambiguous. Comment > The Attester Identity is to identify which Attester the Evidence comes from. But if the Attester has multiple Attesting Environments, what should be the Attester Identity? Comment > The TPM's AIK certificate is one kind of Attester Identity, right? Attestation Evidence Authenticity: Attestation Evidence MUST be correct and authentic. Comment > Is it appropriate to say \"correct Evidence\"? If saying so, I think it means that the Evidence satisfies the Attestation Policy for Evidence. Authentication Secret: An Authentication Secret MUST be available exclusively to an Attester's Attesting Environment. The Attester MUST protect Claims with that Authentication Secret, thereby proving the authenticity of the Claims included in Evidence. The Authentication Secret MUST be established before RATS can take place. Comment > Does the Authentication Secret represent the identity of the Attesting Environment? Comment > How to understand the Authentication Secret, and is it necessary all the time? For example, in our implementation, during the router booting up, the BIOS measures the BootLoader and records the measured hash value into the TPM, and the BootLoader measures the OS Kernel and records the measured hash value into the TPM as well. From the Layered Attestation perspective, I think (BIOS + TPM) is the Attesting Environment and (BootLoader + TPM) is also the Attesting Environment. But the Claims (measured hash values) aren't protected separately, and they finally becomes the Evidence (TPM Quote) and it's only protected by the TPM's AIK. So, what is the Authentication Secret? 
Section 7 Generic Information Elements Attester Identity ('attesterIdentity'): mandatory A statement about a distinguishable Attester made by an Endorser without accompanying evidence about its validity - used as proof of identity. Comment > Previous section says \"Attester Identity\" is about \"a distinguishable Attesting Environment\", and here says it's about \"a distinguishable Attester\". Comment > How to understand \"without accompanying evidence about its validity\"? Attester Identity ('attesterIdentity'): mandatory In DAA, the Attester's identity is not revealed to the verifier. The Attester is issued with a credential by the Endorser that is randomized and then used to anonymously confirm the validity of their evidence. The evidence is verified using the Endorser's public key. Comment > I think here means the DAA credential represents the Attester Identity. Comment > There is ambiguity of \"that is randomized\", does it mean randomized Endorser or randomized credential? Comment > For \"confirm the validity of their evidence\", what does \"their\" refer to? And what does \"the validity of evidence\" mean? Authentication Secret IDs ('authSecID'): mandatory A statement representing an identifier list that MUST be associated with corresponding Authentication Secrets used to protect Evidence. Comment > Previous section says \"Authentication Secret\" is used to protect Claims, but here says it's used to protect Evidence. Comment > As I understand, if Authentication Secret represents the identity of Attesting Environment, then it's not mandatory, at least in our implementation. Authentication Secret IDs ('authSecID'): mandatory In DAA, Authentication Secret IDs are represented by the Endorser (DAA issuer)'s public key that MUST be used to create DAA credentials for the corresponding Authentication Secrets used to protect Evidence. 
In DAA, an Authentication Secret ID does not identify a unique Attesting Environment but associated with a group of Attesting Environments. This is because an Attesting Environment should not be distinguishable and the DAA credential which represents the Attesting Environment is randomised each time it used. Comment > In my understanding, here says that the DAA credential identities the Attesting Environment. Compared with the description in the \"Attester Identity\" part, what does the DAA credential represent actually? Reference Claims ('refClaims') mandatory Reference Claims are components of Reference Values as defined in [I-D.ietf-rats-architecture]. [Editor's Note: Definition might become obsolete, if replaced by Reference Values. Is there a difference between Claims and Values here? Analogously, why is not named Reference Claims in the RATS arch?] Comment > I suggest using Reference Values to keep consistent with the RATS arch. Reference Claims are used to appraise the Claims received from an Attester via appraisal by direct comparison. Comment > \"Direct comparison\" isn't the correct expression, the comparison can have other rules. Claim Selection ('claimSelection'): optional A statement that represents a (sub-)set of Claims that can be created by an Attester. Claim Selections can act as filters that can specify the exact set of Claims to be included in Evidence. An Attester MAY decide whether or not to provide all Claims as requested via a Claim Selection. Comment > The \"all Claims\" may be ambiguous, I'd like to double check, does it refer to all Claims that the Attester can create or refer to all Claims requested in the Claim Selection? Section 8.1 Challenge/Response Remote Attestation Comment > In the sequence diagram, some elements in the bracket are not defined in the \"Information Elements\" section, such as targetEnvironment, collectedClaims, eventLog. Should these elements be defined? 
Section 9 Additional Application-Specific Requirements Depending on the use cases covered, there can be additional requirements. An exemplary subset is illustrated in this section. Comment > Here starts to talk about \"additional requirements\", but I wonder there is no other places in this draft talking about requirement, so what are the basic requirements? Regards & Thanks! Wei Pan"} {"_id":"q-en-draft-ietf-sacm-coswid-00de5703d4fa49c0bae69cf28e3de9f36acfa05ad3ee259c2aed84a41b75ee4d","text":"Addressed the following comments: [Per IETF 110] Add text on the order of native CDDL approach for handling version is used. If a future version of SWID is published which alters the semantics of exists fields, a future version of CoSWID SHOULD (MUST?) use a different tag to represent it. [daw: A new paragraph was added to section 2 detailing how index values are used to support revisions and extensions.] [Per IETF 110] see -16. Use ISO SWID reference for the IANA registry and if desired, used [SEMVER] as an informal reference in the text [daw: This has been done in section 4.1.] [daw: The FIXME markers have been replaced with actual text.] [daw: Section 7 now details the construction of a signed CoSWID. A SHOULD sign requirement was also added to section 7.] [Per IETF 110] We discussed adding language on the order of \"This document specifies a data format and an associated integrity mechanism agnostic of a deployment. Application which choose to use CoSWID will need to specify explicit guidance which security properties are needed and under which circumstances.\" > ** [-15] Section 6. Per \"When an authoritative tag is signed, the software [daw: Authenticating the signer is the software provider will require a match between a CoSWID entity and the signer. We are working out a means to do this.] 2 [daw: Not sure which tags this applies to? Signing? All of section 2?] [daw: done.] [daw: This confusing text has been removed.] [daw: done.] [daw: We added a note about this.] 
[daw: Done.] [daw: Added signature validation as a way.] Regards, Roman"} {"_id":"q-en-draft-ietf-taps-transport-security-8b7bd23d2f1f220ac3d1575f01f447e60be280a6abc8b14c0d8745c01e17165a","text":"Completes\nWe collapsed IKEv2 and ESP into IPsec. Should we remove ESP from feature lists, e.g., the list of protocols that support PSK import?\nAgreed, we should make sure this is consistent. One of our features, Key Expiration, currently only mentions ESP though. Is this actually an IPsec feature now, do we remove the feature, or is there another solution?"} {"_id":"q-en-draft-ietf-taps-transport-security-88bfd1414e102a1507ed024c37bf3f3b783358fdba6fafe951a01377c0c7d26b","text":"…/security and security/transport dependencies.\nNAME can you please take a look before Wednesday?\nYep, I will review Monday morning.\nI think this is fine. We can do further tweaks in subsequent PRs."} {"_id":"q-en-draft-ietf-tls-ctls-e437072bb69488a9223d17f3a7d2876ce15a9de47b7d001167f41638fc8449b9","text":"This PR looks good to me.\ncTLS enumerates keys in the template, but does not explain how templates can be extended. Suggestion: Clients must reject the whole template if it contains any key they do not understand. The “optional” key holds a map of optional extensions that are safe to ignore.\nThanks for addressing your own issue with a PR. Creating an IANA registry for the keys used in the configuration data makes sense. I would, however, like to point out (and maybe we need to make this explicit) that the configuration information is defined for convenience but is not a mandatory-to-implement functionality for a stack.\nSure, some use cases can pre-arrange, or even hardcode, a specific configuration through a nonstandard channel. 
Some use cases (where the client and server are developed separately) obviously cannot."} {"_id":"q-en-draft-ietf-tls-ctls-eaded40a0c0cd00de67ef008b831621cff59808741a70639f04f087b03d3e7ea","text":"FYI, this PR now moves out of the ClientHello back into (now called ). This avoids confusing questions about how to demultiplex profiles that use different modes.\nLGTM.\nThis would allow recipients to detect the end of a handshake message from the CTLSPlaintext header alone, allowing the recipient to reconstruct the entire handshake message before parsing it. Currently, a streaming parser is required in order to detect the end. Suggested implementation: Senders set content_type = ctls_handshake_end for a message's final fragment.\nI can see why you want that, but it's not generally a TLS property, and I'm reluctant to change things.\nAFAIK, TLS does not require a streaming ClientHello parser. The recipient can read (uint24) and then buffer the remainder before parsing. cTLS, as currently specified, does not have . This means that the recipient has to cross all the way down into the extension parsing code before it finds out that the ClientHello is still incomplete. Given the complex template-driven handshake parsing code, I think making a streaming parser required is ill-advised. Given the desire for compactness, I think a flag of some kind on the final handshake message record seems like a reasonable solution, replacing the uint24 length. It would also simplify Pseudorandom cTLS. However, if that's not appealing, you could simply reintroduce .\nNAME proposed a solution on slide 5 : I'm fine with that, so long as it doesn't end up requiring recipients to do something heroic to reconstruct the ClientHello. Thus, \"framing = none\" should imply that messages cannot be fragmented at all (max size limited to 2^14-1). I'm not sure I see the utility of the explicit distinction between \"length\" and \"full\". 
Surely \"length\" is not usable with DTLS due to reordering, and \"full\" is redundant in TLS/TCP. Perhaps we only need a single bit to enable/disable framing, and length-vs-full can be implicit from the transport.\nI could live with this. Objections?\nSolved by\nThe current text doesn't mention this, although it seems relatively clear how it's expected to work (using the DTLS Handshake struct)."} {"_id":"q-en-draft-ietf-tls-ctls-3e2ec919e0f3cb8af70fedb6b77f152336383ff2dc50e5163529d7c8a48cb11d","text":"We have decided to move the compressed elliptic curve representations to an external draft since the issue is equally applicable to regular TLS/DTLS and does not need to be cTLS specific.\nNAME It looks like you need to run .\nIt seems that we have consensus to move compressed representations into new IANA codepoints, rather than try to reuse the existing codepoints with a template-driven encoding. That would make the question of compressed representations orthogonal to cTLS, so this topic can be removed from the draft.\nFixed by"} {"_id":"q-en-draft-ietf-tls-esni-b5986ce06daa28fc22b1bf16670dc8d38eff0d825583c7e4d8c41b32d5782e4c","text":"NAME NAME please have a look.\nWe also need to specify server-side HRR behavior, right? Specifically, the server MUST recheck the second ClientHello's ESNI extension before sending the server certificate, otherwise putting the key share into the AD field doesn't really mean anything. The attacker would be able to do a cut-and-paste thing. (We could also incorporate the nonce into key schedule, but that upsets the document's split mode use case.)\nYep! Thanks for bringing that up. I'll prepare text.\nThank you for the text. 
LGTM."} {"_id":"q-en-draft-ietf-tls-esni-e36437bf98bccecb0708af5c552ff3260a86aab034cadebc6d0a27a272af0275","text":"Because this document now delegates the way the keys are advertised from the RRType ESNI to HTTPSSVC, shouldn't the RRType Considerations be removed from this document?"} {"_id":"q-en-draft-ietf-tls-esni-5ef4585428f30afe26bf73e263abdb3f98a3ba493c11ed63b893cabee2e0f331","text":"I've tried to capture what I think is a reasonable way to handle padding now we've changed from ESNI->ECHO. If we could do something like this, that'd be better than what's in -06. Note that I'd be even happier if we entirely eliminated ECHOConfig.minimum_inner_length - I don't think it actually adds any value and it represents yet another way to get a configuration wrong.\nThanks for the fixes. On the general point, I think that if we keep the flexibility for the client to vary the inner/outer CH however it chooses, then we will also have to depend on clients to figure out how to pad well for whatever they do. If we constrain the inner/outer variance a lot, then we could do better. It also strikes me that the minimum_inner_length is sort of in tension with the idea of compression - if a server has to set the minimum_inner_length to 300ish to handle clients that don't compress, then clients that do compress either don't get the benefit or have to ignore the minimum_inner_length.\nI don't think this PR works. The server does not have enough information to report a useful value, and the value the server reports is not useful to maintain good anonymity sets. The server does not know that the client supports some large extension or key share type (or lots of cipher suites, ALPN protocols, etc). are especially fun. It also doesn't know which extensions the client will compress from the outer ClientHello and which it will reencode. Conversely, the value the server sets isn't useful. 
If the client implementation has a smaller baseline compression and extensions, we waste bytes without much benefit. Worse, if the client implementation has a larger baseline compression and extensions, this PR regresses anonymity. The ClientHello will then exceed and we don't hide the lengths of the sensitive bits. Stepping back, we want to hide the distribution of ClientHelloInner lengths within some anonymity set. Most of the contributors to that length come from local client configuration: how many ciphers you support, what ALPN protocols, etc. The client is the one that knows, e.g., it sometimes asks servers for protocols A and B and sometimes for A and C. It largely has enough information to pad itself. The main exception is the server name. Realistically we need to trim anonymity sets down to colocated services, and the client doesn't know that distribution of names. Thus, we want servers to report the value in ECHOConfig. Maybe we'll need more such statistics later, which is what extensions are for. It's annoying this is a bit complicated, but we're signing up to encrypt far more things now. I should note this presumes an anonymity set of similarly-configured clients. I think, realistically, that's all we can say rigorous things about. It's nice to reduce visible artifacts of configuration, which is why we additionally round up to multiples of 16, but that's something we can burn into the protocol without ECHOConfig support.\nHiya, Sorry, not seeing that last. The PR just says (or was meant to say) to pad to some multiple of 16 in that case which I agree isn't much guidance, but does allow doing something that works. (As well as allowing things that don't work well;-) I agree that all this really needs to be driven by the client. Well, yes and no. I don't believe a server config is useful for that, (as I've argued before and still do;-) ISTM the max server name value will almost always be set to 254. May as well tell the client to pad as do that IMO. 
Not quite sure if I agree with that last para or not TBH! S.\nWell, if we were satisfied just padding names to a multiple of 16, we wouldn't need any of this. We'd just say to pad to a multiple of 16 and move on. :-) In the current name-focused spelling, the client is always able to apply a smarter padding decision based on the server's knowledge of the name distribution. With this PR, clients whose local configuration exceeds or is near lose this and hit the baseline multiple of 16 behavior. Clients whose local configuration is decently below do hide the name distribution, but using many more bytes than they would with the current text. We have some metrics in Chrome on the distribution of DNS names. It's nowhere near 254. But, regardless, if you don't believe a server config is even useful for the name and agree with me that other extensions ought to be client-driven, I'm confused, why this PR? This PR seems to go in the opposite direction. Hehe. Which part?\nHiya, Yes. That's IMO the downside of having any server config. (To answer your question below, I prefer not having any server config at all but proposed this as I think it's better than draft-06.) Having also looked at some passive DNS stuff, I fully agree. But ISTM for many scenarios the entity that generates the ECHOConfig won't know the distribution of names in use (e.g. whoever configures OpenSSL won't, same with Apache etc.) so it's likely they'll have to go for whatever is the max (i.e. 254). Same is true for anyone with a wild-card server cert in play somewhere. See above. I'm trying for either a modest improvement over draft-06 or for us all to arrive at a real improvement by ditching the server config entirely:-) I like the last sentence. I think I dislike the 1st:-) Cheers, S.\nIt's not a downside of the draft-06 formulation, is it? The client learns something targeted to the name and can pad accordingly. Even with a wildcard server cert, the server should still know the distribution. 
For instance, isn't going to see anything terribly long most of the time. (Looks like the registration limit is 39 characters.) For reference, this is how long a 254-character hostname is: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa In order to enjoy wide adoption, ECHO can't be too much of a waste of resources. I think it thus makes sense to target a different maximum here. I guess my view is that, because of this issue above, this is a regression over the draft-06 formulation, even though it looks more general.\nHiya, Draft-06 assumes that no other inner CH content has the property that server guidance will help. My claim is that that guidance is useless in practice, and therefore unnecessary. I think the server config'd length in the PR is a little less useless:-) Some servers will. Some will not. Many (most?) server ECHOConfig instances will make use of default settings. Those'll pick 254 because adding a new VirtualHost to an apache instance (or similar) won't be co-ordinated with making a new ECHOConfig in general. And even if it were, (but it won't!) we get a TTL's worth of mismatch. If we don't provide a way for clients to handle the case where the server config is too short, again servers will publish 254. I am more interested in random smaller web sites/hosters myself. I think ECHO should work for those as well as we can make it, (albeit with smaller anonymity sets), and this tie between web server config and ECHOConfig is just a bad plan for such. AFAIK, all current deployments do just that. To be fair, that's largely driven by CF's draft-02 deployment, but I think that backs up my argument that if there's a max, that'll almost always be chosen, which implies it's a useless field that leads to the inefficiency you note. 
Then it can't be the max:-) The 254 value matches the max for servername IIRC. Ok. We disagree. But what do you think the client should do? Draft-06 is IMO plainly incorrect in that it says to pad in a way that'd expose other lengths should those be sensitive. If your answer were pad the SNI to at least ECHOConfig.maximum_name_length and then make the overall length a multiple of 16, with as many added 16-octet blocks as the client chooses, then a) that's != draft-06 and b) is quite close to this PR and c) just failing if the actual SNI in the inner CH is longer than the config setting seems quite wrong to me and d) hardcodes the assumption that SNI is the only thing where the server may know better. Cheers, S.\nIf we come up with other places where server guidance is useful, that can be addressed with extensions. But, gotcha, I suppose that is where we disagree. I think server guidance on the overall ClientHello size is entirely useless, while server guidance on the name may be useful. Agreed that it should also work for smaller things. I don't follow how this PR helps in that regard. It seems they still need to come up with a random value or a default, but now it's even less clear how to set it well. Er, yes, sorry I meant to say target a different public length. That is, 254 is so far from the maximum domain name in practice (As it should be! Hard length limits ought to be comfortably away from the actual requirement.), so padding up to it is wasteful. I think it would be better to pad up to a tighter value and, if we exceed it, accept that we only have the fallback multiple of 16 padding. (Or maybe a different strategy.) It does result in smaller anonymity sets in edge cases, but I think that's a reasonable default tradeoff given other desires like broader deployment and not wasting too much of users' data. I'm thinking something along the lines of: For each field, determine how much it makes sense to pad given what kinds of things it sends. 
If it varies ALPN, maybe round up to the largest of those. Most of this can be determined without server help. For fields where server help is useful, like the name, apply that help. Right now it's simply a function. I'm also open to other strategies. Sum all that padding together. In order to generally reduce entropy across different kinds of clients, maybe apply some additional overall padding, like rounding up to a multiple of 16, which cuts down the state space by 16x and doesn't cost much. (Or maybe we should round up to the nearest ? That avoids higher resolution on longer values while bounding the fraction of bytes wasted by this step.) That said, it's worth keeping in mind that clients already vary quite a bit in the cleartext portions of the ClientHello. There's a bit of tension between wanting to look like the other clients and wanting to ship new security improvements. This is different from this PR because this PR asks the server to report a value including things it doesn't know anything about (how large the client-controlled parameters are). I don't quite follow (c). It seems this PR and the above proposal have roughly the same failure modes when something is too long. The difference is that 's failure mode is only tripped on particular long names (something the server knows the distribution of), while this PR's equivalent failure mode is based on factors outside the server's control, yet the server is responsible for the value. 
They don't have insight into anything else, so clients need to make padding decisions based on local configuration.) NAME what do you think?\nCould live with it, but still prefer no server config. I'm looking a bit at some numbers and will try to make a concrete suggestion later today. If I don't get that done, it seems fair to accept NAME algo and move on. Cheers, S.\nIf we can get away with no server config, I'm also entirely happy with that. Fewer moving parts is always great. We certainly could do that by saying names are padded up to 254, but I think that's too wasteful. But if we're happy with some global set of buckets that makes everyone happy (be it multiples of 16, some funny exponential thing), great!\n(trimming violently and responding slowly:-) So your steps 1-6 are a better scheme than draft-06. I still think step 2 would be better as something like: ECHOConfig contains maxnamelen as before but we RECOMMEND it be zero and a) if the name to be padded is longer than maxnamelen, then names will be padded as follows: P=32-(len(name)%32)+heads_or_tails*32, or b) if the name to be padded is shorter than maxnamelen then we just do as the server asked and pad to that length. That (hopefully:-) means a default of \"pad out to the next multiple of 32, then toss a coin and add another 32 if the coin landed as heads.\" If we landed somewhere there, I'd be happy to make this PR say that or to make a new one or whatever. According to some data I have the CDFs for names and for these padded lengths would be as follows:\nLen, CDF, Padded-CDF\n32, 81.28, 40.64\n64, 97.87, 89.57\n96, 99.31, 98.59\n128, 99.87, 99.59\nThat means that 81.28% of names are <32 octets long and 40.64% of name+namepadding will be 32 octets long. 97.87% of names are <64 octets and 89.57% of name+namepadding are 64 octets long. I think we should RECOMMEND setting maxnamelen to zero unless an ESNI key generator has a specific reason to set it to a non-zero value. 
ISTM the use-case for non-zero values is where the anonymity set has only a few very long names that'd stand out with length differences of ECHO. I'm basically fine that we default to not serving that use-case well by default, as I don't believe it's real. (But I could be wrong;-) Cheers, S.\nCould the sender always pad their packets to at least 1200 bytes or something, using an encrypted extension in the ClientHello? I don't understand the focus on server input to a padding function that would ideally fill out one packet.\nNAME what's the value in adding zero or one 32B blocks after rounding? Why not just stick with what was done, say, in RFC8467? (We could make it read something like, \"clients may add zero or one blocks.\" Either way, I'm more or less fine with the suggested change. Would you mind updating this PR to match?)\nPer above, clients could use help from servers in choosing appropriate padding lengths. (Only servers know their name anonymity set.) 
OTOH, I guess there may be a few who can't easily change names so we probably ought try do something for 'em. Cheers, S.\nAgree that only servers could know their name anonymity set. But what is the cost of always padding out the ClientHello so it approaches the MTU? Just trying to understand why the spec attempts to economize on the number of bytes in the ClientHello packet.\nAs far as I know, there's no reason other than folks might not want to always \"waste\" these bytes.\nPadding the ClientHello up to an MTU isn't free. See the numbers here on what a 400-byte increase costs. URL Likewise, also from that post and a , post-quantum key shares will likely exceed a packet anyway. I think our padding scheme should be designed with that expectation in mind (e.g., the mechanism fails this because a PQ key share in the inner ClientHello will blow past the built-in server assumptions on client behavior). Canary builds of Chrome already have code for such a thing behind a flag. Edit: Added a link to the later experiment, which I'd forgotten was a separate post.\nIt's not quite clear what's going on in those results. For example, the mobile results are close to zero. Without seeing confidence intervals, and the distributions of the full packet sizes, I don't think this experiment settles this issue (although it could, with more detail). Also, is there a theory on /why/ 400 bytes might add latency? If we're going to design the padding mechanism with PQ key shares in mind, but not conventional ones, that should be in the document.\nNAME would know the details of that experiment, but I don't think the idea that sending more bytes takes more time would be terribly surprising! :-P I don't think anyone's suggested designing without conventional key shares in mind. They are what's deployed today, after all. However, we shouldn't design only for a small unpadded ClientHello. PQ key shares will need larger ones. 
Session tickets also go in the ClientHello, and it's common for TLS servers to stick client certificates inside tickets, so you can already get large ones today already.\nLet's assume the results for \"SI\" in the first link actually do show a significant delta. The question is whether per-packet costs dominate or per-byte costs do, and whether the experiment sometimes increased the number of packets (e.g. by sometimes bumping the ultimate ClientHello size above 1280). See Section 5.1 in URL for device performance metrics on bytes vs packets. Looking at the ClientHello traffic on my computer, I can see that Safari is sending ClientHello messages that are about 500-600 bytes. My copy of Chrome seems to be sending CurveCECPQ2 (16696?) shares, and its ClientHello messages are fragmented.\nNAME will you be able to update this as above?\nwould the spec allow me to put an arbitrarily-sized extension in the encrypted part of the ClientHello? if so, I don't care about the padding schemes.\nYes, of course! The padding policies described here are guidance at best.\nI've updated the PR to try reflect the discussion above. Please take a peek and see if it seems useful now.\nThis looks like the right idea. I'll review this again as well, but it's probably better to wait for NAME suggestions to be addressed.\nThanks for all the changes Chris. I think I've actioned all those as requested. If not, I tried, but failed:-)\nNAME do you need any help moving this forward?\nDoing other stuff today but feel free to take the text and then do more edits. Or I can look at it tomorrow. Either's fine by me. S.\nThanks for taking the time. None of my comments are strongly-held opinions (meaning they only need to be addressed at all if others agree that the issues are important).\nStrong +1 to NAME -- thank you for the work here, NAME\nNAME can you please resolve conflicts? NAME NAME can you please review?\nFair point. 
The argument I'd make for keeping this is that if an anonymity set has ~80% of names <32 octets, then if we do nothing, passive observation will probably reveal when the server name is in the longest ~20%. And I'd guess that only a few names in that set would be popular/common, so we could be giving away the game whenever an ECHO is 32 octets longer than usual. The random padding could mitigate that passive attack but yes would be vulnerable to an active reset-based counter. I'd also be fine with just living with the issue in which case we might want to note that using ECHO with uncommonly long names is a bad idea.\nLet's just drop the random addition. Servers don't (can't) do a padding check anyway, so clients can always choose to make this longer if desired.\nNAME can you update this PR and resolve the conflicts?\n\"Fair point. The argument I'd make for keeping this is that if an anonymity set has ~80% of names <32 octets, then if we do nothing, passive observation will probably reveal when the server name is in the longest ~20%. \" Which is precisely why we want the server to provide the longest name, so the client can pad to it.\nAs discussed on today's call, let's do the following: Resolve merge conflicts. Drop random padding for the SNI. And add text which says that if the client's name is larger than maxnamelength, it rounds up to the nearest 32B or 64B boundary. NAME do you need, or would you like, any help with this?\nI think that this approach is generally better, but it runs afoul of divergence in client configuration that could dominate the calculation. For instance, a client that sends two key shares will have very different padding requirements to one that sends only one. For that to work, we need to assume that implementation-/deployment-specific variation of the inner CH is largely eliminated as a result of the compression scheme we adopt. That also assumes that these variations in client behaviour are not privacy-sensitive. 
I think that we should be directly acknowledging those assumptions if that is indeed the case. If there is a significant variation in size due to client-specific measures, then we're in a bit of an awkward position. That might render any similarly simple scheme ineffective. For instance, a client that later supports a set of ALPN identifiers that could vary in length depending on context cannot use this signal directly. It has to apply its own logic about the padding of that extension. That a client is doing this won't be known to a server that deploys this today. In that case, the advice from the server (which knows most about the anonymity set across which it is hoping to spread the resulting CH), isn't going to be forced through an additional filter at the client. The results are unlikely to be good without a great deal of care.\nI am generally fine with this, with the exception of the random padding, which I think should be omitted. It's hard to analyze and the value is unclear. In particular, many clients will retry if they receive TCP errors, and so the attacker can learn information about the true minimum value by forging TCP RSTs and looking at the new CH. 
The only things a server can legitimately do are respond to the inner ClientHello or do a recovery handshake. Thus, most of the things you might set on the outer ClientHello are irrelevant; they can be chosen to look innocuous, chosen at random, or copied from the inner ClientHello as the implementation chooses.\nNAME does this seem reasonable to you?\nSeems reasonable."} {"_id":"q-en-draft-ietf-tls-esni-bdf9e0081c4b23eca846ddb04e352e1a558754300a5ce990e6ba25cd1bc9fbf0","text":"Per NAME suggestion, this replaces computations involving the \"hash underlying the KDF\" with evaluation of the KDF itself. This respects the API defined in the HPKE draft. This supersedes .\nHey NAME and NAME I rolled up your changes! Please have another look before merging.\nRebased and rolled up.\nThanks -- let's let this stew until Friday, at which point we can merge.\nRebased.\nField is computed as \"digest of the complete ClientHelloInner\" (including presumably, the \"outer_extensions\" extension). I don't understand the analytical value of this hash. Can anyone explain its reasoning?\nThe hash is intended to prevent the attacker from influencing the reconstructed CHInner by manipulating CHOuter. Consider the contrived example of an extension which is separately protected but only works for SNI-A and not SNI-B. E.g., it's constructed by doing E(K_sni, SNI). If that extension were included in CHOuter and then imported into CHInner via this mechanism, then an attacker could substitute it and use a reaction attack to determine whether the CH referred to A or B. However, with the hash (which may be not well-specified), this attack gets caught by the client-facing server and the handshake is rejected without leaking information.\nMy initial understanding of this mechanism was incorrect. What happens is that the hash is computed over the ClientHelloInner that is consumed by the backend server. 
This is intended to ensure that if ECH accepts, then the backend server consumes the CH intended by the client. (PR URL makes this more clear, I think.) I wonder if this mechanism is redundant. Agreement on the CH between the client and backend server is provided by the binding of the transcript hash to the handshake traffic secret, so any attempt to manipulate the ClientHelloInner by fiddling with the ClientHelloOuter should be detected. On the other hand, the binding of the outer extensions to ClientHelloInner ensures agreement on the CH prior to the backend server consuming it. This is a strictly stronger property, which may or may not be useful. I agree that the attack works, but the example seems really contrived :) I'd be interested if there are more natural extensions for which this extra layer of security would be useful.\nFrom the interim meeting, the consensus seems to be that we should err on the conservative side.\nLet's close this issue and leave the mechanism as is.\nHrm, before closing I think it might be worth noting, in the spec, the stronger security property this provides.\nI think we should also consider binding the CHInner to a hash of the whole CHOuter (minus the ClientEncryptedCH extension). That would have the same size overhead, but would give us a stronger property: modifying anything in CHOuter (not just one of the copied extensions) invalidates the whole thing.\nThe point of hashing outer extensions is to provide integrity of the ClientHelloInner. What would be the point in hashing more of the ClientHelloOuter? (Not a rhetorical question :) It seems conceptually different, and I'm curious what the security benefit might be.)\nIf we authenticate the ClientHelloOuter, we'd be authenticating all the inputs that go into the ClientHelloInner, so I think that would also provide integrity. I expect Ben is suggesting this in the context of URL, where GREASE can additionally be made a little stronger if we did that. 
Getting that to interact well with any PSK binders in ClientHelloOuter is tricky though, unless we decide to ban resumption attempts in outer ClientHellos. (Probably best to keep the GREASE-related conversation in one place. Though I see I didn't help on that front because some discussion is also in . Ah, GitHub.)\nYep, I'm just echoing NAME observation. See also URL\nToday, the only thing that gets copied into ClientHelloInner is extensions, so I don't see how it's an improvement in terms of our primary security goal. (Not sure about the \"don't stick out\" benefit.)\nI don't understand how hashing all of CHOuter is going to work. In split mode, the server never sees any of CHOuter.\nYes, the hash would be checked when reconstructing the inner ClientHello (as here), and would not be present in the reconstructed ClientHello.\nThere's not much discussion here. Unless there's objection, shall we leave the mechanism as-is and close out this issue? cc/ NAME\nWorks for me.\nFrom Section 6.1: This assumes that the KDF specifies some underlying hash function. From the HPKE spec, it seems like all that's required of the KDF is that it's an \"extract-then-expand\" KDF: URL In fact, the currently assigned code points are all for HKDF: URL But there's no requirement that the KDF always have some underlying hash function. As an alternative to HKDF, one can imagine a dedicated primitive that satisfies the extract-then-expand API but is not constructed from a hash function.\nImpacted fields: ClientEncryptedCH.record_digest URL\nWe can probably replace with and be done with it.\nPutting on my theorist hat, I'd say that the extract function is a \"randomness extractor\" and not a collision resistant hash function. We want the latter in this context. From a practical stand point, using the extract function is more hashing than we need. I'd suggest doing away with (see ) and figuring something else out for . 
Could we simply require HKDF be used with HPKE?\nSure, but in practice randomness extractors use a collision resistant hash or PRF (CBC) under the hood. I don't think we can require HKDF for HPKE. That seems too restrictive. We might just say that ECH requires HKDF-based HPKE ciphersuites, as HKDF is already paramount for TLS 1.3, and then just reference HKDF's hash function.\naddresses this issue.\nAs I mentioned on the list, we can use the KDF in place of a hash.\nI tend to agree. I'll create a new PR.\nHave a look at , NAME\nResolved in . Closing."} {"_id":"q-en-draft-ietf-tls-esni-ed858a9c66a22dda0fb9f2dc7d26bdb94fa2606b7130cdaf40e8d1143fc199a9","text":"In places, the current draft doesn't make a clear distinction between rejection and decryption failure. In particular \"cannot decrypt\" can be interpreted either: \"the client-facing server doesn't recognize the configuration specified by the client\"; or \"the client-facing server recognizes the configuration but decryption fails\". This change resolves this ambiguity. Addresses issue .\nTwo sections suggest to ignore the extension on decryption failure, but one suggests to send an alert. What should happen? Ignoring the extension on unrecognized failures sounds reasonable in order to \"not stick out\". However, it is not entirely unambiguous what to do if a valid ECHConfig is found but decryption fails. says: says: says:\nI think the first two statements apply when the config_digest doesn't correspond to a known ECHConfig (i.e., \"I can't decrypt this\"); the last is when the ECHConfig is known, but decryption fails (\"I should be able to decrypt this, but the ciphertext is invalid\"). The text could be a lot clearer here.\nNAME Yes, your interpretation is correct. 
I think this is the result of a number of uncoordinated changes.\nClosing as resolved."} {"_id":"q-en-draft-ietf-tls-esni-6c44be55ce661cbdb95b252804b9f3a08a9d118b324ca708761542127ca33909","text":"There are two ClientHelloInners, the actual one used in the handshake, and the encoded one with outerextensions substituted in. Both are described as ClientHelloInner, but this makes it slightly ambiguous which goes into, say, the handshake transcript. (It's late now, so I'll leave this as an issue without a PR. If we don't end up resolving this separately, I'll probably fix this as part of . If we decide to implicitly copy over legacysession_id, the encryption payload becomes less and less defensibly an actual ClientHelloInner.)\nIn my own implementation, I've been referring to the plaintext payload as CompressedClientHelloInner. Which is terrible, but unambiguous.\nWorks for me. I was thinking EncodedClientHelloInner because it's shorter and still works even if you choose not to compress anything. (A no-op encoding is still an encoding.)\n+1 to EncodedClientHelloInner, given that compression is optional. NAME if you have time, could you whip up a PR?\nI noticed this while drafting a PR for . We're a little inconsistent about whether the ClientHelloInner or ClientHelloOuter is constructed first. Section 4 says inner before outer: Section 5.1 implies inner before outer: But section 6.1 says outer before inner: I think inner before outer makes more sense. You certainly can't finish the outer one before you've computed the inner one. (Of course, an implementation can pick whatever order is most convenient, but I think the spec should be consistent with itself.)"} {"_id":"q-en-draft-ietf-tls-esni-008f7b3176873586e3c9ff55c1595ad742178f2532beceef88f61aa977961cda","text":"But that is just how it is. 
We get to choose between that leakage and a known large amount of bustage.\nThis solution admits a straightforward, active distinguisher: just intercept CH1, respond with a fake HRR, and look at CH2. It's true that we \"give up\" on making HRR not stick out in . However, one of the design goals there is to prevent active attacks that trigger the HRR code path when it otherwise would not have happened. What we give up on is ensuring \"don't stick out\" when HRR is inevitable for the given client and server.\nAh, are you saying that this conflicts with (and thus would have to be removed if is merged)?\nI believe resolves the same \"OPEN ISSUE\", yes.\nI just noticed this as I was pondering our various HRR issues. RFC8446, section 4.1.2 says: URL The \"and present in the HelloRetryRequest\" portion is fun. In RFC8446 as written, we're not allowed to change the extension on the second ClientHello unless the server sent in HelloRetryRequest, which it doesn't. With the protocol as-is, it seems we'll at least need some text to deal with the contradiction. The impact in practice is thankfully limited. If the server does not enforce consistency between the two ClientHellos, it doesn't matter. (They're not required to, but they're not explicitly forbidden from it either, and a stateless server may need to enforce some minimum consistency to avoid getting into a broken state.) If a server does, ECH's behavior on HRR will break it. Such a server is supposed to handshake with ClientHelloOuter, so it just affects the retry flow. That's maybe okay, but mildly annoying. Administrators using server software that checks need to know to first upgrade to a version that doesn't check before trying to deploy ECH. We can also add an ECH extension to HRR. Then, on ECH-reject + HRR, the server perhaps wouldn't send the extension and the client would be required to replay the extension. This is a waste in many ways, but would work.
This would break don't stick out, but in some threat models, don't stick out with HRR is hopeless anyway. (See URL) It would also be weirdly asymmetric between ECH and SH. That said, this constraint is also pretty obnoxious. For instance, currently adds a new padding extension codepoint, because the old one isn't defined for other messages. However, the RFC8446 allowance only works for the existing padding code point, so you don't want to send any other code points in ClientHello, lest the server force you to keep it unchanged in ClientHello2. So maybe we should confirm no servers did the fully strict check and undo that constraint? (Although, for , I personally favor the padding message strategy anyway.)\nNAME points out the Go implementation does check CH1 and CH2 consistency. Fortunately, it only checks known extensions, so it's still compatible with the ECH draft. URL Checking only known extensions is basically saying we should strike \"and present in the HelloRetryRequest\" in rfc8446bis and make sure everyone knows not to check this.\nI don't think we can just strike this text because the point of that text was that you couldn't like change your KeyShares if the HRR didn't tell you to. But I agree we can sharpen this in some sensible way.\nI figured that case was covered by \"defined in the future\", which I'm not proposing we delete. keyshare isn't defined in the future, so it falls under existing rules. The existing rules (bullet points over that one) only let you change keyshare \"[i]f a \"key_share\" extension was supplied in the HelloRetryRequest\". But perhaps that's a little roundabout. The rules I'm thinking of are (changes emphasized): Each extension gets to define how it reacts to HRR. It is typically sensitive to the corresponding HRR extension, but doesn't have to be. RFC8446(bis) defines the rules for all existing extensions. Future extensions define their own rules. (And probably if they don't say anything, we assume they don't get to change.)
As a server, you can make assumptions based on the rules for extensions you know about. You have to tolerate random changes in extensions you don't know about, because you don't know their rules.* I'm running a probe on a bunch of servers to see how they react to changes in cipher suites, curves, and a random extension. It seems most don't notice. Some (I'm guessing Go) notice cipher suites and curves. None notice a random extension. That suggests the change will be compatible.\nI spoke too soon. LibreSSL breaks if you change an extension they don't recognize. :-(\nSo draft-09 isn't tied up with this, I've uploaded URL as a bandaid for the potential compatibility issue. (I say potential because it's only a problem if the server checks unknown extensions and has different enough key share preferences from the client that it needs HRR. HRR is pretty rare, so it may not actually be an issue. For Chrome, at least, I don't think the particular servers I found actually trigger HRR. But it is still a risk for clients or servers with different configurations.)"} {"_id":"q-en-draft-ietf-tls-esni-24cd1e72ecadb1b6ca448a5df642d60122b9323c2b09e37e50636509aca16e5b","text":"We previously derived the acceptance signal from the handshake secret. This meant that clients which used the wrong ECHConfig might need to process ServerHello extensions twice before computing the signal, which can be problematic for some libraries. Given that the signal's secrecy is entirely dependent on URL, we can relax the signal computation and base it on the transcript alone, which includes URL, rather than a secret derived from that transcript. cc NAME NAME NAME\nYep, good change. Thanks, S.\nEncoding the \"ECH worked\" signal in the URL lower octets is, at best, ugly. It also arguably causes violation of internal API designs in at least the OpenSSL code. And I'm not sure if there might be some interaction between that and ticket handling (just from the look of some code I've yet to understand really.) 
Do we really need such a hacky solution? Given that ECH sticks out in the CH, we could just define a non-encrypted extension in the SH to include the confirmation value maybe (though even that is pretty hacky).\nI have some sympathy for this, but the point of this signal design was to make it difficult to distinguish GREASE from non-GREASE ECH.\nCould you elaborate on the concern with ticket handling? I'm not sure I see how tickets would relate to this.\nSorry for being slow - should get to looking again at that bit of code in a few days. S\nJust to synchronize the discussion across list/GitHub a bit, this was discussed a bit in URL I agree with NAME now that there are some fiddly interactions from two ClientHellos and ServerHello. I sketched out a possible change in URL We could compute ClientHelloInner...ServerHelloECHConf as in the draft right now. But then rather than feed that into the handshake secret computation, just rely on that transcript containing a secret URL and computing F(transcript). (Hash it down, KDF it with zero secret, whatever.) One observation is that, while this relies oddly on a secret URL, the existing construction does too. An attacker constructing a ServerHello gets to pick ServerHello.keyshare and, with knowledge of ClientHello.keyshare, compute the resulting shared secret.\nOne further observation here is that if the client uses the same shares for CHInner and CHOuter, the double key derivation piece doesn't really need to apply.\nFor my own reference, I just wanted to flag something that we've discussed on the list: the current acceptance signal interacts with PSK in a way that could easily cause a bug in an implementation that hasn't carefully vetted this interaction.\ndns records are 8 bit clean and rfc 6763 is an example of a TXT record carrying arbitrary data\n:+1: Base64 is not only a complication. It is a space overhead for a large object like ESNIKeys.
Seeing corruption of binary will be rare; we can detect it by validating the checksum and fall back to non-ESNI in such cases. So why not go for binary? FWIW, base64 was introduced in .\nThis is solved in which removes base64 with a new ESNI RR type.\nWe only added this since TXT records were claimed to not work well with binary, per Christian’s suggestion. I’m fine either way.\nThanks for the info. I found URL, in which NAME asks: We know that we can store more than 128 octets in a TXT record, and that we have a precedent that uses TXT to store binary data, as NAME has pointed out. Regarding configurability, my understanding is that RFC 1035 defines how binary TXT records can be specified in a zone file, and therefore many DNS server \"softwares\" support having such records. Googling also tells me that managed DNS services like Route 53 and Azure DNS provide the capability, although surprisingly they use different escape sequences ( uses + 3 octal, uses + 3 decimal).\nI do not think it’s a complication, and I’m not convinced space is an issue as per your comment above.\nonly tangentially related - NAME do you know why your URL txt record is broken into 127 char strings instead of 255? Is there a bug being worked around somewhere?\nNAME Oh I did not notice that. I am using tinydns (of djbdns), and it seems like tinydns works that way (see URL).\nNAME The issue about the size is that with base64 we might hit the 512-octet threshold when publishing keys of multiple strengths. As an example, query for URL currently returns a 186-byte response, which contains just a secp256r1 key. If we add secp384r1 and secp521r1, I'd assume that the size will come near to 490 octets. In other words, for longer hostnames we will hit the 512-octet threshold if we publish three keys. If we avoid the base64 encoding, we will have enough room to store 3 ECDH keys that can provide 128-, 192-, 256-bit-level security.
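The size arithmetic in this thread is easy to sanity-check. The sketch below uses illustrative key sizes (stand-ins, not an actual ESNIKeys encoding) to show the roughly one-third inflation that base64 adds to binary key material:

```python
import base64

# Illustrative stand-ins for three ECDH public keys at roughly the
# 128-, 192-, and 256-bit security levels (X25519-, P-384-, P-521-sized).
raw = bytes(32 + 49 + 67)  # 148 octets of binary key material

# base64 maps every 3 octets to 4 ASCII characters (padding the tail),
# so the encoded form is roughly a third larger than the raw form.
encoded = base64.b64encode(raw)
print(len(raw), len(encoded))  # 148 raw octets become 200 encoded octets
```

The same 4/3 ratio is what turns a response that fits comfortably under 512 octets in binary into one that may not once base64-encoded.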
I do not think this argument is strong enough to prohibit the use of base64, for the following reasons: (1) we will be using DoH / DNS over TLS between the client and the resolver; (2) we might be using DNS over UDP, which has the 512-octet threshold, between the resolver and the authoritative server, but we would want that to migrate to DNSSEC or something even better; (3) we might be just fine with publishing only two keys: X25519 and X448. But it does make me prefer not to have the overhead of base64, if that is possible.\nFixed in .\nSeems fine to me. Christian Huitema wrote: computation per se. Actually doing trial decryption with the secret requires reaching down into the record layer. This is especially onerous for QUIC, where the record layer is offloaded to another protocol (often outside the TLS library). I think going back to full trial decryption would still considerably increase the cost of implementing ECH over the current draft. That said, I do agree that even the first part seems more onerous than expected and it's worth reconsidering things. We switched from using just the randoms to the full handshake secret to mitigate some issues. Perhaps there's an intermediate point that still mitigates those issues but avoids the expensive part of the secret computation? What if we kept the ClientHelloInner...ServerHelloECHConf transcript as in the current draft, but derived the signal exclusively from that? (Or from the client random and that transcript, though the transcript includes the client random anyway.) We'll still bind in the ServerHello extensions, and it still includes the encrypted inner ClientHello random, as in the randoms formulation. It doesn't do the job as clearly as the handshake secret did, but maybe it'll work? Implementation-wise, this variant would not require actually processing the ServerHello extensions. David\nHi all, I raised a github issue [1] related to (but different from) this but would prefer a discussion via email if that's ok.
Depending on how this goes, I can create a new GH issue I guess. ECH draft-09 (and -10) requires the client to verify that the server has processed ECH by comparing 8 octets in the URL with a value one can only know if you know the inner CH octets and (pretty much all) the rest of the h/s so far. That seemingly removes the need for the client to maybe decrypt twice (once based on inner CH, once based on outer) to see what works. That can be implemented ok for the nominal case. As far as I can see that was the result of github issue [2] back in Aug/Sep 2020. But in error cases that seems to require the client to possibly process the same ServerHello extensions twice, which could have side effects, is generally icky and I think shows that this aspect of the design maybe needs a change. That's not much different from trial decryption really. In my current, (and still horrible;-), OpenSSL code the minimal effect is memory leaks for error cases that are tricky to handle, but can be handled, although at the expense of probably making the code even more brittle. (Note: I'm neither defending nor attacking the way OpenSSL handles extensions here:-) The specific problem is one needs the handshake secret to calculate the 8 octets one expects in the URL, but generating the handshake secret typically involves a key share ServerHello extension and might in future involve some PQC stuff, so I think we can't say for sure which subset of extensions are needed for the calculation. And if the first time doesn't give the right answer (e.g. if the client used the wrong ECHConfig) then you gotta process the ServerHello incl. its extensions a 2nd time. I think the requirement we ought to try to meet here is that any confirmation value needs to be something the client can calculate using values known before the SH is rx'd plus the ServerHello as a (mostly) unprocessed buffer. That could be e.g. some hash of the encoded inner CH, ECHConfig and the ServerHello maybe.
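A minimal sketch of the property being asked for here: compute the expected confirmation octets from buffers the client already holds plus the raw ServerHello, with no handshake-secret or extension processing. The "ech confirm" label, the zero-salt HKDF, and the exact set of inputs are illustrative assumptions for this sketch, not the draft's actual construction:

```python
import hashlib
import hmac


def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    """HKDF-Extract per RFC 5869, fixed to SHA-256 for this sketch."""
    return hmac.new(salt, ikm, hashlib.sha256).digest()


def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    """HKDF-Expand per RFC 5869 (SHA-256)."""
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]


def confirmation(encoded_inner_ch: bytes, ech_config: bytes,
                 raw_server_hello: bytes) -> bytes:
    # Everything hashed here is known to the client before, or upon,
    # receiving ServerHello; no ServerHello extension processing needed.
    transcript = hashlib.sha256(
        encoded_inner_ch + ech_config + raw_server_hello).digest()
    # KDF with a zero secret, per the "hash it down, KDF it with zero
    # secret" suggestion in the thread; the label is made up for this sketch.
    return hkdf_expand(hkdf_extract(b"\x00" * 32, transcript),
                       b"ech confirm", 8)
```

A client would compute this once per candidate ECHConfig and compare against the 8 confirmation octets in the ServerHello, so the "wrong ECHConfig" retry never touches the key schedule.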
Some of those issues were discussed in [2] already and in an interim meeting I missed but I reckon it might be worthwhile thinking again. Thanks, S. [1] URL [2] URL"} {"_id":"q-en-draft-ietf-tls-esni-049ecb5acc2a251d725e8956323b49b739c270892e436a6bd6c5226864f196cc","text":"The overview was a bit long, and slightly inaccurate. The ClientHelloInner is encrypted after (most) of the ClientHelloOuter is constructed. I've also merged the ECH-naive server case with the ECH rejection case. This is more consistent with the client text, and most other TLS extensions, where we do not distinguish between reasons why the extension could not be negotiated. I've also generally tightened up the wording to make it shorter. As part of that, I'm trimmed the repeated list of sensitive ClientHello extensions. ECH itself is (mostly) agnostic to which extensions you believe are sensitive, and we already gave examples in the introduction."} {"_id":"q-en-draft-ietf-tls-esni-bf698c528d95001f2b76a1f2735aa737617d5cca8185ccaaaddc291a9204f9e6","text":"Current text says: the \"encryptedservername\" extension. However, I am not sure if this is the desired behavior. I would rather suggest encrypting even when the hostname exceeds the padded_length. Having some protection is better than none. The issue is that in certain deployments it is impossible to guess what the maximum length of the hostname will be. Consider . It uses a wildcard certificate. Anybody can add a new hostname of any length by creating a new Github account (it might be true that Github some limitation on the length, but that does not necessarily mean that Github shares the maximum length information with the fronting CDN, assuming that it uses a CDN).\nI have filed that fixes another issue (it's just a clarification of an open issue) related to wildcard delegation.\nTechnically, it is possible to guess the maximum length of the hostname, as that maximum textual representation length is defined as 253 bytes per RFC 1034.\nYes. 
So the question is if a CDN that has a wildcard mapping needs to set to 253 (since current text is a MUST NOT for sending a server name longer than the paddedlength), or if we should introduce a different rule.\nHow about something like: padded_length : The length to pad the ServerNameList value to prior to encryption. This value SHOULD be set to the largest ServerNameList the fronting server expects to support rounded up to the nearest multiple of 16. If the fronting server supports wildcard names, it SHOULD set this value to 256.\nNAME Thank you for the suggestion. I was feeling uneasy (and am still feeling a bit) about requiring the client to send an ESNI extension as large as ~300 bytes, assuming that CDNs would have at least one wildcard mapping. But it's about the first packet sent from the client, and we typically have enough room even when 0-RTT data is involved. So I think that we can live with that.\nTo be\nThank you for working on this! LGTM aside from the two comments shown below. Sorry, I missed the padding thing, so you'll need to adjust"} {"_id":"q-en-draft-ietf-tls-esni-7e24339502e1baa87a941d000b2d6a7b674e57145903b9a4d3a1c1be66b756ee","text":"The goal here is really to provide more clarity about cardinality, if there's a better way or place to do that, that'd be fine.\nI don't believe this is necessary. This idiom is well understood in TLS and this just appears to be excess verbiage.\nHiya, (As with all these PRs, I'm ok if they're not accepted...) For me, that is better than the original, but I do think we ought state the expected cardinality that implementers need to handle somewhere. I reckon it's better to not assume implementers are very familiar with SVCB as for libraries at least, the DNS stuff happens elsewhere. They will have to figure that out eventually, but better to say it somewhere explicitly I think, or we may face some important code base that doesn't handle >1 public key when it ought.
Cheers, S."} {"_id":"q-en-draft-ietf-tls-esni-bdf933f9e753440b51c3792b9cf80388d4b8885f9c651f5b948a3b4c8383a3cc","text":"As previously written, the wording could be understood as referring to an application using some other (D)TLS mechanism to indicate which ECHConfig candidate value selection method to use."} {"_id":"q-en-draft-ietf-tls-external-psk-importer-7f78f6881a3d064e553ea7d4f0dcf11ea1d9bcf5e80b8b2452781a0ce677a729","text":"This is mainly editorial, so there should be nothing surprising here. Please let me know if that's not the case! It also includes clarifying text per NAME recommendation raised during WGLC. cc NAME NAME"} {"_id":"q-en-draft-ietf-tls-external-psk-importer-2af619ac8f00afefcff20aa5f073a8c350dfc58757b879e3de67d2b826a16c82","text":"I don't love a couple of the ways I phrased things here, so feel free to offer suggestions to improve it! modulo test vectors. (NAME it'll likely take some time to get those vectors. Do you think we should hold this document until that's done?)
I specifically call out the note in Section 4.1 about which HKDF-Expand-Label is to be used, since that would affect the actual derived keys for DTLS 1.3 (and any hypothetical future TLS versions). Section 1 While there is no known way in which the same external PSK might produce related output in TLS 1.3 and prior versions, only limited analysis has been done. Applications SHOULD provision separate PSKs for TLS 1.3 and prior versions. To mitigate against any interference, this document specifies a PSK Importer interface by which external PSKs may be imported and subsequently bound to a specific key derivation function (KDF) and hash function for use in TLS 1.3 [RFC8446] and DTLS 1.3 [DTLS13]. [...] IIRC the main target use cases for this mechanism are for existing external PSKs (violating the \"SHOULD\" from the first quoted paragraph), as well as deployments that are simplified by only needing to provision a single PSK. Does it make sense to include such clarifying description here? (Similarly, is it accurate to say that \"[a]pplications SHOULD provision separate PSKs [...], but the mechanism defined in this document provides a partial mitigation for use when that is not possible\"? Also, we only use the word \"interference\" twice in this document (this is the first instance), so it might be worth a couple more words to clarify the nature of the potential interference. Section 3 non-imported keys for TLS versions prior to TLS 1.3. Non-imported and imported PSKs are distinct since their identities are different on the wire. See Section 6 for more details. (side note?) I think the precise story here is a little subtle -- for any given key, the imported and non-imported identities are distinct, but in principle one could construct a non-imported identity that overlaps with an imported identity for a different key (and we say as much at the end of Section 7). 
This is vanishingly rare to happen by chance, but makes the statement as written not quite categorically true. That said, the change to the binder key derivation (§ 4.2) seems to obviate any potential consequences to such a collision (though the connection attempt would presumably fail). Endpoints which import external keys MUST NOT use either the external keys or the derived keys for any other purpose. Moreover, each IIUC, this is \"MUST NOT use the keys that are input to the import process for any purpose other than the importer, and MUST NOT use the derived keys for any purpose other than TLS PSKs\". Is it worth spelling it out explicitly like that? external PSK MUST be associated with at most one hash function, as per the rules in Section 4.2.11 from [RFC8446]. See Section 7 for more discussion. Does this requirement apply to the inputs to the importer process, the outputs, or both? Section 3.1 Imported PSK (IPSK): A PSK derived from an EPSK, External Identity, optional context string, target protocol, and target KDF. There is an \"External Identity\" that is contained within the EPSK itself; is the \"External Identity\" listed here distinct from that? Section 4.1 The PSK Importer interface takes as input an EPSK with External Identity \"externalidentity\" and base key \"epsk\", as defined in Section 3.1, along with an optional context, and transforms it into a set of PSKs and imported identities for use in a connection based on target protocols and KDFs. In particular, for each supported target protocol \"targetprotocol\" and KDF \"targetkdf\", the importer constructs an ImportedIdentity structure as follows: If I understand correctly the \"targetkdf\" is supposed to be the KDF associated with the cipher suite(s) the derived PSK will be usable for. This is something of a divergence from RFC 8446, where we assume that the KDF will be HKDF and associate only a hash function with the cipher suite. 
While discussing a more generic KDF concept seems reasonable (and is directly in line with the cryptographic principles that motivate this work), I'd recommend adding a few more words somewhere in this section to clarify the relationship between the targetkdf and the cipher suite(s), probably in the paragraph that starts \"[e]ndpoints SHOULD generate a compatible 'ipskx' for each target ciphersuite they offer\". struct { opaque externalidentity; opaque context; uint16 targetprotocol; uint16 targetkdf; } ImportedIdentity; Should we say that this is using the TLS presentation language? ImportedIdentity.context MUST include the context used to derive the EPSK, if any exists. For example, ImportedIdentity.context may (nit?) \"derive\" suggests a cryptographic key derivation process, but the rest of the prose suggests that this is more a process of \"identification\" or \"determination\". Is there a more appropriate word to use? Given an ImportedIdentity and corresponding EPSK with base key \"epsk\", an Imported PSK IPSK with base key \"ipskx\" is computed as follows: epskx = HKDF-Extract(0, epsk) ipskx = HKDF-Expand-Label(epskx, \"derived psk\", Hash(ImportedIdentity), L) I think we need to say that the HKDF-Expand-Label used (i.e., the URL prefix) is the one corresponding to ImportedIdentity.targetprotocol. L corresponds to the KDF output length of ImportedIdentity.targetkdf as defined in Section 9. For hash-based KDFs, such as HKDFSHA256(0x0001), this is the length of the hash function output, i.e., 32 octets. This is required for the IPSK to be of length (32 octets for SHA256, not all hash-based KDFs) The identity of \"ipskx\" as sent on the wire is ImportedIdentity, i.e., the serialized content of ImportedIdentity is used as the content of PskIdentity.identity in the PSK extension. The corresponding TLS 1.3 binder key is \"ipskx\". In RFC 8446, the \"binderkey\" is the output of the key schedule, but ipskx is an input to the key schedule. 
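The derivation quoted above can be written out directly. This sketch assumes SHA-256 for both the EPSK hash and the target KDF, implements HKDF per RFC 5869 and the HkdfLabel encoding of HKDF-Expand-Label from RFC 8446 Section 7.1 (with the TLS 1.3 "tls13 " prefix; per the review comment, a different target protocol would use its own prefix), and stubs the serialized ImportedIdentity with a placeholder:

```python
import hashlib
import hmac
import struct


def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    return hmac.new(salt, ikm, hashlib.sha256).digest()


def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]


def hkdf_expand_label(secret: bytes, label: bytes, context: bytes,
                      length: int) -> bytes:
    # HkdfLabel from RFC 8446, Section 7.1: uint16 length,
    # opaque label<7..255> = "tls13 " + Label, opaque context<0..255>.
    full_label = b"tls13 " + label
    hkdf_label = (struct.pack("!H", length)
                  + bytes([len(full_label)]) + full_label
                  + bytes([len(context)]) + context)
    return hkdf_expand(secret, hkdf_label, length)


# Placeholders for this sketch: a real EPSK and a real serialized
# ImportedIdentity structure would be used here.
epsk = bytes(32)
imported_identity = b"serialized ImportedIdentity"

epskx = hkdf_extract(b"\x00" * 32, epsk)  # HKDF-Extract(0, epsk)
ipskx = hkdf_expand_label(epskx, b"derived psk",
                          hashlib.sha256(imported_identity).digest(), 32)
```

Note that, as the review stresses, the HKDF hash here follows the EPSK (SHA-256 in this sketch), not ImportedIdentity.targetkdf, and L (32 here) is the output length of the target KDF.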
So I think we want to say something like \"the corresponding PSK input for the TLS 1.3 key schedule is 'ipskx'\". The hash function used for HKDF [RFC5869] is that which is associated with the EPSK. It is not the hash function associated with ImportedIdentity.targetkdf. If no hash function is specified, I predict that someone will mess up the hash function selection while implementing this. Perhaps we could provide test vectors for the full matrix of (EPSK Hash, targetkdf hash) pairs using SHA256/SHA384? SHA-256 [SHA2] MUST be used. Diversifying EPSK by I think we might be able to tolerate the protocol* having a strict requirement that the associated hash function is specified, and provide a recommended value that would be used by people deploying the protocol in the absence of other information. This might in some sense help future-proof the document for the scenario where SHA256 weakens or is broken and the default should change. KDFs, protocols, and context string(s) are known a priori. EPSKs MAY also be imported for early data use if they are bound to protocol settings and configurations that would otherwise be required for early data with normal (ticket-based PSK) resumption. Minimally, that means Application-Layer Protocol Negotiation [RFC7301], QUIC RFC 8446 does not limit 0-RTT data to resumption, so I think we just want \"if they are bound to the protocol settings and configuration that are required for sending early data\". Also, nit: I think we usually refer to the \"ALPN value\" instead of just \"ALPN\" being configured. Section 5 If a client or server wish to deprecate a hash function and no longer use it for TLS 1.3, they remove the corresponding KDF from the set of target KDFs used for importing keys. This does not affect the KDF operation used to derive Imported PSKs. Should we also include a paragraph about how to deprecate the Hash used in deriving Imported PSKs? (It's fixed per EPSK, so, \"configure a new EPSK\", basically.) 
Section 6 Recall that TLS 1.2 permits computing the TLS PRF with any hash algorithm and PSK. Thus, an EPSK may be used with the same KDF (and underlying HMAC hash algorithm) as TLS 1.3 with importers. However, critically, the derived PSK will not be the same since the importer differentiates the PSK via the identity and target KDF and protocol. Thus, PSKs imported for TLS 1.3 are distinct from those used in TLS 1.2, and thereby avoid cross-protocol collisions. [...] As I read this I recalled that up in Section 1 we say that applications \"SHOULD provision separate PSKs for TLS 1.3 and prior versions\". The scenario being described here sounds like it can only occur if that SHOULD is ignored, which might be worth reiterating. We also say in § 4.1 that \"external PSKs MUST NOT be imported for (D)TLS or prior versions\". In light of these two observations, my best interpretation of the quoted text is that it should be equivalent to: NEW: Recall that TLS 1.2 permits computing the TLS PRF with any hash algorithm and PSK. Accordingly, a given EPSK might be both used directly as a TLS 1.2 PSK and used with TLS 1.3 via the importer mechanism (noting that this configuration is not recommended, per Section 1), using the same KDF algorithm. However, because the TLS 1.3 importer mechanism includes an additional key-derivation step, the actual PSK used in the TLS 1.3 key schedule will be distinct from the one used in the TLS 1.2 key schedule, so there are no cross-protocol collisions possible. Note that this does not preclude endpoints from using non-imported PSKs for TLS 1.2. For some reason this line implies to me that endpoints might use imported PSKs for TLS 1.2, but this is forbidden by Section 4.1 (as quoted above). Perhaps \"does not preclude endpoints from continuing to use TLS 1.2 and the non-imported PSKs it requires\"? 
Section 7 We should probably note that any information placed in ImportedIdentity.context will be visible on the wire in cleartext, and thus that confidential identifiers from other protocols should either not be used or should be protected (e.g., by hashing) prior to use. Externally provisioned PSKs imported into a TLS connection achieve compound authentication of the provisioning process and connection. Context-free PSKs only achieve authentication within the context of a single connection. Do we need to use psk_dhe_ke to get the strong compound authentication property, or is it attainable with psk_ke as well? Section 10.1 I'm not sure why [QUIC] and RFC 5246 are listed as normative references. Appendix B The Selfie attack [Selfie] relies on a misuse of the PSK interface. The PSK interface makes the implicit assumption that each PSK is known only to one client and one server. If multiple clients or multiple servers with distinct roles share a PSK, TLS only authenticates the entire group. A node successfully authenticates its peer as being in the group whether the peer is another node or itself. I think it may be useful to call out that the \"multiple clients\" and \"multiple servers\" case can occur even when there are only two endpoints that know the key (that is, when they do not have predetermined client/server roles). struct { opaque client_mac; opaque server_mac; } Context; We may not need the whole 16k to hold a MAC address, especially since this is just an example. If an attacker then redirects a ClientHello intended for one node to a different node, the receiver will compute a different context string and the handshake will not complete. Similarly to the above, this could be \"a different node (including the node that generated the ClientHello)\"."} {"_id":"q-en-draft-ietf-tls-iana-registry-updates-8a6c1e3e7e6f3b23ac5b2013c744c25d9cf84f9b776154af32c9a34329624ab3","text":"NAME PTAL\nLGTM - ‘twas just a comment anyway.
W I don't think the execution is relevant when it was obviously a bad idea in the first place. This is like putting rabid weasels in your pants, and later expressing regret at having chosen those particular rabid weasels and that pair of pants. ---maf"} {"_id":"q-en-draft-ietf-tls-iana-registry-updates-b08ef7a9b8277ebffca23ad4ac823943fd3a8d339f5ad806d5157743b22c9514","text":"4492bis includes some values we were marking as reserved and that's wrong. We also have 224-255 that we might also mark as reserved. Effectively, this closes these two registries.\nNAME PTAL\nNAME -- This looks good. I have a quick comment on: I presume this will or won't happen as the result of conversations in the WG? Also, I'll note -- for the (only) -- that only 224-253 are necessary to mark as reserved (or, really, since they were previously okay for private use, I think the term to use is \"deprecated\"), since 254 and 255 are in the TLS 1.3 \"reserved for private use\" range of 0xfe00-0xffff\nI'm planning to forward the PR to the WG after I heard from you. I'll add in \"deprecated\" for those other values before sending it along."} {"_id":"q-en-draft-ietf-tls-md5-sha1-deprecate-1c711116df666b628d7048bb25437a6af2a5fa2096ae12e5d08febedf450d6ab","text":"Addressing this . NAME NAME PTAL\nI am really growing to dislike these \"here is the alert you must use\" constructions. Are they necessary? A lot of this was born out of the deprecate TLS 1/1.1 RFC that spun this I-D off. I believe this is the thread back in mid-2019: URL spt"} {"_id":"q-en-draft-ietf-tls-ticketrequest-afc5668ba2c4422fe8a9325923b4428ef9af919324c280577093d4032db2d08f","text":"Good catch -- will fix. Good suggestion! We can certainly add this. As the extension is merely a hint to servers when deciding how many tickets to vend, I think this is out of scope for the document. It's intended to be once per session, and we'll add that. Also a good catch. 
Like other extensions that are not affected by the possibly updated ClientHello, it must not change. We need to specify the server side behavior, too. That seems reasonable to me.\nThanks! Approved pending"} {"_id":"q-en-draft-ietf-webtrans-http3-dd221859a061b8220b75f9c4747aeea232225f0f0b281ad901169689a85cec1e","text":"Require SETTINGSH3DATAGRAM and maxdatagramframe_size TP\nInteresting thought. One way to do that replaces the entire section with this: This is certainly more concise, although it does also remove some of the explanations that help people understand the various layers involved. Or perhaps we should include those same additional details but just turn those paragraphs into bullets?\nWhile technically true, I thought we'd said that we wanted to require all the features all of the time, but agreed, let's take that to an editorial issue if needed.\nMy point is that the server could support both regular HTTP and WebTransport. If a client comes in without webtransport settings and only sends GETs, that's fine. If a client comes in with webtransport settings and makes a webtransport request, that's fine. If a client comes in without webtransport settings and makes a webtransport request, that's not fine. In that case the server needs to reject the webtransport request, not kill the connection.\nAh, that makes more sense, thank you\nPer discussion at IETF 114, we should explicitly require support for each layer of capabilities that are needed. There was also a discussion about whether or not we needed to require datagram support, since requiring capsules technically means that you could use a datagram capsule and implementations may exist that do not want to have datagram support at all. Documenting some of that discussion in replies to this issue, as well.\nSpecific to datagrams and capsules: WT supported means that capsule protocol is supported, thus you must always support capsules. 
Proposal: You need to have datagrams in order to do WT over HTTP/3 Adding additional signaling to negotiate datagram support introduces a lot of additional complexity and different configurations that may harm interoperability. Dropping a datagram is about as easy as doing any other error path HTTP Datagrams must already be supported in the receive direction for capsules (and you need capsules for WT) either way Intermediaries will need to be able to either translate for upstream or ensure that upstream can provide an equivalent feature set More generally: We want to explicitly call out dependencies/prerequisites so that we don't need to teach layers about other layers. For example, WT being supported doesn't mean that the layers that implement capsules, H3 datagrams, or QUIC datagrams should have any idea about the existence of WebTransport\nDiscussed at IETF 115, consensus in room to require all the relevant settings and transport parameters to be sent individually\nSo we need: HTTP Setting: SETTINGS_H3_DATAGRAM (0x33) with a value of 1 for HTTP datagrams QUIC TP: max_datagram_frame_size with a value greater than 0 for QUIC datagrams Capsule protocol support is implicit in the upgrade token, as explicitly called out by RFC 9297 or we can require to be sent on every extended connect request as well. to be added to the existing HTTP Setting: SETTINGS_ENABLE_WEBTRANSPORT = 1 HTTP Setting: SETTINGS_WEBTRANSPORT_MAX_SESSIONS > 0 (although we may be removing SETTINGS_ENABLE_WEBTRANSPORT)\nSGTM I think we can skip mentioning Capsule-Protocol to match CONNECT-UDP\nWould this be easier to process as a list rather than multiple separate requirements? The list text in that comment isn't correct - the server shouldn't close the connection because it's legal for the client to not support datagrams if it only sends GETs and POSTs. However, the text in this PR is correct.
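The prerequisite list above could be checked roughly as follows. This is a hypothetical helper, not a real library API; the dictionary keys for the transport parameter and the WebTransport session setting are illustrative names:

```python
# Sketch: a peer can host WebTransport over HTTP/3 only if it advertised
# SETTINGS_H3_DATAGRAM = 1, a QUIC max_datagram_frame_size > 0, and a
# WebTransport session limit > 0. Key names below are illustrative.
SETTINGS_H3_DATAGRAM = 0x33

def peer_supports_webtransport(h3_settings: dict, transport_params: dict) -> bool:
    if h3_settings.get(SETTINGS_H3_DATAGRAM, 0) != 1:
        return False  # no HTTP Datagram support
    if transport_params.get("max_datagram_frame_size", 0) <= 0:
        return False  # no QUIC DATAGRAM frame support
    if h3_settings.get("webtransport_max_sessions", 0) <= 0:
        return False  # WebTransport not enabled / no session credit
    return True
```

Note the asymmetry discussed above: a server can run this check lazily, rejecting only WebTransport requests from clients that omitted these values, rather than closing connections that carry ordinary GETs and POSTs.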
I propose we land the PR as-is and can make editorial changes later."} {"_id":"q-en-draft-ietf-webtrans-http3-908f23682713bf5bca46b25fe3b5c6a20d07db3a5ecafdfad9629d818dc9fa30","text":".\ncc NAME NAME\nLooks great!\nFor a simple application that wants to send \"hello world\" over a WebTransport stream, it would take a minimum of 2.5 round trips for the data to arrive on the server. This is still an improvement over WebSockets but I'm wondering if we could do better. If the client is able to create streams prior to receiving the CONNECT response, then it would remove a round trip. The server would need to buffer any data received on streams associated with an unknown WebTransport session. If the server responds with a non-200 status code, then these streams are discarded. I believe 0-RTT is disabled for WebTransport because CONNECT is not idempotent and would be subject to replay attacks. An idempotent CONNECT would remove a round trip when 0-RTT is used... although I'm not sure this is a good idea.\nI think this is a reasonable optimization\nThe client requirement is odd in comparison to MASQUE / CONNECT-UDP which says \"client can risk sending datagrams before getting a 200 response, a different response means that datagrams will be discarded\". What is the motivation for the requirement in WebTransport? If you need some kind of ordering guarantees, then stream IDs provide that.\nDiscussed at IETF 110. There were no objections to buffering (with some limits), so I think we should go ahead and add it.\nAfter the client initiates the WebTransport session by sending a CONNECT request, the server sends a 200 response and is free to create streams. This means that a server-initiated stream may arrive prior to the CONNECT response. 
Is the client able to infer that the session was successfully established, or does the client need to buffer this stream until it receives a proper response?\nSince there are no ordering guarantees across streams on the receiver side, I'd say you have to buffer until the 200 is processed.\nI agree with Lucas here. We also had a similar problem in QuicTransport (server has to buffer streams before checking the origin), and we handled it by buffering.\nDiscussed at IETF 110. There were no objections to buffering (with some limits), so I think we should go ahead and add it."} {"_id":"q-en-draft-ietf-webtrans-http3-ec5602ae359a1700365d13672e64cac2a3be4cd7befbfcc0a39c330cb05efc12","text":"The H3 draft is light on error handling guidance. There are a number of cases for bad session IDs: (1) the session ID is not a client-initiated bidirectional stream ID; (2) the session ID refers to a stream that did not perform a WebTransport handshake; (3) the session ID refers to a stream that has been closed or reset. Presumably there is no error when a DATAGRAM arrives with a bad session id -- you just drop it? For streams, perhaps HTTP_ID_ERROR is appropriate? Issues and both refer to cases where you receive a session ID that has not been created yet. Best to keep the discussion there, but it's likely we want to allow clients/servers to buffer some of these at their own discretion rather than require an error.\nFor 1, I think it should be an error. For 2, we may want to clarify that the headers should be already known at that point, but it should be an error. For 3, I am not sure how to enforce this without running into potential race conditions, so we may want to leave those as-is. As for the datagrams, I think the h3-datagram draft should handle this.\nThinking about 2 more, I think there's also a race condition in which a data stream could be received between the stream being opened and headers being processed.
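The bad-session-ID cases enumerated above can be sketched as a classifier. This is hypothetical bookkeeping, not draft text; it assumes QUIC's stream-ID encoding, where client-initiated bidirectional stream IDs have the two low bits clear (i.e., are multiples of 4):

```python
# Sketch of the three bad-session-ID cases for an incoming WebTransport
# stream or datagram. The wt_sessions/closed sets are hypothetical state.
def classify_session_id(session_id: int, wt_sessions: set, closed: set) -> str:
    if session_id % 4 != 0:
        return "connection-error"  # case 1: not a client-initiated bidi stream ID
    if session_id in closed:
        return "reset-stream"      # case 3: session already closed or reset
    if session_id not in wt_sessions:
        return "buffer-or-error"   # case 2: no WebTransport handshake seen (yet)
    return "deliver"
```

The "buffer-or-error" outcome reflects the race condition discussed above: data can legitimately arrive before the CONNECT headers are processed, so an implementation may buffer (with limits) at its own discretion rather than erroring immediately.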
I'll write up a PR for 1.\nI think for 2, I meant the ID refers to a stream where the headers were known but it was not a CONNECT stream.\nDiscussed at IETF 111. Sense of the room for MUST close for 1, MAY close in other scenarios."} {"_id":"q-en-draft-ietf-webtrans-http3-1e981353d379c6e919464fa8fb99ab45d03ecb81a743b6d2c5b1b25a888a9226","text":"Capsule design team output. Minimal changes to HTTP/3, which was already using capsules and is now joined by . Same content as draft PR, thank you to those who have already reviewed!\nDoes this actually with a no-op, or are we supposed to discuss that at 115? Otherwise LGTM.\nIt as a no-op with the exception of one bullet point that we did want to address, that got split out into , which we'll discuss at 115 separately from this PR.\nShould the protocol share framing semantics with the HTTP/2 draft so that it is easy/easier to take datagrams or streams and put them on the CONNECT stream? Should it be possible to use that mechanism, or should it instead be forbidden? As much as possible, it would be desirable to have the forwarding logic be fairly uniform across different protocol versions. Right now, we have two almost completely different protocols.\nI was discussing this with NAME and NAME What happens when an H3 WT server receives \"version independent\" WT framing on its CONNECT stream. Since WT over H3 requires H3 datagrams, then there is no need to fold datagrams into the CONNECT stream. There shouldn't be a need to fold streams either, unless you want to defeat QUIC stream limits :) Allowing this would also increase work to implement an H3 only WT client/server: you have to re-implement QUIC over an HTTP byte-stream just in case the peer decides to send you \"version independent\" WT. Right now I think they are totally different and never the twain shall meet. Proxies can convert between the two versions. 
If we did that, should we have different upgrade tokens?\nIs this issue still relevant now that we have the outcome of HTTP Datagram Design Team?\nNAME gentle ping\nActually never mind, this is blocked on the capsule design team - the output of that design team will decide on whether the h2 capsules are allowed in h3 or not\nThe H3 draft by its nature allows for the parallel processing of multiple WT sessions and also the intermixing of normal H3 request with WT. Server endpoints may wish to negotiate or disable this behavior. Clients can simply choose not to do either kind of pooling if they don't want to. There's already a PR open () with a design idea.\nI'd like to propose a different model than the one in : The server sends the maximum WebTransport Session ID, or since 0 is a valid value, it's really the minimum invalid WebTransport Session ID. To indicate WT is not supported, server sends 0. To allow for only 1 session ever, server sends 4, and never updates it. To allow for only 1 session at a time, server sends 4, and updates it by 4 as each session closes/resets. To allow for N parallel sessions, server sends 4 * N. and updates as sessions close/reset. This mirrors H3 PUSH, and requires at least one more frame (MAXWTSESSIONID). Perhaps it would also require CANCELWT_SESSION? A client need only declare that it supports webtransport, and even that could be optional. 
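The credit arithmetic in this proposal might look like the following. This is a toy sketch of the (proposed, not adopted) MAX_WT_SESSION_ID bookkeeping; the class and method names are illustrative:

```python
# Toy bookkeeping for the MAX_WT_SESSION_ID proposal above, mirroring H3's
# MAX_PUSH_ID: the server advertises the minimum invalid session ID, and
# client-initiated bidirectional stream IDs advance in steps of 4.
class WtSessionLimit:
    def __init__(self, max_sessions: int):
        self.limit = 4 * max_sessions  # 0 means WebTransport not supported

    def may_open(self, session_id: int) -> bool:
        return session_id < self.limit

    def on_session_closed(self):
        self.limit += 4  # grant credit for one more session
```

For example, advertising 4 permits exactly one session ever; bumping the limit by 4 as each session closes yields the "only 1 session at a time" behavior described above.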
The server will know if it receives a CONNECT with :protocol=webtransport.\nI wanted to have a negotiation mechanism in , but my conclusion after talking with NAME about it was that we should not do negotiation, and servers should just reject extra streams with a 400 (or maybe even a RESET_STREAM) if they ever see more sessions than they expect; meaning it's on the clients to know if they can pool or not.\nThough we may want a more specific 4XX error code so the client knows it should retry in a non-pooled connection\nThat's certainly a lot simpler than session credits ;) The cost of at least an RTT for a client that tries to pool against a server that can't handle it seems somewhat high. I guess it depends what the common client pooling strategies are and how many servers don't support pooling.\nNAME since it's looking like the JavaScript API will provide a way to disable pooling, I expect that the scenario of pooling-attempted-but-rejected will be quite rare, so a round trip cost should be acceptable\nI'm also considering use cases for WT which are not tied to the JS API.\nNAME that's interesting - and those use-cases won't have a similar mechanism to disable pooling? I suspect they'll need a way to get the WT server's hostname somehow, and having config properties attached to that (such as disable_pooling) could be useful?\nNAME : the opposite -- the clients may attempt pooling to servers that might or might not support it. You can't tell from a hostname if it supports pooling.\nNAME how did the client get the hostname in the first place? I'm suggesting to carry pooling_support next to that.\nActually, let's go back to JS for a second. The people who write JS at Facebook are very unlikely to know if the edge server terminating their connection supports pooling, or that configuration could change over time.\n(I appreciate the irony of me saying this as a Googler, but) should the JS people talk to the edge people? 
I agree that this shouldn't fail, but it being inefficient seems reasonable - these teams should talk if they're trying to reach good performance.\nI think my main problem with is that most people who want to disallow pooling want to exclude not only other WebTransport sessions, but also general HTTP traffic. was written with that assumption in mind.\nOk, MAXWTSESSION_ID doesn't handle that case, so it may be an over-engineered solution to the wrong problem.\nAs I said in the meeting, I don't think that we should have any signals for dedicated connections from either client or server. Yutaka made the point that the client can decide to make a connection dedicated after seeing the server settings. It doesn't need to signal its intent (and signaling would have negative privacy consequences potentially). I agree with this observation. A server needs to support HTTP if we're using HTTP. So pooling with h3 is already a settled matter. If the server doesn't want to answer requests, it can use 406 or 421 or other HTTP mechanisms to tell the client to go away. More generally, the server can just not provide links to that server, such that it might have to respond to GET/OPTIONS/POST/etc. The question then becomes managing resource limits on the server. Having a way to limit the number of WebTransport sessions is a good thing and I believe that will be sufficient for this. Thus, I'm a supporter of option 3: MAXWTSESSIONS or similar.\nI agree that we should have limits of how many WT can be active at the same time on a connection. It should also define what happens if the limit is exceeded. Should a client open another QUIC connection to the same server or should it fail? Probably it should be former, but that should also have limits defined (6 parallel QUIC connection, may be too much? 
HTTP/1.1 have the 6 parallel connection limit).\nChair: discussed at IETF 113, consensus in room to change the WT SETTING to carry the number of concurrent WT sessions allowed on this h3 connection\nNote that all of the below works when you swap \"server\" and \"client\". I wanted to focus on a poor Javascript client implementation, but it's equally possible to run into these cases with a poor server implementation. Let's say that you have a Http3Transport session that shares a connection with HTTP/3 or another Http3Transport session. The client has limited buffer space and hypothetically advertises MAXDATA of 1MB for the connection. If the server pushes 1MB of unprocessed data over a Http3Transport session, then it will prevent the server from sending data over the connection, including HTTP/3 responses and other Http3Transport sessions. This can occur if the application does not accept all remote-created streams or does not fully read every stream. This will be VERY easy for a developer to mess up. It can also occur if the application can not process data faster than it is received. In my case, the server creates unidirectional streams and pushes a large amount of media over them. The flow control limit would be hit if the Javascript application: Only accepts bidirectional streams (ex. initialization) Does not accept a unidirectional stream (ex. error or teardown) Does not fully read or abort every stream (ex. parsing error) Is not able to parse the media fast enough (ex. underpowered). This is mostly intended behavior, otherwise these \"bugs\" would cause unbounded memory usage. However, the blast radius is bigger when sharing a connection. In my use-case, hitting the flow control limit would also block any HTTP/3 RPC calls or unrelated traffic on the connection. This introduces more scenarios for the connection to become deadlocked or generally stalled. HTTP/3 implementations use MAXSTREAMS to limit the number of simultaneous requests. 
This use-case no longer works when sharing a connection with a Http3Transport session, and HTTP/3 implementations must instead return explicit error status codes when this threshold is reached. The same issues with MAXDATA also apply to MAXSTREAMS. A Javascript application that fails to accept or read/abort every stream will consume flow control resources. This can manifest itself in strange ways, such as HTTP/3 being unable to create control streams, or simply being unable to create new request streams.\nThanks for the salient overview, it makes it easier to understand your concerns. My first comment would be that some of these points apply to QuicTransport too. Its just that their made worse by Http3Transport pooling allowing the mashing together of orthogonal protocols with no application-layer oversight. Just as you might interfere with a page load, a browser could easily interfere with your finely crafted application protocol. I would anticipate that, due to different browsers having different HTTP usage patterns, and different pages being composed differently by time, region, etc, one might find that Http3Transport apps subject to pooling would experience inconsistencies and transient issues. You rightly point out that the connection flow control is a shared resource. But the problems described seem to relate to stream flow control and overlooks the different flow control setting afforded by the Transport Parameters. A possible mitigation is to more tightly constrain the initial flow windows of server-initiated streams so that they can't interfere with requests. However, I still think that the notion of pooling together WebTransports in an uncoordinated fashion is going to manifest undesirable properties. The MAX_STREAMS observation is a good one. 
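The shared connection-level flow control problem described above can be captured in a toy model. This is a deliberately simplified, hypothetical receiver-side sketch (real QUIC accounting is per-stream and per-connection with explicit MAX_DATA frames):

```python
# Toy model: unread bytes on one WebTransport stream count against the
# connection-wide flow-control credit, so a JS app that never reads a
# stream eventually stalls unrelated HTTP/3 traffic on the same connection.
class Connection:
    def __init__(self, max_data: int):
        self.max_data = max_data
        self.received = 0  # total bytes received, whether read or not

    def can_receive(self, n: int) -> bool:
        return self.received + n <= self.max_data

    def receive(self, n: int):
        assert self.can_receive(n)
        self.received += n

    def consume(self, n: int):
        # Application reads n bytes; receiver extends its MAX_DATA offset.
        self.max_data += n

conn = Connection(max_data=1_000_000)
conn.receive(1_000_000)         # 1 MB pushed on a WT stream, never read
assert not conn.can_receive(1)  # HTTP/3 response bytes are now blocked
conn.consume(500)               # the app finally reads some data
assert conn.can_receive(500)    # credit returns only as data is consumed
```

This illustrates why the later consensus left connection-wide MAX_DATA alone and relied on per-stream limits plus a session-count setting: there is no clean way to partition the shared credit per session.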
I think the competition between request streams and client bidirectional streams is going to be hard to resolve.\nYeah, and enforcing any per sort of per-session flow control might get difficult because you can only identify the protocol and session after you receive the first few bytes of each stream. For example, assume a brand new connection (for simplicity only) and you receive a STREAM frame with id=100, offset=16, and size=1000. You don't know yet which protocol or session created this stream or the 25 preceding it. And hopefully you eventually get those first few bytes for each stream...\nChair notes from IETF 112 meeting: this needs more discussion (on the issue or mailing list)\nChair: discussed at 113, consensus in room to: use a setting to limit number of WT sessions (see ) figure out MAXSTREAMS, a way to limit the number of streams within a session abandon MAXDATA as too hard, do not limit amount of data sent within a session (anyone can open a new issue to revisit this discussion if they have a concrete proposal on how to do this) let QUIC take care of MAXSTREAMDATA\nCapsule DT is doing all of the above bullet points except the second one, for which I opened . Apologies for the extremely delayed response here, but hearing no objections we are declaring consensus and merging the PRs. David David Schinazi wrote:"} {"_id":"q-en-draft-ietf-webtrans-http3-e2099773314d4322d1850918ffc28556e0e0b84ffa12ef41f2287d02b68587f9","text":"In URL, it was suggested that 403 status code should be used when Origin verification fails.\nThank you for suggestions! Applied.\nThank you for reviews! If this looks good can someone merge this?\nLooking good, thank you! 
Added a few suggestions to cause it to generate links to the actual sections referenced, even if it makes it a little more repetitive.Approved modulo one small change"} {"_id":"q-en-draft-irtf-nwcrg-network-coding-satellites-0636abb5fa80dd0cf640ed1a5bec6dc2de9f266ebd57e22e81fe7886a9530a10","text":"Answer to issue\nA first sentence says quasi-error-free is usually guarranteed thanks to redundancy, while the following sentence tends to say the opposite. Not very clear.The intro fails to clarify the targets. Among them there is \"coding to recover from packet losses\" (to avoid confusion with bit-error corrections), but also \"coding to improve the usage of radio resource\". You should also clarify better what is meant by coding: it's not a content coding (e.g., to compress a video flow) but rather a computation of linear combination of packets. Say since the begining you'll rely on RFC8406 whenever possible. Parts of the text currently in section 3 (1st paragraph) deserve to be moved here.Typo: s/to cope from/to cope with/\nWe have updated the introduction structure and changed the usage of coding in the whole document. We hope this is better. cf PR and PR\nthis is cherrypicking, not discussing link/channel FEC (Error) or ACK. the heading should really be 'of network coding schemes'. \"to cover cases where where\"? At the physical layer, FEC mechanisms can be exploited.-- Erasure or Error? \"Based on public information, coding does not seem to be widely used at higher layers.\" it isn't, and that section can be reduced to just that sentence.\nVincent in item X1, 1st sentence, it suggests QUIC is a video streaming application. Change it.Fig. 2 raises a general question: it refers to upper applications that are by nature on the end-hosts. Everything is imaginable here and this is not specific to a SATCOM system. Why don't you limit yourself to the communication layers + middleware? 
I'm a bit confused. When you say: \"Based on public information, coding does not seem to be widely used at higher layers.\" I guess you limit yourself to the \"packetization UDP/IP + middleware\" but do not consider upper applications. It's also confusing.\nWe think that these comments have been assessed with PR\n\"However, physical layer reliability mechanisms may not recover transmission losses (e.g. with a mobile user) and layer 2 (or above) re-transmissions induce 500 ms one-way delay with a geostationary satellite\" Look, forward error correction (FEC) and link ACKs are complementary on a sliding scale depending on delay/channel quality, and this is talking solely about retransmissions. In fact, link or channel FEC do recover a large number of transmission bit/symbol losses without adding extra delay.\nWe have clarified this point in PR
Updated in PR"} {"_id":"q-en-draft-webtransport-http2-835a26889e93c3238f9c2ecf91f4479174f9b2b9a599cec166ed9aafca39054a","text":"Reset stream doesn't cease retransmission.\nretransmission of WTSTREAM frames on the identified stream. A receiver of WTRESET_STREAM can discard any data that it already received on that stream.\" This is using TCP and should not talk about ceasing retransmissions.\nVery good point! Will clean up"} {"_id":"q-en-dtls-conn-id-857a88eac5f9380a92b4fd2b6bbc55fd8c1a627f10a35a68547da021b78c04b0","text":"As suggested by Éric Vyncke, mention that the new ciphertext record format provides ContentType encryption and per-record padding in both abstract and introduction.\nURL Update from Éric re: Section 6\nAbout \"-- Section 3 --, DTLS / zero-length CID\" \"An extension type MUST NOT appear in the ServerHello unless the same extension type appeared in the corresponding ClientHello.\" The \"zero-length CID\" therefore is mainly used for the very common cases, where only the clients are required to send such a CID, but not the server. Maybe adding a hint to RFC5246 7.4.1.4 will help?\nAbout \"-- Section 6 --\" Yes, that's still very complex: First, processing the record seems to be easier (less restricted), than use a changed source address for sending data back. Using a changed source address for sending data back to, is not generally solved. There are some special case, e.g., if the data sent back is very small, there is no risk (at least, none has explained one). There may also be some upper-layer mechanisms, which helps out (e.g. for coap). And there is a idea of a general solution , but that will require more time. Because of the required time, the idea was to release this spec without RRC and postpone that general work (see also issue ).\nalso in the abstract ? 
Some initial attempt at this in\nLGTM Éric Vyncke has entered the following ballot position for draft-ietf-tls-dtls-connection-id-11: No Objection When responding, please keep the subject line intact and reply to all email addresses included in the To and CC lines. (Feel free to cut this introductory paragraph, however.) Please refer to URL for more information about DISCUSS and COMMENT positions. The document, along with other ballot positions, can be found here: URL COMMENT: Thank you for the work put into this document. This specification addresses the IPv4-mainly issue of NAT binding and is still required. I am also trusted the security ADs for section 5. Please find below some non-blocking COMMENT points (but replies would be appreciated), and some nits. I hope that this helps to improve the document, Regards, -éric == COMMENTS == -- Abstract -- As an important part of this document is the padding, should it be mentioned also in the abstract ? -- Section 3 -- While I am not a DTLS expert, I find this section quite difficult to understand the reasoning behind the specification as little explanations are given about, e.g, what is the motivation of \"A zero-length value indicates that the server will send with the client's CID but does not wish the client to include a CID.\" -- Section 6 -- I am puzzled by the text: \"There is a strategy for ensuring that the new peer address is able to receive and process DTLS records. No such strategy is defined in this specification.\" Does this mean that there is no way to update the peer IP address ? == NITS == -- Section 1 -- Please expand CID on first use outside of the abstract. -- Section 4 -- Suggest to add a short paragraph as a preamble to figure 3. Currently, it looks like figure 3 belongs to the 'zeros' field description."} {"_id":"q-en-dtls-conn-id-633f533580defc629a14944ae340435a46a54ea6f68489427234ed3a6b625304","text":"Based on URL\nThanks!\nMartin wrote: URL LGTM. 
I would strike \", if these privacy properties are important in a given deployment\" from the acknowledgments section (which is an odd place for the accompanying statement). I would add an explicit note about the lack of CID update making this unsuitable for mobility scenarios. That's a common use case for this sort of mechanism, but this design lacks any defense against linkability. >"} {"_id":"q-en-dtls-conn-id-55ee1d54fcc0048c80dfaa78e58d8082adbf60627f6d95a22ef13572f6d088f3","text":"We incorrectly put a normative reference on the RRC draft in the DTLS 1.2 CID draft.\nI somehow lost track of that RRC and the discussion around it. The referred document mentions: and I would assume that an on-path adversary could black hole traffic in both directions, even without CID. Not only could messages sent to a peer be dropped, it would also be easy to drop a message sent back from that peer (at least, if the route back is on the same path). I would also assume that an on-path adversary, which is able to change the source address, would also be able to change the destination address. So a simple traffic redirect will also be possible even without CID. In my opinion, the only threat left, which is caused by the usage of a CID, would be a DDoS attack using amplified traffic towards a third party. For that it seems to be most important to throttle the traffic by limiting the number and size of messages until the address is verified.
Though such a DDoS will be naturally less efficient, because the adversary has to wait for messages to change the sources-address, throttling would make it even more unattractive."} {"_id":"q-en-dtls-conn-id-795d0a5bc3870b88390cd5268ab2c151e1163906467ae94efb22f065dbe7f9ec","text":"This is supposed to all be editorial stuff, and I split a couple changes out into separate commits in case they're not actually desired."} {"_id":"q-en-dtls-rrc-d64ddea4579f274a04699c7b6177988bcc6e22109d991068ea5d4e7b0ef5be8f","text":"and\nFor example, allow multiple outstanding path_challenge messages to cater for potential packet loss.\nWe want to limit the application data sent to a reasonable threshold while we run the RRC.\nRFC 9000, section 8 \"address validation\" (second paragraph, last line): 3 times the amount of data received.\nHere is the text: Address Validation Address validation ensures that an endpoint cannot be used for a traffic amplification attack. In such an attack, a packet is sent to a server with spoofed source address information that identifies a victim. If a server generates more or larger packets in response to that packet, the attacker can use the server to send more data toward the victim than it would be able to send on its own. The primary defense against amplification attacks is verifying that a peer is able to receive packets at the transport address that it claims. Therefore, after receiving packets from an address that is not yet validated, an endpoint MUST limit the amount of data it sends to the unvalidated address to three times the amount of data received from that address. This limit on the size of responses is known as the anti-amplification limit."} {"_id":"q-en-dtls13-spec-f0fd5c9cf863f5428524c0206bb7947aa72271ac6acf4e8841868e59128c19ba","text":"I am OK with this change. 
There are no normative changes in there because the normative text segment is only moved to a different paragraph.\nI was referring to this added clause: \"alternately it MAY report PMTU estimates minus the estimated expansion from the transport layer and DTLS record framing.\"\nWhat is awkward here is that this is phrased as normative requirements on an API. Maybe worded as advice \"an application that uses DTLS might need information from DTLS to allow it to estimate the PMTU; DTLS could report a PMTU estimate less any DTLS overheads, or it can report those overheads and allow the application to perform its own estimation\"\nThere has historically been API language here, so I think we should keep it and just adjust it as here.\nThis one probably needs consensus given new normative language."} {"_id":"q-en-dtls13-spec-313ca7a576a5d70dd359fe1d60d86ed438a3d44b870a24f8f1efab76a66532e6","text":"You don't want to age out before you get epoch 3 because otherwise you might be dropping when you have handshake packets.\nFor my understanding: what is the harm of keeping the 0-RTT keys around till the end of the handshake?\n\"Yes, the goal is to encourage people not to keep 0-RTT keys too long, because they might not have PCS. The replay risk is essentially a non-issue, except to the extent that it might already have been replayed.\""} {"_id":"q-en-dtls13-spec-55373f0b18e12002d098bf73ab5f36956fc269de165e66fc24fdf7c01c0e1592","text":"It's possible to negotiate the CID extension but that you want to receive an empty CID (this is how you get asymmetrical CID lengths). The current text says: CID. If an implementation receives these messages when CIDs were not negotiated, it MUST abort the connection with an unexpected_message alert. But does this mean that (for instance) you can negotiate receiving an empty CID and then switch to a non-empty? I am tempted to say \"no\" as I recall QUIC did.\nDoing what QUIC did seems prudent ... 
and if I didn't know what QUIC did I think I would also lean towards disallowing switching between empty and non-empty on a live connection."} {"_id":"q-en-dtls13-spec-c5d9f49320c2d2dc68b9aa601f1c9affd14f68b1a63d9bb9feb4eb57d2880afd","text":"Thanks for this document -- badly needed and easy to read. Thanks also to Bernard Adoba for his recent tsvart review. I support his comments. COMMENTS: Sec 2. It might be useful to introduce the term \"epoch\" in the glossary, for those who read this front to back. Sec 4.2.3: \"The encrypted sequence number is computed by XORing the leading bytes of the Mask with the sequence number. Decryption is accomplished by the same process.\" The text is unclear if the XOR is applied to the expanded sequence number or merely the 1-2 octets on the wire. I presume it's the latter, but this should be clarified. Sec 4.2.3: It's implied here that the snkey rotates with the epoch. As this is different from QUIC, it's probably worth spelling out. Sec 5.1 is a bit vague about the amplification limit; why not at least RECOMMEND 3, as we've converged on this elsewhere? Sec 5.1. Reading between the lines, it's clear that the cookie can't be used as address verification across connections in the way that a NEWTOKEN token is. It would be good to spell this out for clients -- use the resumption token or whatever instead. Sec 7.2 \"As noted above, the receipt of any record responding to a given flight MUST be taken as an implicit acknowledgement for the entire flight.\" I think this should be s/entire flight/entire previous flight? Sec 7.2 \"Upon receipt of an ACK that leaves it with only some messages from a flight having been acknowledged an implementation SHOULD retransmit the unacknowledged messages or fragments.\" This language appears inconsistent with Figure 12, where Record 1 has not been acknowledged but is also not retransmitted. It appears there is an implied handling of empty ACKs that isn't written down anywhere in Sec 7.2 Sec 9. 
Should there be any flow control limits on newconnectionid? Or should receivers be free to simply drop CIDs they can't handle? It might be good to specify. Finally, a really weird one. Reading this document and references to connection ID prompted to me to think how QUIC-LB could apply to DTLS. The result is here: URL Please note the rather unfortunate third-to-last paragraph. I'm happy to take the answer that this use case doesn't matter, since I made it up today. But if it does, it would be very helpful if (1) DTLS 1.3 clients MUST include a connectionid extension in their ClientHello, even if zero length, and/or (2) this draft updated 4.1.4 of 8446 to allow the server to include connectionid in HelloRetryRequest even if the client didn't offer it. Thoughts? NITS: s/select(HandshakeType)/select(msg_type). Though with pseudocode your mileage may vary as to what's clearer. s/consitute/constitute Sec 5.7 In table 1, why include one ACK in the diagram but not the other? It's clear from the note, but the figure is a weird omission.\nAddressed in URL"} {"_id":"q-en-dtls13-spec-58a884f3919d39b01da204ca4458455ab048c6fbf38d962598183b8fd420ab38","text":"NAME here is another try at this.\nI did not realize that the intent of your text was to allow out of order processing. IMO that's off the fairway and we shouldn't allow that any more than we let you reach into the TCP stack and pull out forward messages.\nNAME I don't think the comparison with TCP works - the handshake protocol is internal. We should not force a particular implementation unless there's good reason to. In this case, as explained on the list, there's potential merit of allowing out of order processing of very large handshake messages with known structure. 
NAME What's your opinion?\nAgain, this is a last minute change during Auth48, so the bar is much higher than \"potential merit\"\nThis isn't motivated by security-related reasons like , so I agree with NAME that this does not raise to the level of an AUTH48 change. I propose we land this PR and close out .\nThanks NAME NAME for considering this in the first place. As I understood NAME since this does not impact the wire-format, we can revisit once the RFC has been finalized?\nOnce the RFC is finalized, it's final. Changes can be requested via the , or via a new document that updates the RFC in question.\nOk, thank you. I was referring to NAME comment on the list\nYes. What I was saying is that you could just write an RFC that said \"you can process out of order\"\ncorresponding to the next expected handshake message sequence number, it MUST buffer it until it has the entire handshake message. As discussed in URL The spec mandates buffering all handshake fragments, but some implementations (e.g. GnuTLS) choose to implement simpler reassembly schemes. It has been argued by Ekr that such should be discouraged because they aren't efficient. On the other hand, as Rich Salz remarks, the choice of reassembly mechanism isn't something the peer could tell (the effects of poor reassembly performance are more retransmissions, which could also be due to poor network reliability), so it does feel like we're overspecifying things here. Weakening the MUST to a SHOULD isn't good either as it allows dropping fragments altogether. There's a tension between the desire for a concrete operational guidance, which the current text clearly provides, and a more abstract formulation along the line of \"implementations MUST use a reassembly scheme ensuring forward progress\". As suggested by Ekr, taking this off the mailing list and continuing discussion here. 
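For concreteness, the full-reassembly rule under discussion ("MUST buffer it until it has the entire handshake message") can be modeled as below. This is an illustrative Python sketch, not taken from any real DTLS stack; the class and field names are invented here, and it only reassembles the next expected message, as the in-order qualification would require:

```python
class HandshakeReassembler:
    """Buffers fragments of the next expected handshake message until complete.

    Illustrative model of the "MUST buffer until it has the entire handshake
    message" rule; names are hypothetical, not from any implementation.
    """

    def __init__(self):
        self.next_seq = 0     # next expected message_seq
        self.buf = None       # message bytes under reassembly
        self.got = None       # per-byte "received" flags

    def add_fragment(self, msg_seq, total_len, offset, data):
        """Returns the full message once every byte has arrived, else None."""
        if msg_seq != self.next_seq:
            return None       # this sketch only reassembles the next message
        if self.buf is None:
            self.buf = bytearray(total_len)
            self.got = bytearray(total_len)
        self.buf[offset:offset + len(data)] = data
        for i in range(offset, offset + len(data)):
            self.got[i] = 1   # duplicate and overlapping fragments are harmless
        if all(self.got):
            msg, self.buf, self.got = bytes(self.buf), None, None
            self.next_seq += 1
            return msg
        return None
```

A gradual-processing scheme, as suggested for large PQC messages, would instead hand `data` to the handshake layer as ranges become contiguous, which is exactly the flexibility the MUST currently rules out.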
To be clear: This has no bearing on the protocol itself, it's about how many implementation details the spec should enforce.\nAs mentioned on the list, I don't like the in-order qualification. I agree it's debatable whether there will be cases where it makes sense to process data out of order, but (see list again) it is not inconceivable, and I see no reason to exclude this here. I would therefore prefer to drop the in-order qualification. Hi all, Both DTLS 1.2 and DTLS 1.3 mandate: Can someone explain the underlying rationale? It seems that in the context of very large key material or certificate chains (think e.g. PQC), gradual processing of handshake messages (where possible) is useful to reduce RAM usage. Is there a security risk in doing this? It would also be useful for stateless handling of fragmented ClientHello messages. I'm sure this was discussed before but I don't remember where and who said it, but a server implementation could peek into the initial fragment of a ClientHello, check if it contains a valid cookie, and if so, allocate state for subsequent full reassembly. That wouldn't be compliant with the above MUST, though, as far as I understand it. Thanks! Hanno"} {"_id":"q-en-dtls13-spec-40f88461eb3a402b92b324ee6c2a564a0c50d98a42850937797e1f46d6fe00eb","text":"The draft states in three places that DTLS uses explicit sequence numbers. While this was true for DTLS 1.2, the sequence number in the DTLSCiphertext structure is a mixture of implicit and explicit.
Maybe \"Partly Explicit\"?\nPlease provide me with a PR to add your name to acknowledgements.\nHi, I don’t know what PR is an abbreviation for but I guess it is not clear from my username who I am. I should probably update it, I did not intend to be anonymous :) John Mattsson, Ericsson, EMAIL"} {"_id":"q-en-eap-aka-pfs-06fd8aa05a9b29f976b2dbe155f18bb3203295f5784507c7091d345828036af0","text":"Suggested fix for\nKarl: -- In Section 7.4 it says: \"The Initiator and the Responder must make sure that unprotected data and metadata do not reveal any sensitive information.\" Should it be put in a less requirement-like form? For example: \"unprotected data and metadata can reveal sensitive information and need to be selected with care.\"?\nI agree that the metadata text should not be requirement form as much as it is now – there’s little that implementations can actually do about this, e.g., about the network name. Will make a PR."} {"_id":"q-en-edhoc-77bffc2add05b825f3a3b8361678bbc5d5433754df5354fcf33b50e990ab3dd6","text":"I think this can be merged.\nFixed the problem with ':' in refs. Now merging.\nBased on Stephens review\nReferring to this in :\nFrom the PR : \"As explained by Krawczyk, any attack mitigated by the NIST requiment would mean that the KDF is fully broken and would have to be replaced anyway.\" I didn't understand \"any attack mitigated by the NIST requirement\", and how mitigation could imply that something is broken. As far as I understand, the statement is something like \"if there is a problem deriving secret and non-secret material from the same KDF, then there is a problem with the KDF\" Perhaps \"an attack based on the violation of the NIST requirement is an attack on the KDF itself\"?\nYes, it was likely meants to say \"if any attack is mitigate\" Let's reformulate according to your suggestion\nI made a commit to correct spelling and grammar. I don't think we can give applications more guidance here. How to gererate e.g. 
private keys in certificates is way outside of EDHOC, and it would anyway be hard to say something general as deployments might look very different. I did not want to say anything about randomly generated connection IDs as that is not how connection IDs are meant to be generated. They are intended to be as short as possible. An implementation is allowed to use long random byte strings, but that is not recommended in general.\nPR merged"} {"_id":"q-en-edhoc-3c8deb59f63205fa815ed4f1e518eab11b6bc555f08bdbc9c320d4af51760644","text":"There have been a lot of suggestions and discussion regarding the key derivation lately. Below is an attempt to summarize all the current suggestions that currently seem relatively likely to be done. This is not an issue in itself but I think it is good to have somewhere to discuss the complete changes to the key derivation. Suggested to permute the order of elements in the TH2 sequence to be more resistant to any future collision attacks in hash functions. Exact order unclear, but suggested to put the random and unpredictable GY first. Suggested to use PLAINTEXT instead of CIPHERTEXT in transcript hashes to simplify formal verification. Endpoints agree that they received the same plaintext rather than the same ciphertext. Suggested to use a salt derived from PRK in Extract to avoid using the PRK keys directly in both Extract and Expand. Suggested to use unsigned integer instead of text string for label to save memory. Suggestion to add a final key derivation PRKout and use that in the exporter. Note that the exporter would also have uint labels. Suggestion to base Key-Update on Expand instead of Extract. The label would need IANA registration. hashlength is a new variable that the implementation needs to keep track of. Should consider whether using Exporter for KeyUpdate has any disadvantages.
Suggested that more info on how an application protocol is supposed to use KeyUpdate and Exporter together is needed. The EDHOC internal key derivations (i.e., not using the Exporter) would then be something like below. The exact labels should be discussed. Note that only some of the derivations actually need a label to avoid collision. Including KeyUpdate, the IANA registered exporter labels could be:\nLooks good. I had different naming in mind for the salts: PRK3e2m = Extract( SALT3e2m, GRX ) PRK3m = Extract( SALT3m, GIY ) where SALT3e2m = EDHOC-KDF( PRK2e, TH2, 0, h'', hashlength ) SALT3m = EDHOC-KDF( PRK3e2m, TH3, 0, h'', hashlength ) With this the salts are named by their use rather than by the key they are derived from.\nThe labels were just to illustrate the fact that they don't need to be separate. I think it is still better to just use 0-7. That makes it very easy for the reader to see that there are no collisions.\nNAME Was this a comment on my proposed names of salts, or a separate comment? (I agree labels 0-7 is good.)\nIt was a separate comment. Your suggestion for salt naming is fine with me. Some other non-technical naming things that are hidden in my summary above: I think keylength should use different terms in the derivation of K4 and OSCORE Master Secret. The current naming gives the idea that they are the same. I think the contexts in the MAC2 and MAC3 derivations should be given names. Should consider collecting all the EDHOC-KDF derivations in a single place.\nMight want to derive: K4 = EDHOC-KDF( PRK3m, TH4, 0, h'', keylength ) IV4 = EDHOC-KDF( PRK3m, TH4, 0, h'', ivlength ) to avoid that the application gets K4 and IV4.
Another alternative is to use K4 = EDHOC-Exporter( 7, h'', keylength ) IV4 = EDHOC-Exporter( 8, h'', ivlength ) and forbid the application to use the labels for K4 and IV4\nEditorial; should be: K4 = EDHOC-KDF( PRK3m, TH4, 1, h'', keylength ) IV4 = EDHOC-KDF( PRK3m, TH4, 2, h'', ivlength ) or, if we want all KDF labels distinct: K4 = EDHOC-KDF( PRK3m, TH4, 8, h'', keylength ) IV4 = EDHOC-KDF( PRK3m, TH4, 9, h'', ivlength ) Alternatively, application access to Exporter only allows uint with: K4 = EDHOC-Exporter( -1, h'', keylength ) IV4 = EDHOC-Exporter( -2, h'', ivlength )\nTrying to compile the suggestions. These have a PR: Suggested to permute the order of element in the TH2 sequence … Suggested to use PLAINTEXT instead of CIPHERTEXT in transcript hashes … Suggestion to add a final key derivation PRKout and use that in the exporter The following needs to be addressed: Suggested to use a salt derived from PRK in Extract define salt derive salt Suggested to use unsigned integer instead of text string for label to save memory suggestion to base Key-Update on Expand consider whether using Exporter for KeyUpdate has any disadvantages. more info on how an application protocol is supposed to use KeyUpdate and Exporter together distinguish keylength in K3/K_4 and OSCORE Master Secret define contexts explicitly table of KDF derivations\nTrying to put things together. Let's call this variant A, part 1. where and (edit: depending on method) New salts to avoid double use of PRK, named by its use. Same construction for K4/IV4 as K3/IV3, motivating the renaming of 3m in as 4e3m. The label in the KDF, third argument, is an uint, all distinct for simplicity.\nVariant A, part 2: Distinct \"transcript hash\" in KeyUpdate and Exporter. Exporter does not give access to EDHOC internal keys. OSCORE key length separate from key_length. Edit: Not clear that 'context' is needed in the EDHOC-Exporter. Also, to be strict in labelling we could continue enumeration. 
Here is variant B, part 2 (B.1 = A.1):\nVariant C.2 (C.1 = A.1): (edited:) Same as B.2 but with different labels. With variant C we have essentially: the internal KDF labels with current list: 0, ..., 9; and the IANA EDHOC Exporter Registry with current list: 0, 1.\nNAME NAME NAME and all: Any implementation considerations or other comments on variant C above? We'll start with the PR(s) soon.\nThe last posts here are confusing.... I don't agree. These are separate. We should definitely not try to keep them separate. I.e. it should be Master Secret = EDHOC-Exporter( 0, h'', oscorekeylength ) Master Salt = EDHOC-Exporter( 1, h'', oscoresaltlength )
Currently (for message2 and similar for message3): externalaad = > context2 = > Edit: Seems to be favourable to at least align this blobs, so changed context2 to > and similar for context_3, in 5464707a\nPRKexporter is a key which we only introduce for simple distinction between different KDF applications. Perhaps an overkill? May open up for unnecessary discussion about properties of that key. Here is another variant: Edit: As commented before, Marco thinks a separate PRKExporter is preferred; less error prone etc.\nMost old comments are now addressed, and there are quite a few changes, so it is a good time to merge PR . Remaining points: derivation of K4/ IV4 and text about key confirmation. Not clear if there is any difference in practice between K4 /IV4 being derived from PRK4e3m or PRKout, but the current text about explicit key confirmation is not valid in the former case. May alternatively tune down the key confirmation terminology. Suggested to make clearer distinction between key derivation for EDHOC processing, resulting in PRKout, and key derivation for application, i.e. making use of PRKout in EDHOC-Exporter and EDHOC-KeyUpdate. More guidance on the latter is also proposed.\nTried to address the remaining points in the previous comment with the latest two commits: 5d675ac and 95e35bb. More guidance on KeyUpdate may still be needed. Other points?\nNAME What more guidance is needed on KeyUpdate?\nWas a request from someone. Might be hard to understand how to use it. E.g. that you should alternate key update and exporter Exporter Key Update Exporter Key Update Exporter ...\nAnother guidance needed is how to manage overwrite of PRKout pending confirmation of the derived key from the other endpoint. When the Initiator is updating PRKout it needs both old and new derived keys to be able to communicate securely in lock-step the change to the new key.\nPerhaps good to remark that the entire application contexts (including e.g. 
sequence numbers) need to be maintained before a transition is made.\nsee instead. The current discussion has very little to do with"} {"_id":"q-en-edhoc-41ccfa1ea09177fbc6f24948d66bb507fb2870ed2cc18f447685f7d5758c7927","text":"Further security considerations about how message content is protected against manipulation. Perhaps something like this: \"Manipulations of message1 may not be detected at reception since there is no MAC but, since message1 is included in TH2, if I and R disagree on message1 then subsequent PRKs and MACs will be different. Manipulations of other messages are detected at reception through the MACs, except for manipulation of the encrypted padding of message2, since PAD2 is not integrity protected.\"\nI made a PR for this\nI merged to prepare for submission on -15. is just a simple clarification.\nThe selfie attack mitigation implies that an initiator is stopping the exchange if it receives its own identity. An active attacker can then use this behavior to test if an initiator identity is the same one as a responder identity. In turn, it implies that the initiator anonymity does not hold against an active attacker when this mitigation is enabled. To judge if this is an issue in practice, the following question should be answered: Are there scenarios where an identity is shared between an initiator and a receiver, but the initiator identity should still be protected? One could maybe think about the case where multiple initiators and receivers are on the same server, sometimes sharing public keys and using distinct ports, but it is expected that the initiators identities are still protected. Possible actions: mention privacy loss due to the selfie attack mitigation. update the selfie attack mitigation, by mentioning that initiators should conclude the exchange as usual, but can check at the end whether they are talking to themselves or not. remove the selfie attack mitigation. 
Classical selfie attacks are when there is a pre-shared key between an initiator and a receiver, and the initiator will be talking to itself while it believes to be talking to the receiver. When there are only public keys, as in EDHOC, it is unclear to me if what is currently called in the draft is indeed a selfie attack. An initiator talking to itself knows it and does not believe to be talking to some other identity. As such this is why it could be possible to either remove the mention of selfie attacks in the draft, or update it.\nI and R are expected to stop the exchange if they receive any identity they do not want to talk to. This likely includes their own identity. Good point that this leaks information. I think that should be added, but I don't see that the own identity is special. >Classical selfie attacks are when there is a pre-shared key between an initiator and a receiver, >and the initiator will be talking to itself while it believes to be talking to the receiver. >When there are only public keys, as in EDHOC, it is unclear to me if what is currently >called in the draft is indeed a selfie attack. The selfie attack mitigation is intended for Trust On First Use (TOFU) use cases, where there is no authentication in the initial setup. I don't think an initiator talking to itself in a TOFU use case would know it, except if they checked R's public key. Do you have some better suggestion than \"Selfie attack\"? I agree that it is not perfect. \"Selfie attacks\" have in older papers been called reflection attacks, but the term reflection attacks has also been used for very different attacks. One option is to not give the attack any name and instead just describe the attack.\nI think I like to not give any name, as it does not perfectly match the classical definition. Here, the property which is violated is \"If an EDHOC exchange is completed, two distinct identities/computers/agents were active\".
Which is different to violating the authentication of the protocol as is the case in classical selfie attacks. Good point. So what we are talking about in this issue is a slightly different privacy leak, which happens when the set of parties you are willing to talk to is equal to all responders minus yourself. To be more concrete, let me try to put it in a kind of drafty formal description that could replace the existing formulation. First the existing formulation: Now a possible replacement:\nThanks for the input! This issue is related to the TOFU use case which is still to be detailed (Appendix D.5). We probably should wait with the update until we have progressed that.\nRelates to\nI made PR for this issue. This is also related to PR and PR . The suggested text in is largely based on the suggested text above but I made some changes. I did not think this fitted well with the rest of the draft. The sentence was also not really needed. This is only true in the case where the set of identifiers is a small finite set. In the general case the set of identifiers the Initiator is not willing to communicate with is infinite. I suggest to add the following recommendation for TOFU To not leak any long-term identifiers, it is recommended to use a freshly generated authentication key as identity in each initial TOFU exchange.\nI merged the PR to master. Keeping the issue open for a while."} {"_id":"q-en-edhoc-a09cce14f59f79c2946a63cd65661cedefa8a03a36393d3048fe306e29a2b4fd","text":"Should be checked for conflicts with\nI think fixing the comments above resolves conflict with , so can then be replaced by this.\nThis proposal for update in the key derivation description also addresses the clarifications requested in , thus replacing . We merge this to include it in -15 and enable further reviews.\nThis is not correct. How to derive and store PRKexporter is completely up to the implementation. One can derive PRKexporter everytime Exporter is called. 
This makes it look like PRKout is overwritten. This should be formulated with and like TLS 1.3 or with and or something. How long to save the old keying material (PRKout, PRK_Exported, any derived keys) is likely up to the application. The important thing is that they need to be deleted to give FS.\nThere was a request from someone that more guidance on the use of KeyUpdate should be provided. Maybe not clear how you are supposed to alternate KeyUpdate and Exporter to get FS.\nAs you are considering making the KeyUpdate more detailed, I have a general design question. Are there use cases where one would expect Post-Compromise Security? That is, if one of the participant's PRK_out is compromised, a successful KeyUpdate mechanism between the two honest parties would heal them and lock out the attacker. That is typically what is meant by the double ratchet from the Signal app. The classical way to achieve that is roughly to have each party send out fresh DH ephemeral shares for the update, and add the corresponding DH secret inside the key derivation. (it would of course require more care than that in the details)\nI think part of the plan is to make it less detailed and leave some stuff to the implementation/application :) The current KeyUpdate mechanism only gives forward secrecy. If you exchange some nonces and use them as a context you might get protection against an attacker that is not monitoring all your traffic. I think details of this should be specified outside of EDHOC, like in the OSCORE KeyUpdate draft in the CORE WG. The double ratchet was discussed in the LAKE WG at an earlier point. The conclusion was that it added too much complexity to specify that as part of EDHOC. To get this kind of security an application has to rerun EDHOC frequently (like TLS 1.3). Signal uses the Diffie-Hellman ratchet very often, which makes it understandable to have an optimized Diffie-Hellman ratchet post-handshake.
Constrained use cases of EDHOC cannot do Diffie-Hellman as often as Signal, so it was determined that requiring a rerun of the full EDHOC was the best tradeoff. Should the EDHOC specification say more about the benefits of frequently rerunning Diffie-Hellman? I don't remember how much it says. I recently wrote a paragraph about this that was accepted to RFC8446bis.\nThis is not correct. PRKout does not need to be stored. An implementation can derive the application keys directly and never store PRKout. An implementation can also store PRK_exporter if it doesn't use EDHOC-KeyUpdate.\nThis is not enough. You need to be sure that the old key is no longer needed.\nI merged which addresses some points raised in this issue. Please review and comment if there is anything missing.
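The Expand-based Exporter/KeyUpdate chain discussed in this thread can be sketched in Python. This is a hedged illustration only: it uses HKDF-Expand from RFC 5869 with SHA-256, the labels 10 (PRK_exporter) and 11 (KeyUpdate) floated above, and a simplified stand-in `info` encoding (real EDHOC encodes label, context, and length as a CBOR sequence). It also shows the point made above that PRK_exporter can be re-derived on every Exporter call rather than stored:

```python
import hashlib
import hmac

HASH, HASH_LEN = hashlib.sha256, 32

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    """HKDF-Expand as defined in RFC 5869."""
    out, t, i = b"", b"", 1
    while len(out) < length:
        t = hmac.new(prk, t + info + bytes([i]), HASH).digest()
        out += t
        i += 1
    return out[:length]

def edhoc_kdf(prk: bytes, label: int, context: bytes, length: int) -> bytes:
    # Stand-in info encoding; real EDHOC encodes (label, context, length) as CBOR.
    info = bytes([label]) + context + length.to_bytes(2, "big")
    return hkdf_expand(prk, info, length)

def exporter(prk_out: bytes, exporter_label: int, context: bytes, length: int) -> bytes:
    # PRK_exporter is re-derived on each call here; no need to store it.
    prk_exporter = edhoc_kdf(prk_out, 10, b"", HASH_LEN)  # label 10, per thread
    return edhoc_kdf(prk_exporter, exporter_label, context, length)

def key_update(prk_out: bytes, context: bytes = b"") -> bytes:
    # Expand-based update, label 11. Forward secrecy holds once the old
    # PRK_out (and any keys derived from it) have been deleted.
    return edhoc_kdf(prk_out, 11, context, HASH_LEN)
```

Alternating `exporter()` and `key_update()` then yields per-epoch application keys; keys exported before an update cannot be recomputed from the new PRK_out, which is the forward-secrecy property the thread describes.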
Though, the biggest issue is not so much updating the model of the protocol, but rather making a new full pass on the security claims and new appendices, trying to have a clear systematized link between claims from the draft and what we check in the model.\nNAME let's check if there are some other papers missing and what we want to say about them.\nWe should at least add references to work by ETH and ENS. Let's merge this as a starting point and continue discussion in .\nGiven the various security analyses EDHOC has seen and their insights being discussed in Section 8, would it make sense to add informative references to those (as done, e.g., for TLS 1.3, RFC 8446 Appendix E)? URL\nSeems like a good idea to me\nRelated to\nThis was discussed at the IETF 116 LAKE WG meeting, and in the absence of volunteers to draft such an appendix we decided to start with adding references. We may want to go somewhere in between a whole appendix and a sentence, and expand with a paragraph about each reference. We still need someone to draft that paragraph for the different references and would like to conclude on this fairly soon. We start by merging now.\nClosing since is merged. Hi all, While we do not have any explicit comments on the document, we thought it is maybe still worthwhile to share the following analysis confirmation. We updated our computational analysis of the SIG-SIG mode, of which Marc presented an earlier version at IETF 114, to draft 17. We could confirm that with some of the later cryptographic suggestions (PR , ) included, EDHOC SIG-SIG achieves key exchange security (key secrecy and explicit authentication) in a strong model capturing potentially colliding credential identifiers, assuming, among others, an "exclusive ownership" property of the signature scheme (established for Ed25519, conjectured for the way ECDSA is used in EDHOC SIG-SIG). Our updated analysis is not publicly available yet at this point.
Let us know if a public reference would be helpful for the draft RFC; we can also share the document individually upon request. The last point actually maybe does lead to a comment on the document: Given the various security analyses EDHOC has seen and their insights being discussed in Section 8, would it make sense to add informative references to those (as done, e.g., for TLS 1.3, RFC 8446 Appendix E)? Kind regards, Felix Günther (with Marc Ilunga)"} {"_id":"q-en-external-psk-design-team-afc30ac4b95c322b7fd8818a6c997d7e109092d7437cd93dc58b437ea4670e08","text":"Changes to comments from Russ Housley on 03-Mar-2020\nSection 3.1.1: The text says: C responds with to A. I think that C responds with a ServerHello to A. Section 3.1.2: The text says: A sends a ClientHello to B, without including a key share. It would be helpful to say "without including a key share extension." Section 3.1.3: I think a bit more explanation is needed here. Nonces can make the keys different; why doesn't the nonce from the uncompromised party help in this example? Section 4: I would like to see a bit more context here. I agree with the recommendations for the intended environment, but they are overkill in the environment suggested by draft-ietf-tls-tls13-cert-with-extern-psk. In fact, they are overkill if (EC)DH is combined with the PSK at all. So, maybe we just need to say that when the PSK is not combined with a pairwise key, these rules apply.\nFixed by .\nLGTM -- thanks, Russ!"} {"_id":"q-en-external-psk-design-team-aac111d6994cb0dfadc5045619feee021aab8e0687f06a85f62777195b55c91c","text":"The comment about how there are no guidelines about PSKs like there are about generating passwords seemed like a statement of the motivation for this draft, so I moved it to the introduction. (I was going to add a reference for the statement, but I couldn't figure out how to get the citation to compile correctly.
This was the document I was going to cite, in case you think it needs it: URL)"} {"_id":"q-en-external-psk-design-team-d1c72ee3d62ad86a83b0ebed7da9ee2df74b95e3674e246be3eacaf0221158b2","text":"More = better. There are a few other places that this treatment might be applied elsewhere if the editors feel that way inclined. This seemed to be the most important.\nI'll do a pass for other obvious paragraphs that can be split up."} {"_id":"q-en-external-psk-design-team-18ea6ddadcaa90e5bfce637c8fc57107f58e3731c3847ccbe5e9e0ca2bf62678","text":"Address comments from IESG evaluation. Due to the holiday season, we may not get prompt confirmation that these changes fully resolve the comments."} {"_id":"q-en-gnap-core-protocol-aeb1202dd6202ecdee07baa18233bf138580890cfba45a022b1829d56a4da074","text":"Deploy preview for gnap-core-protocol-editors-draft ready! Built with commit a5ab40750ac9435b370bab7ded6dbdc60b35b01a https://deploy-preview-208--gnap-core-protocol-editors-URL\nIt may make sense to change the order to something like == && is present == && is absent == == && is present\nI did changes for URL There are lots of other things that, I believe, could be improved & changed for the \"Presenting Access Tokens\" section and they will be addressed in separate PRs in the future.\nThank you, this is great! Aligns with the language in"} {"_id":"q-en-gnap-core-protocol-61a83002c6d411fdadd014a5560e3f1db1124312886ea919e31ffa9494094ddf","text":"URL match the flags array in the token request. While this is technically a breaking change, I think we should consider this editorial as this should have been changed when we made the original change to use a flags array in the request in .\n:heavycheckmark: Deploy Preview for gnap-core-protocol-editors-draft ready! 
Built :hammer: Explore the source changes: 8c793bb446d82f053536358106485568fc457842 :mag: Inspect the deploy log: :sunglasses: Browse the preview:\nYou know, I had that thought, and I checked the request section and it doesn't have quotes. I'm going to add the quotes to both parts. This ended up with some other language changing, not just the examples, so let me know what you think."} {"_id":"q-en-gnap-core-protocol-7e99d6cb0c5845f8f0c5da33844c044a06ff55f5557818ffe6d50f4bccf2229c","text":"Opaque identifiers provide a generic reference, so we can simplify the text. (mainly), and as a byproduct also and\n- removed userhandle from main response in section 3\n- changed title of section 3.4 (User -> Subject) and moved the text that requires RO = end-user to the top of the section, for clarity\n- changed name (userhandle -> user) and wording in section 3.5\nMaybe we could also simplify the title "Returning Dynamically-bound Reference Handles" into "Returning Reference Handles" (didn't do it yet).\nDeploy Preview for gnap-core-protocol-editors-draft ready! Explore the source changes: 73de700ce82846ec5e09f8f60550fac7bee0dfb5 Inspect the deploy log: Browse the preview:\nWith the adoption of the subject identifier format, we no longer need to have a GNAP-specific value and can instead leverage this more standard field. Alternatively, we could have a GNAP-specific subject identifier format, instead of a separate field.\nIn section 2.4 (Requesting User Information), the first sentence starts with: If the client instance is requesting information about the RO from the AS, ... From the title, the acronym "RO" should be changed into "end-user". The case where the RO is also the end-user is a specific case. This leads to the following change: If the client instance is requesting information about the end-user from the AS, ...
In the following sentences of this section, RO should be changed to end-user.\nThe current text really means RO, and generally considers the case end-user = RO. Any other case would even trigger an error: "If the identified end-user does not match the RO present at the AS during an interaction step, the AS SHOULD reject the request with an error." (section 2.4) Which makes sense because why would we return something about the RO to a potentially unknown end-user? (the only problem is that this wording is buried in the text, so it's hard to understand) Back to the general case: The RO is in control of policies at the AS. The end-user is in control of a client and a request to the AS. The end-user and client may or may not have any prior relationship with the AS. The subject info is what is known by the AS (in theory could be RO and maybe end-user, but notice that in the terminology, we define the end-user as a natural person and the RO as a subject entity). I think we should clarify when we could have a remote RO too. Then do we even return a result if the AS knows something about the end-user? (different from RO in that case). Plus we need to keep in mind that we also use sub_ids for URL In general, we might want to avoid the term "user" as it is not specific enough.\nThe title has been changed to "Requesting Subject information". This title is incorrect as long as a subject can also be an organization or a device. An AS can hold information about an end-user or about an RO when the RO is a human being. It does not hold information about organizations or devices. However, an end-user may belong to some organizations and/or use some devices. IMO, I would replace it with "Requesting end-user Information". If an AS holds information about an end-user, it seems logical that the end-user should have access to it. If it happens that the end-user also has an RO account on the AS, then it should also have access to it.
In such a case, another section with the following title "Requesting RO Information" should be created. RO = end-user was a common case in OAuth. In GNAP and in the general case an RO is different from an end-user and an AS may have no relationship with any RO (e.g. when attributes only are being used).\nI guess you refer to "2.2. Requesting Subject Information". We might have to harmonize a bit, but for instance "2.3. Identifying the Client Instance" might very well hold information about devices or at least the software running on it (that's the point of client software attestation to be discussed in ) We don't always know in advance whether it will be an RO or an end-user only, so the proposal isn't always applicable.\nI was indeed referring to "2.2. Requesting Subject Information" which has no relationship with "2.3. Identifying the Client Instance". This means that your arguments do not apply to section 2.2. and, by implication, that my argumentation currently remains valid.\nWell, I take the point into consideration but I don't think we should base our reasoning on the structure of sections, which can change\nIdentifying the User by Reference: Editor's note: We might be able to fold this function into an unstructured user assertion reference issued by the AS to the RC. We could put it in as an assertion type of "gnap_reference" or something like that. Downside: it's more verbose and potentially confusing to the client developer to have an assertion-like thing that's internal to the AS and not an assertion.\nWhy an assertion? Is that reference supposed to be linked to a session (cf "represent the same end-user to the AS over subsequent requests") or can it be persistent? I would see that more as an opaque identifier, rather than an assertion. Something like: asref seems a bit specific to GNAP (reference managed by the AS) but it could always be read "as a reference" in a more general setting.
There's a variety of generic names that we could choose if needed, localuid, internal_uid, etc. But that really raises the question of what additional types we could define as part of secevents (section 7.1.2). Beyond the case mentioned here, it would be most useful as an opaque identifier when RO != end-user, to avoid leaking private info about the RO (although currently we don't seem to cover that case yet). It's indeed more verbose than "user": "XUT2MFM1XBIKJKSDU8QM" but at least it's always the same structure (it doesn't depend on whether it's a reference or not), and it also makes the type more explicit.\nA subject identifier makes sense here, and I agree that it should be a new value to let us constrain it to the GNAP context.\nFixed by SECEVENT opaque identifiers\nI don't think this goes far enough -- I am thinking that with the opaque identifiers we can now simply return a with a format of that the client can be allowed to send in a follow-up request if it wants to. This way they all get treated equally, there's no special field anymore."}
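A sketch of what the opaque-identifier approach discussed above could look like: the "opaque" subject identifier format comes from the Security Event Token subject identifiers work, while the response shape and helper below are illustrative, not normative GNAP syntax.

```python
# Illustrative subject-information response: the AS returns subject
# identifiers, one of which is an opaque reference that it manages.
subject_response = {
    "subject": {
        "sub_ids": [
            {"format": "opaque", "id": "XUT2MFM1XBIKJKSDU8QM"},
        ]
    }
}

def opaque_ids(response: dict) -> list:
    # Collect the opaque identifiers a client instance could echo back
    # in a follow-up request, as suggested in the thread above.
    return [s["id"] for s in response["subject"]["sub_ids"]
            if s["format"] == "opaque"]
```

Because every identifier carries an explicit "format", the opaque AS-managed reference gets treated the same way as any other subject identifier, which is the "no special field anymore" point made above.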
Additionally, is the \"error\" parameter exclusive with others in the return?\nAs a starting point, I would suggest adding these sentences The error code parameter MUST be required and its value SHOULD be limited to ASCII characters. The error code indicates that a request is malformed. The AS MUST return an HTTP 401 (Unauthorized) status code to indicate that it can't identify a client instance (the error code can be ). The AS MUST return an HTTP 401 (Unauthorized) status code to indicate that it can't authenticate a client instance (the error code can be ). The AS MAY uncover additional information about the cause of the error when it can authenticate a client instance (the error code can be , , and so on).\nAddressed by , still need error codes for things like token management.\nThe type of the of the error property in a response is currently ambiguous. In section 3 it is defined as an and in section 3.6 it is defined as a .\nNAME thanks for catching that -- looks like both \"error\" and \"errordescription\" should be added to the list of parameters. I think the intent might have been for the value of \"error\" to be an object with \"errorcode\" and \"error_description\" fields inside of it though, so it might be better to align things in that direction instead. Do you have thoughts on which is preferable? Interop testing of error conditions is really difficult in practice, so this is a great catch!\nYaron: The AS MUST ignore any unknown extension. Justin: I’m OK with this stance, but it needs to be consistent throughout the protocol, not just at this level. Yaron: we can actually have one policy for an AS that doesn't understand something defined in the RFC (e.g. a token binding method) vs. extensions. Unknown extensions clearly must be ignored, otherwise you cannot deploy extensions incrementally. 
For RFC elements, this is more nuanced.\nWith the guidance on extension points from and error codes from and similar, this should now be covered by the current text."} {"_id":"q-en-gnap-core-protocol-5dbaefbba5dd801c29d1376eb10cc372134b187e55e86bed6ca4ac8bbeaa8f28","text":"Expands the key parameter to allow it to be an object or a string, to facilitate methods with multiple parameters such as HTTPSig.\nTo edit notification comments on pull requests, go to your .\nSome key proofing methods have additional options that could be signaled in the GNAP protocol structure. Notable, HTTP Message Signatures has the ability to use different HTTP Signing Algorithms and different HTTP Digest Algorithms. The field could be changed to an object to accomodate this kind of use, allowing each proofing method to define its own parameters: Each method can also define \"Default\" values for missing parameters, allowing this to collapse in the simple case back to a string as it is today: These string values would have a deterministic expansion defined in the core protocol.\nNote that the httpsig document itself wavers between and and does not require either of them. Though maybe this is OK if the current document makes mandatory for the appropriate cases.\nThere are some updates pending in the Digest draft -- once that's out, we'll reference using as that's the most appropriate for our use case, and examples will be updated."} {"_id":"q-en-gnap-resource-servers-c70ea61fa2d12d69c6c48e0077da07fbe4cee9e6ebbb73e526ff4e77575ec7b6","text":"Deploy Preview for gnap-resource-servers-editors-draft ready! Built Explore the source changes: 611af53328d71bac58f517702c5775a9c514d88d Inspect the deploy log: Browse the preview:"} {"_id":"q-en-groupcomm-bis-d2b76741c208145052a1dedba86c17a88206cf1be734f26eb0b405dce27b260d","text":"text for proxy security cases\nReverse proxy may act as origin server and hence client doesn't use e2e security to server group. 
However, in other cases a reverse proxy may assume the client uses e2e OSCORE security and not add any security by itself. To consider all the different, possible/valid cases for this. And see if it's different for forward proxies.\nOne particular example is a client that connects over CoAP+TLS+TCP (or CoAP+DTLS) to the reverse-proxy, where the proxy uses OSCORE security for the multicast request. The client is not aware of the (group) security material which the proxy and servers have.\nAt high level, there are 4 different potential cases of \"proxying with security\". Forward proxy with e2e security [=Group oscore e2e via proxy] Forward proxy with two-leg security [=unicast OSCORE or DTLS on 1st leg, with group OSCORE on 2nd leg] Reverse proxy with e2e security [=Group oscore e2e via proxy] Reverse proxy with two-leg security [=unicast OSCORE or DTLS on 1st leg, with group OSCORE on 2nd leg] Currently the draft doesn't say anything about these combinations, which are valid, which not, and where to find the security solution to them (e.g. a pointer to draft-groupcomm-proxy for covering most of these cases) This text/analysis could be a new Section 5.3 \"Proxy Security\" or so. Note: cases 2 and 4 depend on a 100% trusted proxy so no e2e security model. The client in this case is not even aware necessarily of the existence of Group-OSCORE. The proxy stores the \"group member identity\" to be used to make the Group OSCORE protected request. This may be a single identity, or a separate identity for each authorized user of the Proxy service. Case 1 and 3 most likely means OSCORE-in-OSCORE is needed?\nIt seems like a good idea. Intermediaries are usually considered non-trusted, hence the interest for an actual e2e security model between origin client and origin servers. How would the proxy store and use a \"group member identity\" ? 
Even if the proxy somehow obtains multiple different Sender IDs from the Group Manager (i.e., one for each authorized user of the proxy service), the proxy would have to sign a Group OSCORE protected request with a private key of its own (possibly one per corresponding Sender ID), which is not in control of the actual origin client. With such a level of trust on the proxy, and with a client not interested in e2e security anyway, it would be just easier to think of the proxy simply as another single member of the same OSCORE group including the servers, thus sacrificing verifiable data authorship/origin. Maybe this might be useful/applicable for case 4, with a client anyway likely unaware of what happens beyond the reverse-proxy and to be actually reaching a group of servers. Not sure about case 2, where the client would be aware of an actual group of servers it tries to reach. Yes, when exactly OSCORE is also conveniently used between Client and Proxy, to fulfill REQ2 from URL : REQ2. The CoAP proxy MUST identify a client sending a CoAP group request, in order to verify whether the client is allowed-listed to do so. For example, this can rely on one of the following ... Note that this requirement applies even in case end-to-end security is not enforced between client and servers.\nAgree, in the two-legs security case it seems sufficient that the proxy has one \"identity\" within the group. And I agree that this proxy would most likely be a reverse proxy Case 4 i.e. it operates like a (trusted!) origin server. By definition of a reverse proxy , it has to be trusted just like a client trusts an origin server with being able to know and manipulate the served content.\nThis issue was addressed in Section 5.3 of version -04. NAME , can we close it?\nYes!\nLooks good! See inline some proposed rephrasing."} {"_id":"q-en-ietf-homenet-hna-2cbee0a784e57dab405444116c5eee44a9154dbb808a36226c6b81d2a48d25ad","text":"…ument, dealing with issue\nhandling different views (7.0) .... 
some would like to have different views, This document deals with the view that the Internet gets from the CPE(s) only."} {"_id":"q-en-ietf-rats-wg-architecture-2f51421d37ac9fd3d8e0b327303ee7b9c5733c4b282d2811fa976fc6b4810a15","text":"When comparing the term 'Appraisal Policy for Evidence' and 'Appraisal Policy for Attestation Result', they are both saying 'evaluates the validity of information about an Attester'. This makes me feel Verifier and Relying Party are doing the same things and they are redundant. I think it's more appropriate to say that the Relying Party evaluates the evaluation results generated by the Verifiers."} {"_id":"q-en-ietf-rats-wg-architecture-7d9b1eff3a1eed491407ceb8225d61f9ed12a7e078e3f4d343c921adefa234f9","text":"TLS authentication is sufficient for establishing trust in the various entities before sending them privacy sensitive data. TLS is also widely supported today and attestation between servers is not. Most likely TLS will be the mechanism. This PR adds mention to in addition to attestation.\nAddresses\nsays that the way for an Endorser to trust and Verifier is to use attestation. While I won't categorically say this should never be done, it seems like this is overly complex and a somewhat misuse of attestation. I think of the Verifier as a service like a web site, not an implementation like a TEE, so TLS or similar, not attestations, seems like the more efficient and conventional way to do this. I think attestation is associated with actual HW and SW, for example the Dell server, Intel CPU and Web stack used to run the service. The operator of a Verifier gets to change those around without having to tell the people using the service. On the other hand, if a Verification Service changes owners, say from Chinese to American, then Endorser concerned about confidential stuff in the Endorsements really wants to know. I also think that Verifier Owner should not be lumped in with Endorser. 
The Verifier Owner is very likely to be tightly coupled organizationally with the Verifier, where the Endorser is not likely to be.\nPR addresses this well enough.\nReviewing the trust relationships described in the architecture document got me thinking about sharpening a distinction between device attestation (e.g., EAT for phone) and service authenticity (e.g. HTTPS for a web site): Device Attestation Focus is on the implementation, the HW and SW The legal entity of interest is the device manufacturer The claims are about the HW and the SW, not the device owner Example: a mobile phone (not its owner/user) Example: integrity of a remote device Service Authenticity Focus is on the provider of the service, not the HW or SW The legal entity of interest is the service provider There is no equivalent of claims, but if there was they would be about the business or person operating the service Example: a web site Example: an email provider (IMAP service) Service providers (e.g., web sites) use whatever HW and SW they want to use. As a user of the service we don’t care what HW and SW they select. We trust the service provider to pick good HW and SW and to configure and operate it correctly. They might change the HW and the SW and as a user we won’t know. For example we trust Bank of Xxx to operate their banking web site correctly with little concern for their HW. This is why attestation of the HW and SW is not so useful to the user of a service. Users of mobile phones and IoT devices are not trustworthy to pick secure HW and SW. They are not professionals like the people running web sites. There is little accountability against them if they pick bad SW and HW. Therefor we want attestation for these usage scenarios. This is why EAT, FIDO and Android Attestation exist. I don’t think the line between device attestation and service authenticity is completely black and white. In some use cases attestation may be bent to provide some service authenticity. 
But I don’t think it makes sense to add attestation to the millions of web sites out there, add support for it to browsers and make attestation an extra layer of security for web browsing. I think it is fine for a service provider to use attestation inside their service to monitor integrity and such of the HW and SW providing the service. But usually that won’t be manifest outside the service. It could be awkward for the user to the service to have to re evaluate the service every time the service provider changes HW. Another way to look at this is from a certification / regulatory view. If the thing you're looking at is a candidate for implementation certification such as Common Criteria, FIPS, FIDO certification, then it probably lines up with device attestation. On the other hand, if the thing your are looking at lines up with regulation (e.g., banking regulation, GDPR,...) and to some degree OWASP, then it probably lines up with service authenticity. I think the RATS architecture document should probably have text that describes when attestation is applicable and when it is not, for example saying something like “attestation is not a replacement or augmentation for HTTPS/TLS\"\nIn the last editors session, the editor's rough consensus was that https does not provide the authenticity but it provides a secure channel. Instead, authenticity is checked via successful authentication rooted in identity documents (I hope I captured appropriately, please bash). Taking that into account, how does this effect this issue? Looking at the last paragraph, \"Remote attestation procedures are not a replacement for secure authentication, they are based on secure authentication.\" might be missing vital parts. Does anything need to be added or modified here?\nNote that is INCORRECTLY referenced here. Is the resolution of a different issue about HTTPS vs Attestation.\nI think comments from NAME do need to be taken into account when the text to address this issue is written. 
My text in opening the issue is not intended as the text to resolve the issue. Probably the text that resolves the issue needs to refer to \"HTTPS plus the web certificate infrastructure that includes public CA's and browsers and/or private CA's used in B-to-B...\" or such.\nI think the RATS architecture document should probably have text that describes when attestation is applicable and when it is not, for example saying something like “attestation is not a replacement or augmentation for HTTPS/TLS\" The design team disagrees with this. Every authentication (including HTTPS) can be said in terms of attestation. It is the verifier that allows the decoupling of the RP to know the expected state of the Attester."} {"_id":"q-en-ietf-rats-wg-architecture-b24380d2825743cfca6ade79b59e617fe3b7e4f3509de14b44e79b9c5e27bd98","text":"Added clarifying text -\n\" including roots of trust\" from .\nWe should add the sentence \"Normally, claims are not self-asserted by a layer or root of trust.\" in the section describing layered attester / layering. around line 1120?\nI'd not object that. I'd be okay with \"Normally, claims are not self-asserted by a root of trust.\""} {"_id":"q-en-ietf-rats-wg-architecture-cb60b699a038dfe2abe438043f079f925790353470efa1e0aba4b27bf2778abc","text":"Reworded 2nd and 3rd paragraphs\n` Is there a different way to say this? I assume in Passport, we’re not relying on the Attester to report itself as untrustworthy? I think it’s more that it doesn’t get a passport, so it has nothing to hand the dude at the immigration desk? i.e. untrustworthiness is indicated by lack of a passport?\nThe paragraph should begin with \"An Attestation Result that indicates ...\" The Relying Party is always the entity / role that consumes the Attestation Result. The text that describes the Attester (in the passport model) technically violates the role interaction model. 
Instead, the refusal on behalf of an Attester (acting as a proxy for the Attestation Result) to convey the AR is a protocol or interaction break down scenario. And yes it is correct for the Relying Party to, by default, assume that lack of receipt of an Attestation Result means the Attester is presumed to be non-compliant. If the paragraph stated this simply up front then we can avoid dove tailing it into the description of what constitutes non-compliance behavior. For example, paragraph 2 could start differently: \"By default the Relying Party assumes the Attester does not comply with Verifier appraisals. Upon receipt of an authentic Attestation Result and given the Appraisal Policy for Relying Party is satisfied. The Attester's state of compliance changes to be in compliance and actions consistent with compliance may be applied. If the Appraisal Policy for Relying Party is not satisfied, actions consistent with non-compliance may be applied. For example, the Attester may require remediation and re-configuration or simply may be denied further access.\" The third paragraph seems redundant to this (and somewhat wordy)."} {"_id":"q-en-ietf-rats-wg-architecture-aedc2640bfc031819576d4c01d2d05c9c2e2bd0a47d32cabc8a3f6d3723a8330","text":"rebased.\n` The first sentence seems to say ‘you can do whatever you want’, but the second seems to say “you should sign the messages” This doesn’t seem to be about integrity; it says Evidence must be confidential. Is that actually a requirement?\nI don't see where it says RATS messages have to be confidential. 'end-to-end' means between RATS roles. Which can be achieved at different nesting levels; where nesting is interpreted to mean nesting in the message (data) layers. Summary - I don't see a problem with the proposed text.\nI am inclined to agree with Ned here, but we could change the first line, while we're at it. 
The reuse of the word \"protected\" used in the 1st sentence in the 2nd sentence now might make it more clear that this message protection can happen on many layers.\nOk but this PR merges into the branch instead of master. Recommend rebasing on master so that order of merge doesn't matter."} {"_id":"q-en-ietf-rats-wg-architecture-6ffd51b60b91a195cf808e889182417570bb60b2d763cb91e580956c36f4ddce","text":"** Section 12.1.2.1. Would it be more precise to say: OLD The degree of protection afforded to this key material can vary by device, based upon considerations as to a cost/benefit evaluation of the intended function of the device. NEW The degree of protection afforded to this key material can vary by the intended function of the device and the specific practices of the device manufacturer or integrator."} {"_id":"q-en-ietf-rats-wg-architecture-17c3f0d4ddfbbe84a756fa49e1fc60dc79bd3152365183fdb800229b7cc1af95","text":"Throughout the document the term \"trust anchor\" is meant in the to be a public key and associated (meta)data. This definition does not cover attestation schemes based on symmetric crypto, e.g., , and Arm PSA, and should be extended to also cover such cases.\nWas there specific text that you believe needs to be changed?\nyes, we identified at least two places: §7.1 and §12.4.\nReads great to me. Any other opinions?"} {"_id":"q-en-ietf-rats-wg-architecture-4a1bd51d1e98380ffd26e825c367ceac19d2cf5b376079599f3050b1b5467a99","text":"Add in text from Birkholz draft, with some wordsmithing and grammatical fixes from Dave Thaler"} {"_id":"q-en-ietf-rats-wg-architecture-009d1a84dc2570d08a9cf20d3f96eafbd32136be2c4cd3cb62b06eea75bd01e3","text":"Here's a first attempt to combine terminology sections, and trust model. Includes new terms Endorser and Appraisal Policy per recent discussion. Signed-off-by: Dave Thaler\nDone\nDone. 
(But if you have feedback on a sentence like this in the future, please attach the comment directly to the code line under the \"Files changed\" tab, to make it easy to find which phrase you're referring to.)\nDone\nUpdated. How about now?\nI would like to discuss the possibility of combining the terms \"Endorsement\" and \"Attestation Result\". Just like we have an Appraisal Policy for a verifier and an Appraisal Policy for a relying party, some might view an Attestation Result as an Endorsement for use by the relying party. I'm ok either way, but wanted to discuss this in our next call to see if others think it would help or confuse things.\nI still believe my original definition of Attestation is good; I haven't seen any comments here about what the problem was. For comparison, URL says it's \"The process of validating the integrity of a computing device such as a server needed for trusted computing.\" I'm also ok with that definition.\nI’m OK with that wording for introducing the concept. It may break down if the terminology for ‘device’ and ‘trusted computing’ becomes overly specific.\nThe word 'health' is generally being used where other words, such as 'trustworthiness' or 'state', could be used. I'm concerned that 'health' might not easily be synonymous with 'trust'. It also seems to bias the architecture toward NEA (RFC5209), which IMO does not define attestation."} {"_id":"q-en-ietf-rats-wg-architecture-972315f0f5f8ed95a660303761574dd2ad4f6c8063e4498414aa26ca42682a79","text":"possible text closing .\nThe architecture draft should say that known-good reference values can be either a part of an endorsement or part of an appraisal policy or both.\nI would only want to mention \"known-good reference values\" at all if it comes with an explanation of why they're not sufficient in the real world. Not sure whether we want the arch doc to go there. We can discuss in a meeting.\nI wouldn't say they are never sufficient. 
I would say KGVs serve a specific purpose: documenting the successful loading of well-known software on well-known Attester equipment, where the Attester equipment supports the signing of extended/hashed measurements. Adding Guy/Ned as they probably have better words for this based on their work with TCG & DICE. Eric\nHi Eric, Dave, I'd just add one more restriction to the list... if \"known good values\" means \"if the measured hash in a PCR is exactly this value then we're good\", the domain has to be narrowed to single-threaded well-known software instances, e.g., the components of a BIOS. The complexity of known-good-values comes with asynchronous threads such as IMA in Linux; in that case, the hash in the TPM PCR can only serve as a check on the (perhaps huge) number of individual hashes in the log file, each of which needs its own Known Good Value. Hence the SWID tags. I'm certainly not qualified to say how all this would work in EAT- or TEEP-land... /guy\nGoing back to Laurence’s original request. The observation he is making is that both the Endorser and the Appraisal Policy for Evidence Owner entities can say what is a ‘reference’ or ‘known good’ value. Presumably, an Owner entity did some due diligence to ensure the KGV is indeed ‘good’, given that Owners may not have access to source code or other tools that the original vendor has access to. But ultimately, all appraisal decisions are qualified by the Owner’s wishes. There is complexity in ecosystems when a vendor (e.g. a BIOS vendor) doesn’t give up ownership of the FW even though the computer is sold to the customer. The PC owner might have a policy that allows the BIOS vendor to supply an Appraisal policy for just the BIOS. The Roles architecture for the BIOS component may differ from the Roles architecture for other components. In this case, the Endorsement and Policy roles may be supplied by the same Entity (i.e. the BIOS vendor). 
See more inline below - Nms>\nThe architecture doc should (must?) facilitate a good understanding of attestation end-to-end for the reader. KGVs seem like a very core concept to attestation. Leaving them out of the architecture document will leave the reader scratching their head, like I was scratching mine. I don't think it has to be a detailed, concise or definitional explanation. Something like this: For some claims the Verifier will compare the claims in the Evidence to known-good values or reference values. In many cases these are hashes of files, SW or memory regions. In other cases they might be expected SW or HW versions. In other cases, they may be something else. The actual data format and semantics of a known-good value are specific to claims and implementations. There is no general-purpose format for them or general means for comparison defined in this architecture document. These known-good values may be conveyed to the Verifier as part of an Endorsement or as part of Appraisal Policy or both, as these are the two input paths to the Verifier. LL\nI agree that the concept Laurence outlines should be in the architecture doc. I think there are words to that effect currently. For example, section 7.2 says “devices from a given manufacturer have information matching a set of known-good reference values,” and “Thus, an Endorsement may be helpful information in authenticating information about a device, but is not necessarily sufficient to authorize access to resources which may need device-specific information such as a public key for the device”. Admittedly, this is scant and leaves a lot to the reader to figure out or make assumptions about. The architecture could more clearly capture the idea that Endorsements are “reference” claims that are compared with Evidence claims to validate their correctness. It should introduce other common terms such as 'known good values', 'reference values' and 'endorsed values' as roughly synonymous. 
I think it should go further to say that Endorsements can include values that are static (intrinsic), that don't require attestation, but can be asserted about a class of device, platform or entity, and appraised according to an appraisal policy for Evidence (even though the claims don't appear in Evidence). While I think it is possible to interpret the existing wording according to the above understanding, it is also possible to interpret it differently. Therefore, the section should be improved. I also think the first sentence (and definition in terminology) are insufficient. While “An Endorsement is a secure statement that some entity (e.g., a manufacturer) vouches for the integrity of the device’s signing capability.” is correct, it is insufficient because it is also a statement about the device's Attesting Environment's ability to collect claims securely.\nGiven that this Architecture document will serve as a guide to additional work that is needed from the RATS wg to define data schemas and interfaces for interoperable implementations of verifiers, with endorsements from multiple vendors to be compared with evidence from different implementations of attesters, isn't it worthwhile creating a hook in this document for the scope of work needed to tackle it in future documents? If yes, this document should define KGVs, the types of KGVs that can be endorsed or defined in attestation policy, a need for a data schema for each type of KGV, and a set of interfaces to fetch/publish it, deferred to a future document. Similar to the claims encoding formats specified in section 8, a specification needed but deferred to a future document for KGV endorsement formats would be useful.\nA Google search for \"known good values\" yields some mention in the Cisco trust center and a few books on security. Linux IMA mentions \"good measurements\" once that I could find. So there's some use of the term out there and it seems worth mentioning along with RIM. 
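The appraisal flow suggested above (a Verifier compares claims in Evidence against known-good values arriving via either of its two input paths, Endorsements and Appraisal Policy) can be sketched in a few lines. This is a hypothetical illustration; the function and field names are invented, not taken from the architecture draft:

```python
# Illustrative sketch: merge known-good values from both Verifier input
# paths and flag Evidence measurements with no matching reference value.

def appraise(evidence, endorsement_kgvs, policy_kgvs):
    """Return the components whose measured hash matches no known-good value."""
    known_good = {}
    known_good.update(endorsement_kgvs)  # e.g. supplied by the manufacturer
    known_good.update(policy_kgvs)       # owner policy may override endorsements
    failures = []
    for component, measured_hash in evidence.items():
        if known_good.get(component) != measured_hash:
            failures.append(component)
    return failures

evidence = {"bios": "aa11", "kernel": "bb22"}
endorsements = {"bios": "aa11"}
policy = {"kernel": "bb22"}
assert appraise(evidence, endorsements, policy) == []
assert appraise({"bios": "ffff"}, endorsements, policy) == ["bios"]
```

Note this only models the simple exact-match case; as the IMA discussion above points out, log-based measurement schemes need per-entry reference values rather than a single PCR match.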
It is also a very descriptive term that is probably immediately clear to the user."} {"_id":"q-en-ietf-wg-privacypass-base-drafts-2e51b5b3e278bbae2391573b25fddba48d027ed1a2d2d49b46ee4bf01036e21e","text":"Generated from . cc NAME NAME NAME\nThese test vectors should cover the whole flow of the protocol, starting with the input and producing an output."} {"_id":"q-en-ietf-wg-privacypass-base-drafts-8a9c1527aa53418c29d06d6463b812b7aa2ca28c3096dbde48f5d30e0ea9d0fa","text":"Spell out origin steps more clearly. Split token caching into its own section. I tried having an overall \"origin\" and \"client\" section, but it didn't flow as well."} {"_id":"q-en-ietf-wg-privacypass-base-drafts-4d467baae1e9aad24ea2b67e7d1fcd88cb660a542cd3e808b5e60b9a4d16f6e3","text":"The section references seem to be breaking the build, so I'll fix that later before we land this.\nOrigins currently choose whether their challenges are per-origin or cross-origin. For cross-origin, double-spend prevention is only as good as the coordination between origins and the Issuer. Per-origin allows double-spend prevention to be isolated to a single origin; it also prevents the cache of tokens being taken up by some other origin.\nWrite this as \"any origins that share a redemption context must also share double spend state for that context.\""} {"_id":"q-en-ietf-wg-privacypass-base-drafts-efee838b1b0c349b313e2aa813a3ab8384d4f8466092f5bcd650471b807f5f96","text":"The publicly verifiable basic token issuance protocol doesn't include an origin name that the issuer can see, so it does allow origins to use that issuer (as long as they know the key) without explicit coordination or permission from the issuer. This isn't necessarily a problem, but we should likely mention it."} {"_id":"q-en-ietf-wg-privacypass-base-drafts-29354ad285db797bf037e78b529056a9ae776a5d487df48f2efbdbf86cd3ef7c","text":"This doesn't go overboard and clarify non-collusion assumptions for all models. 
It simply states the requirement that entities with access to the attestation context and redemption context do not collude, and then generalizes somewhat for simplicity. cc NAME NAME\nThe architecture assumes that any two parties do not collude with each other. I think this is too strong an assumption. In particular, if the Origin does not collude with anyone else, what is the point of unlinkability of the issuance and redemption flows? Looking at the use cases, it seems that where the Attester and Issuer are distinct there is a needed non-collusion assumption between them to achieve desirable privacy, but I think the origin colluding with one of the Issuer or Attester should be fine and would be worth considering in scope. Also, in addition to the non-collusion assumption, are there any other assumptions made on the parties, such as honest-but-curious, or do we consider them to be potentially fully malicious? It would be nice to spell this out.\nIf the Origin colluded with the Issuer it would learn the client's identity (presumably), which would be bad. Ultimately, the desired property is that any sort of collusion that would allow the redemption context to be linked to the attestation context is prohibited. I'll try to clarify."} {"_id":"q-en-ietf-wg-privacypass-base-drafts-a586d338b1bbb9a4c25346d622ac021f0e08067a9465b55c1c971807e5a10339","text":"I also moved from the message namespace to the application namespace. The difference is helpfully summarized .\nI see in the current draft that they register and . Since \"token\" is a very broad term, I'm wondering if something slightly more specific to Privacy Pass such as and might be reasonable?\nIf we make it more specific, it should be \"private\" instead of \"privacy\", to be \"private-token\" and align with the auth scheme. 
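The earlier guidance that "any origins that share a redemption context must also share double spend state for that context" amounts to keying the spend cache by redemption context rather than by origin. A hypothetical sketch (class and method names are invented for illustration):

```python
# Hypothetical sketch of per-context double-spend state: all origins in
# one redemption context consult the same spend set, so a token replayed
# at a different origin in the same context is still caught.

class DoubleSpendState:
    def __init__(self):
        self._spent = {}  # redemption context -> set of token nonces

    def redeem(self, context, nonce):
        """True on the first spend of (context, nonce); False on a replay."""
        seen = self._spent.setdefault(context, set())
        if nonce in seen:
            return False
        seen.add(nonce)
        return True

state = DoubleSpendState()                 # shared across origins in a context
assert state.redeem("ctx-a", "n1") is True
assert state.redeem("ctx-a", "n1") is False   # replay caught, regardless of origin
assert state.redeem("ctx-b", "n1") is True    # distinct context, distinct state
```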
There's a fair amount of deployment on these types today, so this would be a relatively large change, or servers would need to tolerate both versions.\nI'm in favor of making this change and having issuers support both for a little while.\nCool, that would work. NAME want to make the proposed update?\nWill do!\nNAME PR's up!\nLooks good!"} {"_id":"q-en-ietf-wg-privacypass-base-drafts-7f8afe82a7fb8d987588e503593c13d489643015e6aeff3dc2458f3f6a8a945c","text":"The document refers to authentication scheme attributes throughout; the term from HTTP is 'parameters'. Also, both the challenge and credentials definitions should say what to do with unrecognised parameters."} {"_id":"q-en-ietf-wg-privacypass-base-drafts-ad82381f1ae483825adfe901c94f59341d60bbb77aaf59156616c4a104ae928f","text":"These functions do not need the \"PP_\" namespace prefix as it's implied from the specification. I also removed underscores in favor of normal camel case notation."} {"_id":"q-en-ietf-wg-privacypass-base-drafts-ccf97fae925e598a4abca5bc64a8ea0077fcc4e717ae1a70fd5e204f663630ef","text":"cc NAME NAME for an extra set of eyes to check me here -- ASN.1 is not my forte. I went based on the implementation , which I think matches what Frank did , but I would definitely appreciate confirmation!\nFriendly bump NAME and NAME =)\nDo you have a test vector / example to make sure it matches what I believe?\nNAME yeah, each includes the encoded public key ().\nExample, taken from the current test vector:\nLooks good to me. But yeah, ASN.1 is not the most concise nor readable format.\nASN1 matches my expectations as well.\n... we should make it clear, and provide test vectors to make sure it matches. The test vector could include public keys in PEM format and then print out the expected key ID."} {"_id":"q-en-ietf-wg-privacypass-base-drafts-fb9866f6b403698bd74e101e35b0f28392d78ea6487d81afdd1298208a4a066d","text":"Also update URL with correct links\nLooks reasonable. 
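On the key ID question above: assuming, as the linked implementation and test vectors suggest, that the key ID is a SHA-256 digest over the DER bytes carried inside the PEM armor, the computation can be sketched as follows (the PEM body here wraps arbitrary bytes purely to exercise the code path, not a real SubjectPublicKeyInfo):

```python
import base64
import hashlib
import textwrap

def key_id_from_pem(pem: str) -> str:
    """SHA-256 over the DER bytes inside the PEM armor (assumption: the
    key ID is a digest of the DER-encoded SubjectPublicKeyInfo)."""
    body = "".join(line for line in pem.splitlines()
                   if line and not line.startswith("-----"))
    der = base64.b64decode(body)
    return hashlib.sha256(der).hexdigest()

# Toy input: armor wrapping arbitrary bytes, just to exercise the path.
der = bytes(range(8))
pem = ("-----BEGIN PUBLIC KEY-----\n"
       + "\n".join(textwrap.wrap(base64.b64encode(der).decode(), 64))
       + "\n-----END PUBLIC KEY-----\n")
assert key_id_from_pem(pem) == hashlib.sha256(der).hexdigest()
```

A test vector of the kind proposed above would pair such a PEM input with the expected hex key ID so implementations can cross-check their ASN.1 handling.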
Once this lands we should see about hooking up the gh-pages generation and link to that."} {"_id":"q-en-jsep-61c069945f13de88cca302ae81a02e6349940706573c9a76adb934e16597d53f","text":"I cleaned up the language.\nlgtm\nCullen wants LS groups because that is what legacy gear understands. Magnus wants to use MSID, which carries this.\nDiscussed in DC: will test out the LS group concept, Jonathan will send text to the list and we will discuss. Concerns are that there is a potential for mismatch between what is signaled in LS groups and what is signaled in MSID. We also need to work out what to do in the absence of LS groups.\nbug 32 is prob dup of this\npinged Lennox\nGiven the discussion on CNAME, no one in the room at IETF90 objected to the idea that LS group was MTI.\nPlan here is to update the draft to say that when creating the SDP, everything in a PC has the same CNAME, and the things in the same MediaStream get put in an LS (lipsync) group. This will mean legacy stuff that supports LS will do the right thing, and legacy stuff that does not will simply try to sync everything.\nCullen to ping Harald to get his LG regarding the comment above (i.e. regarding its interaction with MSID)\nJust to add some clarity here... this is what I think we decided. All the RTP in a given PeerConnection will use the same CNAME. For each stream, if a stream has more than one track, then JSEP needs to create a lip sync group (see URL ) to put all the tracks in a given stream into the same lip sync group.\nYes. If no lipsync group from the remote side, everything is synchronized. If only some tracks are in an LS group, the rest are unsynchronized. (Basically, if you indicate you understand LS, we take what you give us, all synced.)\nMT, would you be willing to write some text here?\nWho has the token here?\nJust realized there could be a problem. Consider A, who does not want to sync its audio/video, and B, who wants its video to be synced. A offers a=group:LS with no mids. 
B has to reply with a subset of the proposed mids, by 5888 rules. Therefore B's a/v won't be synced. I don't see a good reason to require this, but 5888 is pretty clear. The identification-tags contained in this \"a=group\" line MUST be the same as those received in the offer, or a subset of them (zero identification-tags is a valid subset). When the identification-tags in the answer are a subset, the \"group\" value to be used in the session MUST be the one present in the answer.\nThere's an issue with LS when using bidirectional media sections. If media stream X (from A to B) uses media sections 1 and 2, and media stream Y (from B to A) uses media sections 2 and 3, what should be the lipsync groups? Are lipsync groups unidirectional or bidirectional?\nsee comment above. 5888 says bidi, but that is crazy talk.\nIndeed. Crazy. What is our plan of attack here then? Can we ask MMUSIC to fix this? Either by un-screwing-up a=group (which seems unlikely), by making a new grouping semantic, or by telling us that there is a better option.\nJSEP reserves the right to contradict existing standards where necessary. Generally, it seems like the need for symmetry in a=group is semantic-dependent, and so fixing it later in 5888 seems possible.\nSo you are suggesting that we define a=group:LS as declarative: it applies to the media that is sent by the peer that includes the grouping. I can live with that, though I suspect it will cause considerable heartburn. It certainly complicates the grouping model considerably.\nyes, essentially. I don't see any way to reconcile LS with reality otherwise.\nChanging the semantics of existing mechanisms by declaring that they are changed without actually changing them is a dangerous process. 
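The RFC 5888 rule quoted above (the answer's identification-tags must equal the offer's or be a subset of them, with zero tags a valid subset) can be expressed as a small check. Illustrative sketch only, not a full SDP parser:

```python
# Sketch of the RFC 5888 subset rule for a=group:LS tags.

def ls_mids(sdp: str):
    for line in sdp.splitlines():
        if line.startswith("a=group:LS"):
            return line.split()[1:]   # identification-tags after "a=group:LS"
    return None                        # no LS group present at all

def answer_group_valid(offer_sdp: str, answer_sdp: str) -> bool:
    offered, answered = ls_mids(offer_sdp), ls_mids(answer_sdp)
    if answered is None:
        return True                    # omitting the group is always allowed
    return set(answered) <= set(offered or [])

offer = "v=0\na=group:LS a1 v1\n"
assert answer_group_valid(offer, "v=0\na=group:LS a1\n")         # subset: ok
assert not answer_group_valid(offer, "v=0\na=group:LS a1 v2\n")  # new tag: not ok
assert answer_group_valid("v=0\na=group:LS\n", "v=0\n")          # empty group offered
```

This also makes the scenario above concrete: if A offers a=group:LS with no mids, the only valid subset B can answer with is empty, so B cannot request syncing of its own video this way.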
A simpler alternative is to drop media section reuse when LS groups are present: If a particular media section is a member of an LS group, or if a MediaStream requires its tracks to be part of an LS group, don't use it for a track in the opposite direction. The legacy case of one audio section, one video section that is used in both directions will then require that we somehow turn off LS group generation."} {"_id":"q-en-jsep-9b7ce3cd8b357642a35eda2b742807c4fbad1aabe8eb97f3afd1ff3b353fbd69","text":"This LGTM. Actually, I thought I had written something like this :)\nLGTM\nAlways remember to update both the offer and answer sections :)"} {"_id":"q-en-jsep-660019e992075f6eb93447308ce4978d990c0c1928a50392bd11a01d8f9ad709","text":"LGTM\nlgtm\nThe question has come up about what the default candidate should be set to during trickle ICE but after at least one candidate has been gathered, so at least in principle you could set c= to one of those candidates. See also: URL\nWant to pick a solution that works even if we do continuous trickle and ICE does not finish.\nConsensus on today's call is: Before ICE is completed, follow the 5245 guidance and pick the \"best\" candidate. Once ICE is completed, pick whatever you are actually using.\nIf we move to a mode where ICE never completes, perhaps use the active pair.\nIt turns out that this was pretty clear already. I have made it slightly clearer."} {"_id":"q-en-jsep-a6e4c08963a21919074f2d18f95f8fb8187b923325deb222049426dbbe44cd20","text":"and\nI tweaked this a bit, nothing major. LGTM.\nFor CT without AS, probably want to just tell the congestion control to stay below CT. 
For CT where AS is specified, do the same, but limit any streams with AS to the indicated value (in addition to total limit).\nThis should go into the handling-a-remote-description section."} {"_id":"q-en-jsep-eb458cd12f16843cef609badd6cdb895d18e4070c708cc066109b1cf6a561f77","text":"From meeting minutes \"SDP o= line increment [] While the proposed solution on the slide was accepted, it was noted that the proposal on the slide goes against what was agreed at last meeting. Emil Ivov also commented that Trickle ICE for SIP does exactly what is described on the slide. Jonathan Lennox said that we should make sure this is compatible with whatever is done in trickle ICE SIP stacks.\"\nThis algorithm becomes correct once we decide to not allow o= modification. Editorial comments follow.\nDiscussion in-room: 1) add example to make it clear that there is a separate counter (other than current o=) which is what increments. 2) change text about <session-version> to be more of a \"note that\" for an edge case, as opposed to normative text about a mainline case.\nUpdated this branch. PTAL\nLGTM\n2349: \"might differ\" is creepy. \"differs\" perhaps. 2359: \"N + 2\" -> must be greater than N + 1 This is basically OK modulo the above"} {"_id":"q-en-jsep-929e562d796423f27b6f9139254fea6460fce7403739535ab656d2faa47be586","text":"NAME NAME PTAL\nFrom Berlin: \"A clarification to that rule was found when an offer is received with some m= lines bundled and some not; you accept all m= lines that are possible to bundle with the first m= line. \"\nI think we'll also have to update section 5.3.1 (Initial Answers) where it lists conditions for rejecting an m= section. Also, do we need to make similar modifications for the \"balanced\" bundle policy? LG with nits"} {"_id":"q-en-jsep-360256d7752b72df660cf9806943cb09223bc7d8450f999244d95bb722c9a417","text":"Looks good to me\nJSEP says: S 4.1.1. 
\" The PeerConnection constructor allows the application to specify global parameters for the media session, such as the STUN/TURN servers and credentials to use when gathering candidates. The size of the ICE candidate pool can also be set, if desired; by default the candidate pool size is zero.\" The WebRTC spec says: \"At this point the ICE Agent does not know how many ICE components it needs (and hence the number of candidates to gather), but it can make a reasonable assumption such as 2. As the RTCPeerConnection object gets more information, the ICE Agent can adjust the number of components\"\nImagine 4 is more realistic.\nso default in not defined but their is way for the JS to to set the default for a PC\nDiscussed in DC: default will be unspecified and left up to browsers, there will need to be a setting for applications to override this default.\nw3c bug is URL"} {"_id":"q-en-jsep-d6e7df3189eaf130554b4510a94223c8c4f477eaf3f24e32a1555d1832f97fad","text":"This distinguishes between the direction which was last set via setDirection (which simply affects offer/answer generation) from the direction that was negotiated, which affects whether a transceiver can send/recv media. Addresses issue .\nAccording to WebRTC 1.0 Section 5.4 setDirection() does not take effect immediately: \"The setDirection method sets the direction of the RTCRtpTransceiver. Calls to setDirection() do not take effect immediately. Instead, future calls to createOffer and createAnswer mark the corresponding media description as sendrecv, sendonly, recvonly or inactive as defined in [JSEP] (section 5.2.2. and section 5.3.2.). Calling setDirection() sets the negotiation-needed flag.\" My interpretation of the above is that setDirection immediately sets the transceiver.direction attribute, but that the \"current direction\" (distinct from the direction attribute) is only changed by calls to setLocal/setRemoteDescription. 
I would like to suggest that JSEP indicate precisely when the \"current direction\" changes. My reading of JSEP is that this is determined from the currentLocalDescription and currentRemoteDescription. For example: The direction provided in the currentLocalDescription (e.g. the \"current direction\") is \"sendrecv\" and setDirection(\"recvonly\") is called. Since setDirection does not change the direction in currentLocalDescription, the RtpSender does not stop sending. createOffer() is called. A \"recvonly\" offer is generated. setLocalDescription(RecvOnlyOffer) is called. Now the direction in pendingLocalDescription changes to \"recvonly\", but the direction in currentLocalDescription is still \"sendrecv\", so the RtpSender does not stop sending. setRemoteDescription(SendOnlyAnswer) is called. Now the direction in currentLocalDescription is \"recvonly\", so the RtpSender will stop sending.\nBernard's description makes sense to me; currentLocal/Remote indicate the current direction state.\nFixed by"} {"_id":"q-en-jsep-14e7394713837b26bd8bf50e8af0f6e1e5de6cb2afafa56dbaabec2ef0c6d23f","text":"Originally raised by Magnus: Section 3.7: JSEP supports simulcast of a MediaStreamTrack, where multiple encodings of the source media can be transmitted within the context of a single m= section. I think this should be explcit that it is \"simulcast transmission of a MediaStreamTrack\"."} {"_id":"q-en-jsep-eb77a8f9ec37539a3644d8750c8141ed47f1af5e7bdab618eaa6525ce0e1f3d1","text":"Originally raised by Magnus: Section 5.1.1: R-2 [RFC5764] MUST be supported for signaling the UDP/TLS/RTP/SAVPF [RFC5764], TCP/DTLS/RTP/SAVPF [I-D.nandakumar-mmusic-proto-iana-registration], \"UDP/DTLS/ SCTP\" [I-D.ietf-mmusic-sctp-sdp], and \"TCP/DTLS/SCTP\" [I-D.ietf-mmusic-sctp-sdp] RTP profiles. I would note that [I-D.nandakumar-mmusic-proto-iana-registration] is actually published as RFC 7850. Section 5.1.1: R-12 [RFC5761] MUST be implemented to signal multiplexing of RTP and RTCP. 
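The five-step setDirection walk-through above can be modeled by treating directions as send/recv sets: the answer direction is the intersection of the locally wanted direction with the reverse of what the remote offered, and the sender stays active only while the current local direction includes "send". Hypothetical sketch, not the exact JSEP algorithm or the WebRTC API:

```python
# Directions as send/recv capability sets (illustrative model only).
DIRS = {"sendrecv": {"send", "recv"}, "sendonly": {"send"},
        "recvonly": {"recv"}, "inactive": set()}
NAMES = {frozenset({"send", "recv"}): "sendrecv", frozenset({"send"}): "sendonly",
         frozenset({"recv"}): "recvonly", frozenset(): "inactive"}

def reverse(d):
    # The remote's "send" is our "recv", and vice versa.
    return {{"send": "recv", "recv": "send"}[x] for x in d}

def negotiate_answer(remote_offer_dir, wanted_dir):
    return NAMES[frozenset(DIRS[wanted_dir] & reverse(DIRS[remote_offer_dir]))]

def sender_active(current_local_dir):
    return "send" in DIRS[current_local_dir]

# Steps 1-4 above: currentLocalDescription still says "sendrecv",
# so the RtpSender keeps sending even after setDirection("recvonly").
assert sender_active("sendrecv")
# Step 5: the applied answer makes the current local direction "recvonly".
assert negotiate_answer("sendonly", "recvonly") == "recvonly"
assert not sender_active("recvonly")
```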
This should include the updated as ref also."} {"_id":"q-en-jsep-647be66497dcae551dcdff54c4054b548a4642cc4453f6844dbcb613f0abc811","text":"Originally raised by Magnus: Section 5.2.1: o Session Information (\"i=\"), URI (\"u=\"), Email Address (\"e=\"), Phone Number (\"p=\"), Bandwidth (\"b=\"), Repeat Times (\"r=\"), and Time Zones (\"z=\") lines are not useful in this context and SHOULD NOT be included. Considering that b=CT is not useless, I am a bit surprised to see b= on this list. Yes, b=CT: is only useful if one would like to provide an upper total bandwidth limitation. So, maybe some discussion of the possible choice?\nThis is just a mistake in 5.2.1 - we cover how to deal with it later in section 5. Prob need to remove the mention of b= here."} {"_id":"q-en-jsep-fd2422f9dc124c3e9cceb884ccca4591302bcbf53f78f879a2bd59e5bcfd6eba","text":"Did we not decide that modifying that offer (or an answer created by createAnswer()) is not allowed?\nAgreed. This text should be culled."} {"_id":"q-en-jsep-c91b36cb24d7ff6dd0b037f1aea8cd9f3c1e4a301f6967ef26d4fe644da12c18","text":"The pool is only used in the first offer/answer exchange. After applying a local description, the pool size is frozen and changing the ICE servers will not result in new candidates being pooled. After an answer is applied, any leftover candidates are discarded. This only happens when applying an answer so that candidates aren't prematurely discarded for use cases that involve pranswer, or multiple offers.\nI can live with this as is. Want to read again if we end up making changes to address EKR comments.\nUpdated PR and its description based on our discussion."} {"_id":"q-en-jsep-0be0fddfbe3ed302a56a6c624aa27b0a8a18f715edb4d50bb438c206eb116a21","text":"I applied the following principles, which unfortunately are somewhat inconsistent with the current mux-exclusive draft. 
- You use a=rtcp-mux to negotiate mux.\n- a=rtcp-mux-only is used by the offerer to say \"I really mean it\".\nTo that end, offers contain a=rtcp-mux-only if the policy is require. Answers don't contain it because you don't need it to negotiate mux.\nThose principles are consistent with bundle-only, although I wrote the mux-only examples as if mux-only should always be included whenever a=rtcp-mux is present.\nNAME back at you\nNAME ping\nLG"} {"_id":"q-en-jsep-69fef9df1c3d94271fda0bf5e239ddd2f08df5779d5db6209d414c016bcc57c3","text":"attributes. As far as I can tell this can happen:\n- On re-offers\n- On any answer, because nothing requires the offerer to generate bundle tags in the same order as m= lines (even though the data channel m= line is last)\nSee: URL\nNAME PTAL\nSay you have a session where only a data channel is created. SDP looks like:\n[session boilerplate]\na=group:BUNDLE d1\nm=X application UDP/DTLS/SCTP webrtc-datachannel\na=mid:d1\n...\nNow, a media m= section is added (i.e. after the first o/a). SDP becomes:\n[session boilerplate]\na=group:BUNDLE d1 a1\nm=X application UDP/DTLS/SCTP webrtc-datachannel\na=mid:d1\na=ice-ufrag:ABCD\n...\nm=X audio UDP/TLS/RTP/SAVPF 0\na=mid:a1\n[no ice-ufrag or TRANSPORT attrs, since bundled]\nNow, where does a=rtcp-mux go? It can't go into because of the language that states Thoughts on how to resolve?\nCould generating a session description be split into two parts? First, generate media-level attributes for each \"m=\" section, then transport-level attributes for the BUNDLE group and each non-bundled \"m=\" section.\nThat would be my preference, i.e., jam it into the BUNDLE group m= section, regardless of media type, as that's conceptually simplest, and easily handled here in JSEP. Either way, I suspect this will require changes upstream in the bundle spec.\nSo ... is it really not allowed in the d1 section? Unknown attributes are allowed in the d1 section; they are just ignored if they have no meaning for the data channel. 
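The preference stated above (put "a=rtcp-mux" in the offerer BUNDLE-tag m= section, regardless of media type) can be sketched with a toy SDP rewriter. This is illustrative only; real SDP handling needs far more care than line matching:

```python
# Toy sketch: place "a=rtcp-mux" in the m= section of the BUNDLE-tag
# (the first mid listed in a=group:BUNDLE), whatever its media type.

def add_rtcp_mux_to_bundle_tag(sdp: str) -> str:
    lines = sdp.splitlines()
    tag = None
    for ln in lines:
        if ln.startswith("a=group:BUNDLE"):
            tag = ln.split()[1]          # first mid is the BUNDLE-tag
    out = []
    for ln in lines:
        out.append(ln)
        if tag is not None and ln == f"a=mid:{tag}":
            out.append("a=rtcp-mux")     # TRANSPORT attr lives in the tag section
    return "\n".join(out)

sdp = ("a=group:BUNDLE d1 a1\n"
       "m=application 9 UDP/DTLS/SCTP webrtc-datachannel\n"
       "a=mid:d1\n"
       "m=audio 9 UDP/TLS/RTP/SAVPF 0\n"
       "a=mid:a1\n")
result = add_rtcp_mux_to_bundle_tag(sdp)
assert "a=mid:d1\na=rtcp-mux" in result   # mux lands in the data channel section
assert result.count("a=rtcp-mux") == 1
```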
Agree this sounds like something that will hit bundle.\nCouldn't you do a=group:BUNDLE a1 d1? I would have thought this was prohibited, but is it really? URL\nI think that's legal, but that approach doesn't work in the general case, where \"m=\" section A has TRANSPORT attributes that only it uses, and section B has TRANSPORT attributes that only it uses. BUNDLE just needs to allow an \"m=\" section to contain attributes that it normally wouldn't contain if it wasn't bundled. (If this isn't already allowed.)\nNAME any action needed here?\nI think what we want in this scenario is to put \"a=rtcp-mux\" and \"a=rtcp-rsize\" in the data channel section. The simplest way would be to say: Which I guess could only happen in subsequent offers.\nLet's go with what NAME suggests, although this makes it a bundle problem.\nWhat would you say is needed on the BUNDLE side? Just clarification that the offerer BUNDLE-tag m= section should contain the union of all TRANSPORT and IDENTICAL attributes used by the BUNDLE group?\nTalked to the chairs, and the advice from the chairs on this and similar things that depend on mmusic outcomes: we put in JSEP now what our best guess is of how mmusic will end up, and publish JSEP. At the same time we log a bug to check if we need to make any changes to line up with what mmusic decides. We can take changes to line up with mmusic as Last Call comments later on.\n+1 with Taylor's proposal to put RTP-related things in the data m= section if it is first, and good with making this be BUNDLE's problem.\nNAME NAME I assume that you think that on initial offers this cannot happen because data comes last?\nNAME Right. I can't think of a scenario where it wouldn't go last in an initial offer.\nAgreed.\nOh, except that you aren't required to make the bundle group in order. So you can have:\nI don't think the question of whether you are allowed to place an RTP attribute in a non-RTP m= section is a pure BUNDLE issue. 
One needs to check what the specs defining those attributes say.\nevery SDP spec has to accept (and ignore) lines it does not understand ...\nI looked into the a=rtcp-mux and a=rtcp-rsize specs, and I didn't see anything that prohibited their inclusion into a non-media section. Generally, I think BUNDLE can define the rules for this sort of thing, without fear that it will somehow contradict some other spec.\nNAME NAME You may want to chime in on the current MMUSIC thread. I agree with you, but it seems to be going in the direction of \"you can't do this\".\n\nI looked into the a=rtcp-mux and a=rtcp-rsize specs, and I didn't see anything that prohibited their >inclusion into a non-media section. It may not be explicitly stated, but since the semantics is about RTP and RTCP I think it is implicit. Having said that, I personally have no problem with allowing the attributes in non-RTP m- sections. I just want to verify that there is no text that has to be modified somewhere, and/or whether there is some text that has to be added somewhere :)\nResolved by\nLooks good."} {"_id":"q-en-jsep-ad3b2d3197b9351ccfd977d5a321b7a7c3bbf695b796deb5de660e2fa793f386","text":"LGTM and if Justin wants to tie in the stuff from my comment a bit more that is great\nNAME PTAL\nseems like it's generally right,LGTM"} {"_id":"q-en-jsep-217b9503eb1016a42f77fe85409b6272f5e6445df17a55d0e1e696c313e60dc1","text":"I agree with justin here. my bad for missing that.\nThe 4th bullet on page 64 is ambiguous at \"this m= section\". It looks like this bullet should just be merged with the previous bullet.\nI'm not sure what the problem is here, but I'm not against merging it. 
NAME\nThe bullet point in question is the one that reads If this m= section was present in the previous answer then the state of RTP/RTCP multiplexing MUST match what was previously negotiated.\nLGTM"} {"_id":"q-en-jsep-5d17daba5ca8f56217be631b3426167c704089dd1134f0911d37e4c9d1664f38","text":"Fixed\nalso tagging NAME (who might be unhappy with 'default' which matches Chrome behaviour but not Firefox behaviour (it generates a uuid)\nping NAME NAME\nNAME NAME PTAL - Barring any objections this one ought to get merged.\nFolks, I think this one is pretty uncontroversial - planning to land this next week if there are no objections.\nTake this simplified SDP offer from a remote endpoint (in parts taken from : A single audio and video stream. No a=mid (there is quite some text taking care of that) but no a=msid line either. URL shows a live example. Both Chrome (without unified-plan) and Firefox will fire the ontrack event (or legacy onaddstream) with a single mediastream containing an audio and video track. This has been like that since... 2012-ish? Unified plan support in Chrome changes this behaviour: URL ontrack is fired with an empty set of media streams. onaddstream is not fired. This is quite a change in behaviour. And unexpected, I didn't expect any issues for this trivial case of a single audio/video track. As NAME webrtc-pc assumes/requires a=msid to be present and . In particular this says which means firing with no streams is technically correct. JSEP seems to assume it is present as well, but has this text about a=msid in Note the \"If present\". What if it is not present? The closest I could find in terms of guidance was this from the and its section about handling a=msid is supposed to be present in initial offer so we are talking about dealing with non-webrtc entities send an offer. 
Given that there is plenty of support for entities that do not support a=mid (an RFC published in 2010) in JSEP do we need backward compatibility for the case of a single audio/video track each but no msid? Interop between browsers is probably not affected, due to Chrome still interpreting and sending a=ssrc:123 msid:...\nnote that a proper webrtc client will include \"no streams\" as (maybe with a track) which allows differentiating between \"no streams because I do not use any\" and \"msid???\" Thanks NAME for another\naside: the interesting case you have trouble distinguishing between is \"no msid because I haven't started sending a track yet\" and \"no msid because I don't support msid\". It's one case where we still have trouble if RTP packets arrive ahead of signalling. But - yes, I think we are better off sticking with the 2012 behavior, default stream and all. We can handle the case above by changing the stream associations of the receiver if a new SDP (with msid) arrives after the RTP packets.\nthere used to be a a=msid-semantic line that would allow determining whether a party wants its offer to be treated assuming msid support. Browsers still send it :-)\nHow are non-signaled tracks supposed to be handled anyway? There's JSEP language about creating a track for it, but what about webrtc-pc? (If a track exists in the forest, and nobody is around to surface it to JavaScript, does it still exist?) \"Run the following steps for each media description in description\" means no ontrack will fire until we do have something signaled. If the track (and/or stream) only exists in some inaccessible way they're almost conceptual until the signaling does arrive. And when it does arrive, we know whether or not MediaStreams were signaled. So we should know what to do, same things as if we didn't receive early RTP packets? Does the distinction matter?
I love that we can tell the difference between \"no streams because I do not use any\" and \"no msid signaled\" whether or not \"a=msid:-\" was present. I had not thought about that! Unless I'm confused about something, I suggest: If is present, do what Chrome does: don't create any stream. If the line is completely missing, do what Firefox (or Chrome Plan B) does: create a MediaStream with a random id. Does that work for people?\nBy the way I thought this was a webrtc-pc issue when I replied above, so I was thinking about what to do in JavaScript rather than talking about any JSEP change.\nI suppose we need some JSEP language to handle the lack of any a=msid but i agree with what you propose above. We do have a=msid:- and we need to handle that in webrtc-pc too. Will file an issue there.\nAll webrtc-pc says is \"let msids be a list of the MSIDs that the media description indicates transceiver.[[Receiver]].[[ReceiverTrack]] is to be associated with. Otherwise, let msids be an empty list\" And says \"The special value \"-\" indicates \"no MediaStream\".\" So that already takes care of the case correctly, I claim. I agree. We need to solve what the media description (JSEP) says about this. Right now : \"If present, \"a=msid\" attributes MUST be parsed as specified in [I-D.ietf-mmusic-msid], Section 3.2, and their values stored.\" I think it needs to say what happens when it is not present. Something like: \"If no a=msid is present, then in place of \"msid-id\", create a single \"id\" unique to this end-point, if one hasn't already been created for this connection.\"\n\"if one hasn't already been created for this connection\"; so future tracks will share the same stream? There is always at most a single \"default stream\"?\nNAME says he observed this is how Firefox and Chrome work. URL\nSee URL\nLGTM\nreason I asked is that chrome generates a MediaStream with id \"default\" -- see URL while Firefox does not.
(I do not see any reason to change the legacy behaviour of chrome even though it seems a bit odd that the RNG always emits the right bytes for \"default\")"} {"_id":"q-en-jsep-bc4e685f31928193c12f43f0f238ca21d7cfb1a3b5cdfeb383b8a5e7e99e6f37","text":"Several small updates: Issue 15: local media state, local resources correct, removed comment. - Issue 16: updated wording to use 'calling this method', removed comments, - Issue 17: changed stopTransceiver to stop, removed comments, - Issue 18: changed 'directional attribute' to 'direction attribute', removed comments,\nLGTM once lands\n18) [rfced] Section 4.2.5: We see \"direction attribute\" but not \"directional attribute\" in RFC 3264. Will this text be clear to readers? (We ask because we also see \"A direction attribute, determined by applying the rules regarding the offered direction specified in [RFC3264], Section 6.1\" in Section 5.3.1 of this document.) Original: More specifically, it indicates the [RFC3264] directional attribute of the associated m= section in the last applied answer (including provisional answers), with \"send\" and \"recv\" directions reversed if it was a remote answer.\nThis should indeed be , as that's the terminology used by 3264.\nNAME please close this issue once you consider it resolved.\n17) [rfced] Section 4.2.2: Will \"stopTransceiver\" be clear to readers? We could not find this string elsewhere in this document, anywhere else in this cluster of documents, in any published RFC, or in google searches. Original: The stopped property indicates whether the transceiver has been stopped, either by a call to stopTransceiver or by applying an answer that rejects the associated m= section.\nThis should be , rather than .\nNAME please close this issue once you consider it resolved.\n16) [rfced] Section 4.1.16: Should \"The effects of this method call\" be \"The effects of calling this method,\" and should \"This call\" be \"Calling this method\"? 
Original: The effects of this method call depend on when it is invoked, and differ depending on which specific parameters are changed: ... This call may result in a change to the state of the ICE Agent.\nSGTM\nNAME please close this issue once you consider it resolved.\n15) [rfced] Section 4.1.10: Please confirm that \"local media state\" and \"local resources\" (as opposed to remote) are correct in the context of setRemoteDescription. (We ask because we see identical wording in Section 4.1.9 (\"setLocalDescription\").) Original: This API changes the local media state; among other things, it sets up local resources for sending and encoding media.\nyes, this is correct\nNAME please close this issue once you consider it resolved.\n12) [rfced] Section 4.1.8.1: We had trouble with this sentence. If neither suggestion below is correct, please clarify. Original: While this could also be done with a \"sendonly\" pranswer, followed by a \"sendrecv\" answer, the initial pranswer leaves the offer-answer exchange open, which means that the caller cannot send an updated offer during this time. Suggestion : While this could also be done with a \"sendonly\" pranswer, if followed by a \"sendrecv\" answer the initial pranswer leaves the offer/answer exchange open, which means that the caller cannot send an updated offer during this time. Suggestion : While this could also be done with a \"sendonly\" pranswer followed by a \"sendrecv\" answer, the initial pranswer leaves the offer/answer exchange open, which means that the caller cannot send an updated offer during this time.\nsuggestion seems right"} {"_id":"q-en-load-balancers-a6c63ae1a09494173177151658505ae8b53efd83bef243e22f2c26d2cf8af76e","text":"This is in response to a post NAME made on .\nThanks for the PR! So as an interoperability standard, if the retry service encodes \"4000\", what does the server do with that? 
Do we need to add a config parameter for the time_t that serves as time 0?\nTimestamps should be absolute, not relative, in which case I'm not sure you need a 0. There's nothing special about midnight 1 January 1970.\nAh, so you're just suggesting declaring a later date to begin the epoch? Makes sense\nWhy do we need to have a sential time? If it's a precision vs. range issue then shifting the epoch makes sense.\nSorry, I don't know what \"sential time\" is. What are you proposing?\nWhat I'm proposing is that we not treat any times differently. It's not clear to me why 4000 needed special treatment.\n4000 is just an example. The intent of using the epoch was to have common agreement on when tokens expire without any QUIC-LB-specific coordination. The current draft uses 1970 as a timebase, which absolutely makes integers longer than it needs to be. IIUC the PR doesn't have any baseline so it's unclear how the server and retry service are supposed to interpret each other's expiration times. As we're probably speaking past each other: what, precisely, does the integer in this field indicate?\nIt's the end of the validity period for the token in some convenient timebase.\nAlright, we agree on that. So this PR is being a little less precise on synchronization, which is fine, but isn't addressing the fact that we're counting from 1970. Fair enough. Dear QUIC WG, First a moment of unforgivable pedantry. Universal time is unfortunately not the right name for UTC, which is slightly different. It could also mean UT1 or some other variations It's also not clear what the future of computer timekeeping is and I know several places don't synchronize to UTC internally. I think the best way to handle this is to say something like \"synchronized\" and leave open as to exactly how that is done. I agree with the comments that there are too many options and should be fewer. Other then that I didn't see any problems but I'm probably too sleepy right now to spot them. 
Sincerely, Watson Ladd Astra mortemque praestare gradatim"} {"_id":"q-en-load-balancers-9d417a7eb648026de29c0c626c4094e77f2f51daa74d69affed2eb0c78db0f4e","text":"Definitely an edge case, but the LB will spray stateless resets everywhere because the CID is random. If the LB tracks fourtuples, though, it shouldn't be a problem.\nMaybe make this part of a thing about stateless LBs.\nSo, what's the plan ?\nNAME raises an interesting point about stateless LBs in ."} {"_id":"q-en-load-balancers-1961558cc4d476eabca90147ea740de7cf16882c4b5084a6622ff4d116f7c270","text":"After thinking it through, various XOR schemes were extremely vulnerable to manipulation, so I went with a brute force approach.\nThe server now needs to include this field. For the no-shared state service, we just need language that the Retry service needs to encode enough information validate the packet DCID as well and drop if it fails validation. If it does, the server MUST use the packet DCID in the retrysourceconnection_id TP. For the shared state service, the retry source connection ID is going to have to be in the token. We might be able to compress this by xoring some fields; I'll think about it."} {"_id":"q-en-load-balancers-899c424042421151ce30d05eb862440423cf682903436ccf6c525e1774e8bf7a","text":"I will probably delete the Stream Cipher algorithm entirely, but consider this an intermediate step.\nMerging to enable the next round of edits"} {"_id":"q-en-load-balancers-cceade5f6a96cd368704757a86428f03eb88903ed1f0e6060dfcf8c06a340628","text":"by popular demand in .\nAs discussed at IETF 108. Split from .\nThanks for filing this. I agree that this was the feedback, but I'll take it from the list to be sure.\nRemoving this seems fine to me.\nYes, do it.\nSeems fine.\nDo it.\nDo it.\nNick Harper concurred on the list\nYes, that works. 
Minor comment."} {"_id":"q-en-lxnm-5b6f81cfdb882a63b58a54f1d11990ce31f7a5aaf5496231969a8a85313b3039","text":"Adding a description to signaling options to improve readability. I have pending to update L2TP after the other issue is solved URL\nThanks, Victor. Does the text reflect the latest version of signaling/discovery split?\nleaf l2tp-type { type identityref { base t-ldp-pwe-type; } description \"T-LDP PWE type.\"; }\nThe type should be taken from this registry: URL\nA new IANA-maintained module is created to fix this issue."} {"_id":"q-en-manifest-spec-ec27f01dcb4d81f13a96832401402d85a7dfadad432c245ba924981cdd246298","text":"adds a missing digest parameter corrects CDDL for suit-text removes dangling\nI have discovered this because the EAT build process pulls the SUIT CDDL in for validation using the cddl tool.\nNAME posted to the list the following: I think I found a little typo in the draft-ietf-suit-manifest-14 ... At the \"Appendix A. A. Full CDDL\" of the \"SUITSeverableMembersChoice\", the last two members are pointing to \"SUITCommand_Sequence\" are probably typos as following,\nIn draft-ietf-suit-manifest-04 In the example manifest image-digest parameter is set for component 0 in common sequence. The load sequence sets component-index to 1 and after directive-copy it specifies image-match condition. However, the image-digest parameter has not been set for component-index 1 causing the condition to fail as . Is it expected that parameters are implicitly copied from source component when copy-directive is executed? Should the image-digest be also specified for component-index 1 or perhaps for all components? Another observation in the example is that vendor-id and class-id parameters and corrsponding condition checks are specified for component 0, which seems to be the \"external storage\". Wouldn't it be better to specify the VID and CID globally for the entire device or for the component 1, which seems to be the one running the image? 
/ common / 3:h'a2024782814100814101045856880c0014a40150fa6b4a5 3d5ad5fdfbe9de663e4d41ffe02501492af1425695e48bf429b2d51f2ab45038202582 000112233445566778899aabbccddeeff0123456789abcdeffedcba98765432100e198 7d001f602f6' / { / components / 2:h'82814100814101' / [ [h'00'] , [h'01'] ] /, / common-sequence / 4:h'880c0014a40150fa6b4a53d5ad5fdfbe9d e663e4d41ffe02501492af1425695e48bf429b2d51f2ab450382025820001122334455 66778899aabbccddeeff0123456789abcdeffedcba98765432100e1987d001f602f6' / [ / directive-set-component-index / 12,0 , / directive-override-parameters / 20,{ / vendor-id / 1:h'fa6b4a53d5ad5fdfbe9de663e4d41ffe' / fa6b4a53-d5ad-5fdf- be9d-e663e4d41ffe /, / class-id / 2:h'1492af1425695e48bf429b2d51f2ab45' / 1492af14-2569-5e48-bf42-9b2d51f2ab45 /, / image-digest / 3:[ / algorithm-id / 2 / sha256 /, / digest-bytes / h'00112233445566778899aabbccddeeff0123456789abcdeffedcba9876543210' ], / image-size / 14:34768, } , / condition-vendor-identifier / 1,F6 / nil / , / condition-class-identifier / 2,F6 / nil / ] /, } /, (...) / load / 11:h'880c0113a1160016f603f6' / [ / directive-set-component-index / 12,1 , / directive-set-parameters / 19,{ / source-component / 22:0 / [h'00'] /, } , / directive-copy / 22,F6 / nil / , / condition-image-match / 3,F6 / nil / ] /,"} {"_id":"q-en-mdns-ice-candidates-3eefd3b9871dfc894d55ee70f788535fae750aae9a3a30da2e810e1d7c906fa6","text":"Let's remove this sentence and add more information when we will have this additional experimental data\nThis area probably needs more experimentation in general. For instance, it is unclear how much we could partition such IPv6 addresses by origin like we ask for mDNS names. Given we already talk about this approach in the privacy section, these sentences do not feel mandatory to keep. No strong feelings either way though.\nI looked at the current text more closely and I see what you are saying, especially regarding different origins. 
We should probably talk about the separation of origins or browsing contexts more in the introduction though. Currently it only refers to the fingerprinting threat, which could be construed to simply mean a long-term identifier.\nI don't think we needed to nuke this whole section, just the future version of the document sentence."} {"_id":"q-en-mediatypes-e629ae3351752c24d479ea3dc313747ff14f535621de14359ac7ce57ec25b388","text":"introduce numbered sections for YAML NAME since each media type is defined in its IANA consideration, would it be ok for you if we just add a section for YAML? Feel free to PR with another proposal.\nNAME could you PTAL ? :)\nto be honest, i prefer definitions to have URIs and not just registrations. but if we have URIs, that's already a lot better than not having URIs! thanks!\nit's maybe just OCD (and because it's very useful for URL to have linkable references) but i think it's nice to have numbered sections for everything that's being defined in the spec. in this case, i suggest to have a numbered section for each of the four media types, and to have a brief explanation of what it identifies. that way, each definition is self-contained and can be easily referenced when necessary. NAME NAME"} {"_id":"q-en-mls-architecture-5179c1163a32eea31c8037a3fd95516ec08246a5ab04a6f4a222ac213d763500","text":"WGLC comments on Operation Requirements section. Added several overlooked requirements\nDiscussed on 12/8 interim, and folks seemed fine with this. I.e., it's ready to merge."} {"_id":"q-en-mls-architecture-a28882e8ca395983f1f0ad87afdd3098b8078c24f60514cb761cbbe5f047d016","text":"We need an IANA considerations section even if we make no requests of them. It's just part of the process and the section can be removed during the final publication phase."} {"_id":"q-en-mls-architecture-0dda54e89bd70dbe37f52dae7eaf39613f7d62df9303204d7e519c31dcbcd7b3","text":"This PR adds a short paragraph referencing the various academic works that have gone into MLS.
The references to the various works still need to be cleaned up, which I will do once the list is complete. In particular, each reference needs to be checked for correctness (eprint version vs. published version) and then included as an informative reference in the yaml header of the document.\nIs there another RFC you can point to that links academic papers on the protocol? I'd expect TLS 1.3 to be a good example of this, but I don't see them do this. The references should be cited properly if we do include it (ie, added as citations at the top of the file)\nThe references are in the appendices. For instance URL The reader should refer to the following references for analysis of the TLS handshake: [DFGS15], [CHSV16], [DFGS16], [KW16], [Kraw16], [FGSW16], [LXZFH16], [FG17], and [BBK17].\nTLS 1.3 as cited by NAME was indeed the inspiration for this. The idea was to just have a small paragraph to point the reader in the right direction and then have the full bibliographic entry in the \"Informative References\" section. As noted above, I intend to properly format all the references once we agree on what references to include, as formatting them into the right yaml is a bit of a pain.\nThanks Konrad!"} {"_id":"q-en-mls-protocol-166a72c4229ad77ca18a37855fcd1e45a78786a3d6ee8c483fc567000ba63a41","text":"Adding this check would be the first step in recognizing that an Update operation may have maliciously excluded a person from the group.\nThis would go better in the \"Ratchet Tree Updates\" section, right after \"Derive secret values for ancestors of that node using the KDF keyed with the decrypted secret\". That would align better with where in the code you want the checks to be. Other than that, LGTM.\nFixed\nThanks Michael !\nHere's the Weekly Digest for : - - Last week 4 issues were created. Of these, 1 issues have been closed and 3 issues are still open. :greenheart: , by :greenheart: , by :greenheart: , by :heart: , by :speaker: , by It received 4 comments.
- - Last week, 9 pull requests were created, updated or merged. Last week, 8 pull requests were updated. :yellowheart: , by :yellowheart: , by :yellowheart: , by :yellowheart: , by :yellowheart: , by :yellowheart: , by :yellowheart: , by :yellowheart: , by Last week, 1 pull request was merged. :purpleheart: , by - - Last week there were 4 commits. :hammerandwrench: by :hammerandwrench: by :hammerandwrench: by :hammerandwrench: by - - Last week there was 1 contributor. :bustinsilhouette: - - Last week there was 1 stargazer. :star: You are the star! :star2: - - Last week there were no releases. - - That's all for last week, please :eyes: Watch and :star: Star the repository to receive next weekly updates. :smiley: You can also ."} {"_id":"q-en-mls-protocol-6e765a7cfe90668bd3353c9fc958f5da23854bdfc592c41cfaad2f8fe50d36cc","text":"This is something we've been meaning to do for a while, and just haven't gotten around to. The Init message directly creates a group with a specified member list. The cost of group creation is O(N) DH operations for the sender, and O(1) for receivers.\nThis now has an OPEN ISSUE regarding the fact that we'll need to add PKE encryption for that message, as agreed out of band, let's merge this and add encryption in the next iteration.\nWe have a long-standing OPEN ISSUE on creating a group with a single Init message instead of N Add messages, which would change the work of initialization from O(N log N) to O(N).
I would propose roughly the following: Then the processing would be: Find the welcome that's for you Initialize a one-member tree from the Welcome Use the UserInitKeys to populate the leaves of the tree Use the DirectPath to populate the creator's direct path (as in Update)\nFixed by URL\nSimilarly to the for a new member, this is a PKE operation: the encryption of this message is missing from the description. This message needs to be encrypted to each client otherwise it leaks all public keys of the tree which would otherwise be protected under the common framing. A secondary question appears as a consequence. There is static key reuse for the leaves until they update, we designed HPKE so that it should be fine but I would still add an [OPEN ISSUE] for it."} {"_id":"q-en-mls-protocol-0770df6073d55cd6e3490c9d9165f593ef12812cde0decbc2c7662a78e631d4e","text":"It seems like there's a major thing missing here, namely how the sender of a Commit transmits the new node signatures to the other members of the group. We need something like that before this lands. The most obvious approach would be to add a field to DirectPathNode, as you have with RatchetNode.\nIt would also be handy to include in the intermediate node an indication of which leaf signed it. Otherwise you have to do \"trial verifications\" (cf. trial decryption), which is expensive.\nWas this addressed?
it's mentioned that the signature should be transmitted in \"Synchronizing Views of the Tree\" but there's no signature in the transmitted DirectPathNode\nNAME changes the strategy for signing the tree, everything should be sorted out there.\nHere's the Weekly Digest for : - - Last week 8 issues were created. Of these, 0 issues have been closed and 8 issues are still open. :greenheart: , by :greenheart: , by :greenheart: , by :greenheart: , by :greenheart: , by :greenheart: , by :greenheart: , by :greenheart: , by :speaker: , by It received 1 comments. - - Last week, 6 pull requests were created, updated or merged. Last week, 1 pull request was opened. :greenheart: , by Last week, 3 pull requests were updated. :yellowheart: , by :yellowheart: , by :yellowheart: , by Last week, 2 pull requests were merged. :purpleheart: , by :purpleheart: , by - - Last week there were 10 commits. :hammerandwrench: by :hammerandwrench: by :hammerandwrench: by :hammerandwrench: by :hammerandwrench: by :hammerandwrench: by :hammerandwrench: by :hammerandwrench: by :hammerandwrench: by :hammerandwrench: by - - Last week there were 2 contributors. :bustinsilhouette: :bustin_silhouette: - - Last week there was 1 stargazer. :star: You are the star! :star2: - - Last week there were no releases. - - That's all for last week, please :eyes: Watch and :star: Star the repository to receive next weekly updates. :smiley: You can also .\nHere's the Weekly Digest for : - - Last week 4 issues were created. Of these, 0 issues have been closed and 4 issues are still open. :greenheart: , by :greenheart: , by :greenheart: , by :greenheart: , by - - Last week, 14 pull requests were created, updated or merged. Last week, 8 pull requests were updated. :yellowheart: , by :yellowheart: , by :yellowheart: , by :yellowheart: , by :yellowheart: , by :yellowheart: , by :yellowheart: , by :yellowheart: , by Last week, 6 pull requests were merged. 
:purpleheart: , by :purpleheart: , by :purpleheart: , by :purpleheart: , by :purpleheart: , by :purpleheart: , by - - Last week there were 10 commits. :hammerandwrench: by :hammerandwrench: by :hammerandwrench: by :hammerandwrench: by :hammerandwrench: by :hammerandwrench: by :hammerandwrench: by :hammerandwrench: by :hammerandwrench: by :hammerandwrench: by - - Last week there were 3 contributors. :bustinsilhouette: :bustinsilhouette: :bustinsilhouette: - - Last week there were no stargazers. - - Last week there were no releases. - - That's all for last week, please :eyes: Watch and :star: Star the repository to receive next weekly updates. :smiley: You can also .\nHere's the Weekly Digest for : - - Last week 11 issues were created. Of these, 1 issues have been closed and 10 issues are still open. :greenheart: , by :greenheart: , by :greenheart: , by :greenheart: , by :greenheart: , by :greenheart: , by :greenheart: , by :greenheart: , by :greenheart: , by :greenheart: , by :heart: , by :speaker: , by It received 2 comments. - - Last week, 11 pull requests were created, updated or merged. Last week, 6 pull requests were opened. :greenheart: , by :greenheart: , by :greenheart: , by :greenheart: , by :greenheart: , by :greenheart: , by Last week, 4 pull requests were updated. :yellowheart: , by :yellowheart: , by :yellowheart: , by :yellowheart: , by Last week, 1 pull request was merged. :purpleheart: , by - - Last week there were 2 commits. :hammerandwrench: by :hammerandwrench: by - - Last week there were 2 contributors. :bustinsilhouette: :bustin_silhouette: - - Last week there were 2 stagazers. :star: :star: You all are the stars! :star2: - - Last week there were no releases. - - That's all for last week, please :eyes: Watch and :star: Star the repository to receive next weekly updates. 
"} {"_id":"q-en-mls-protocol-71f5a3575723a39850865f314dbfde2234aec87f952260d69d13043027e6ffbf","text":"It seems like we have a few separable issues here:\n- Whether we require a single signature scheme for the whole group\n- Whether signature schemes are included in the ciphersuite\n- Whether we have an MTI ciphersuite / signature algorithm\n- What new ciphersuites should be defined, and what algorithms should go in them\nLet's discuss this at the interim and maybe break this PR into a few bits.\nNote that Specification Required registries imply that there is a designated expert pool. I will add some text to create one. The text is very similar to the TLS DE pool text from with some tweaks that, hopefully, address the misunderstandings uncovered during non-standards-track registry requests. Note the process is (assuming that the IESG eventually approves this draft): when the IESG approves this draft, the responsible AD assigns some DEs (like 2-3 suggested by the WG chairs). These DEs are in charge of all registrations regardless of whether the draft comes from the WG or elsewhere, as all requests go to the MLS DEs mailing list.
This seems to accurately reflect the consensus we had at the last interim, and there's nothing I can't live with otherwise. I will probably file a follow-up to convert the Derive-Key-Pair operations to use HKDF directly, as discussed, avoiding the need for intermediate values or truncation of SHA-512."} {"_id":"q-en-mls-protocol-ef143ef6b440d2ab9d0c3d1baf5aba9e5d67dc469dc472111b6db889c669d26d","text":"You can also strike this sentence further down: \" In particular, for the leaf node, there are no encrypted secrets, since a leaf node has no children.\"\nNote that this severs the connection between the HPKE key in the leaf and the parent of the leaf. Whereas before the leaf HPKE key was derived from a path secret and the parent HPKE key from the next path secret, now the parent path secret is set directly and the leaf HPKE key is set completely independently, with no dependency on a path secret. So we're losing a bit of internal structure in the tree.
But this seems OK because that structure was never observable to the group -- only the holder of the leaf knows the leaf path secret, so nobody else could verify that the leaf was related to its parent."} {"_id":"q-en-mls-protocol-c6032d342710a51b3e7f0758bf1a67e75054d589519c0fd71dcb0f8ca70d0aa1","text":"This PR adapts the Welcome logic so that the new joiners get their path secrets directly, rather than from a DirectPath. This cleans things up in a few ways. The new joiners no longer have to have the context for the previous epoch (which was used to encrypt the DirectPath). And we can get away with one HPKE operation per new joiner instead of two (one for the EncryptedKeyPackage and one for the DirectPath). The major cost is that there's a bit more complexity in the TreeKEM. Instead of just producing and consuming DirectPaths, you now need to know which path secrets go with which nodes. You also need tree math to identify the common ancestor of two leaves, but there's a simple closed-form formula for that, expressed in the Python code here.\nNAME - Curious if this seems implementable to you. Clearly it's possible, since I did it in Go :) But it also needs to make sense to other people."} {"_id":"q-en-mls-protocol-6ac0704a7f25241b4dbdd498b98090544e68acf68315e25178d5e3b01fee46f9","text":"The first commit is editorial. I removed all the DH terminology I could find and replaced it with KEM terminology. The second commit is related to URL It would massively reduce complexity of implementations if people were able to rely on HPKE to do all private key parsing, key validation, public key computation, etc. Currently, all the information in the deleted CFRG and NIST sections is either redundant (performed by HPKE) or outdated and performed correctly by HPKE (e.g., old P-256 DH secret representation). Instead of having all this duplicated, it would be easier to keep it in one place. An issue with the new definition of : Repeatedly calling can be computationally expensive. 
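The closed-form common-ancestor computation referenced in the Welcome/TreeKEM discussion above can be sketched in Python roughly as follows (a sketch, not the PR's exact code; it assumes the array-based, in-order node numbering used by MLS tree math, where leaf i sits at node index 2*i and a node's level equals the number of trailing one bits in its index):

```python
# Array-based MLS tree math: leaf i sits at node index 2*i, and a node's
# level in the tree equals the number of trailing one bits in its index.
def level(x):
    k = 0
    while (x >> k) & 1 == 1:
        k += 1
    return k

def common_ancestor_direct(x, y):
    # If one node is an ancestor of the other, it is the common ancestor.
    lx, ly = level(x) + 1, level(y) + 1
    if lx <= ly and (x >> ly) == (y >> ly):
        return y
    if ly <= lx and (x >> lx) == (y >> lx):
        return x

    # Otherwise shift both nodes up until they agree, then rebuild the
    # index of the subtree root at which they met.
    k = 0
    while x != y:
        x, y = x >> 1, y >> 1
        k += 1
    return (x << k) + (1 << (k - 1)) - 1
```

For a four-leaf tree (nodes 0-6 with root at node 3), the common ancestor of leaf nodes 0 and 4 is node 3, and that of leaf nodes 4 and 6 is node 5.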
Each call requires computing the group hash (which, okay, it could be cached) and serializing a struct. It would be nice to do something that's cheaper but still ties context into the secret key generation.\nUpdate: This now uses HPKE's function to do all the heavy lifting"} {"_id":"q-en-mls-protocol-e384eecaf0358bebd2d8057a72b468d3947f9b0550363e1191f592aab51539c9","text":"A couple of minor editorial changes to the current draft, namely:\n- There was a missing grave accent in one of the paragraphs of the Commit section causing some formatting issues. Before: ! After: !\n- Fix typo in the Pre-Shared Keys section: [...], it does not necessarily provide the same [...] guarantees ~than~ as a Commit message.\n- Resolve remaining reference to prior_epoch. In the Sequencing section there was a reference to a field that was introduced in 2fa6ed3c6afc7d308905255b4cabe154a48d1e4b. However, in 69e12cd61378aae4a81199c28a16e2802666b6e3, it was removed from the struct where it was added. That entire struct was then removed in 27c9c28f9634a06a9b7921ea9c15e276f8c9ff6f, after which the trail runs cold. So I wasn't able to figure out exactly what it must be based on what it originally was. Hence, I replaced it with something that makes sense in context. I based the new text on the following line found in the (current) Commit section:\n- Rewrite the paragraph on the new_member SenderType. I found the phrase \" In such cases\" after the sentence \"Proposals with types other than Add MUST NOT be sent with this sender type\" slightly confusing and awkward. To improve this I rewrote the paragraph.\nCouple of nits on your nits"} {"_id":"q-en-mls-protocol-73f63927eeee2f7ca32b4325672bcab6716b677aadd4e760b21171eb33732dc5","text":"This is a simple rename of to to be more descriptive of the action being performed by the message.
I'll open a separate PR after this PR is merged with updates to the spec where it uses when instead it should say (since not every actually ratchets forward key material, as I understand). Some examples of where these clarifications should happen are:\n- Line 1203 should instead say \"each MLS UpdatePath message\"\n- The sections \"Ratchet Tree Evolution\" and \"Synchronizing Views of the Tree\" should describe the process of generating the key material for an message\ncc NAME\nDirectPath corresponds to the graph theory term \"direct path\" of a leaf with the root\n\"Direct path\" is still a thing, and the object in question here operates along a direct path, but I think \"UpdatePath\" indicates more clearly what it's doing."} {"_id":"q-en-mls-protocol-216879bd2ab6f3fb8e088cd2c27583bf6fa5be2aa527dc60718ef8abc17e0894","text":"Rotating keys is important and in some cases, Signature Keys are not going to be very \"long-term\". Instead they're going to be rotated periodically. I thus propose we simply remove the assumption that Signature Keys are of a \"long-term\" nature (or otherwise \"long-lived\"). The second change in this PR is that I added a few words indicating explicitly that the identity must not change when updating a KeyPackage."} {"_id":"q-en-mls-protocol-30f896e650c352998f4fa43661c99781f130867d028d9585447d67ed367fc00c","text":"The unsigned PublicGroupState created by leads to an attack on the new joiner. The sender of a PublicGroupState can associate any HPKE it wants with the group, and thus learn the without being a member of the group. (The UpdatePath and the resulting partly defend against this attack, though they are subverted if the attacker has compromised any single key used in the Update Path.)
To guard against this attack, this PR adds a signature over the HPKE public key in the PublicGroupState.\nIn light of the feedback on the mailing list from NAME re: deniability, I have upgraded this PR to sign the whole PublicGroupState.\nOutstanding issue: One detail that is not fully covered by is the strategy new joiners should apply to determine who the latest committer is and make sure that both the and the last Commit are signed by the same member.\nThe last node to send an UpdatePath is the last one to set the root. So you can follow parent_hash to find which leaf it is. But like I said on the IETF meeting call, I don't think we should actually require that PublicGroupState be signed by the last updater. There are possible operational/sync issues if the PublicGroupState isn't fate-shared with the Commit, but that's a practical issue, not a security issue. And in any case, it's not a breaking change, so we can fix it post-draft-11 if we decide we need to.\nPlease fix the link on the repo page to display diffs between the current editor's draft and the WG draft. Now, it shows PRs comparing back to the -00 version.\nNAME Any chance you could have a look at this when you have some time? :)\nYeah, I'm gonna fix the CI and I can take a look at this as well. I tried to get it done before the interim but, well, excuses excuses\nAhah... ;) No problem!! Thanks!\nNAME -- last call on this, or I'm just going to close it\nSorry. As you will have guessed, this fell through the cracks. I'll try this week to fix it and close it out if I don't get to it this weekend :/"} {"_id":"q-en-mls-protocol-e43a9d2badbf0c4e63955688b49141402cb607cabfb560d50be22d68fe2feec8","text":"In preparation for feature-freeze / post-first-WGLC living, this PR removes a few issues that have been discussed and pretty much resolved.
It leaves a few OPEN ISSUES that could benefit from analysis during the feature freeze."} {"_id":"q-en-mls-protocol-19a0189a526f87d237a1ff89d73a5e1e486a41317cd44f9e7b6b608991ead235","text":"Discussed at 20210526 Interim. Merge.\nGood point about allowing PSKs in a partial Commit!"} {"_id":"q-en-mls-protocol-b0b4fa264be90a07469d69d8243f78fef87f8341cbb9801a6d703daad5240318","text":"draft-08 is the latest HPKE draft, which should be stable. It is also what's being used for interop testing right now.\nDiscussed 20210526 Interim: Merge."} {"_id":"q-en-mls-protocol-0e8b5fd14795f3f8f89ea40025f245d3dd43f9b5f53994689b1fab6977c47a08","text":"NAME NAME r?\nDiscussed at 20211004 interim, remove resynch with a single op (external join + remove in one op). NAME research why resynch was dropped.\nA good point was raised by Jonathon Hoyland during the MLS IETF 109 meeting regarding possible concerns in using external commits for resync, particularly in the case of Alice adding/removing herself. Richard noted that this is a feature in the case that Alice is no longer synchronized with the group and therefore can use an external commit to add herself back in, removing the previous version. As opposed to any newcomer joining with an external commit, the case of Alice re-joining presents a potential security issue. Namely, as currently specified (in my reading of the draft), an existing group member, Bob, has no means to distinguish between the following cases: 1) Alice needs to resync and therefore performs an external commit and removes her prior version. 2) Alice’s signature keys are compromised (it is not necessary for the adversary to compromise any group state). The adversary performs an external commit in Alice’s name, and then removes her prior version and impersonates her to the group. 
One might hope that Alice notices that she is removed and communicates this to the group members OOB, but it is also possible that she assumes some other reason for the removal, is offline, or simply is not active enough to take action for a fairly long compromise window. Even if she tries to use an external commit to get back into the group and then removes the adversary-as-Alice, there is no means for other group members to distinguish the real Alice from the adversary-as-Alice and the process could be circular (until new valid identity keys are issued). While a newcomer is a fresh source to be trusted or not, Alice has been “healing” along with the group and the above option (2) allows the adversary to bypass all of that. The source of the problem is that when Alice re-syncs, she is not providing any validation of being the same/previous identity, so it is easy for other group members to accept that nothing more than a resync has taken place. Thus, a fairly straightforward solution is to require PSK use in cases where an external commit is used for resync. By enabling a PSK derived from a previous epoch during which Alice was part of the group to be injected with the external commit, Alice provides some proof of prior group membership and we avoid the total reset. This is not quite PCS in that the attacker is active following compromise, but it is a linked case. As such it is important that a conscious decision is made regarding this (either as a slight change before the feature-freeze in line with the other changes that the editors have proposed, or as a working group decision to close the issue as out-of-scope).\nJust to tie this in with what has been discussed on the mailing list: will give members a way to make the distinction mentioned above.\nNAME - I don't think that's actually the case; it would only allow the members a way to recognize that Alice is replacing herself.
I think the right answer here is what Britta suggests, plus a little: We should enumerate a few specific constraints on the proposals that can be carried inline in an external Commit:\n- There MUST be an Add proposal for the signer\n- There MUST be an ExternalInit proposal\n- There MUST NOT be any Update proposals\n- If a Remove proposal is present, then:\n  - The identity of the removed node MUST be the same as the identity in the Add KeyPackage\n  - There MUST be a PSK proposal of type , referencing an earlier epoch of this group\nIn other words, you can commit things that existing members sent (by reference), but the joiner-initiated proposals must only (a) add the joiner, and (b) optionally remove the old instance of the joiner, with the PSK assurance NAME suggests.\nComing back to this after a few weeks, I'm actually not sure this is a problem worth solving. The attack scenario (2) exists whether or not a member can resync without proving prior membership. As long as (a) the group allows external Commits, and (b) the group does not require a new appearance of an existing identity to present a PSK from an earlier epoch, then the attacker can still do the attack by first joining, and then removing the old Alice. The only difference in the case where the external commit also does the resync is that the two happen together. It also doesn't seem unreasonable to regard the ability to resync from loss of all state except the signature key as a feature, not a bug. Of course, that position would entail accepting that compromise of signature keys would be sufficient to impersonate the user, but that doesn't seem all that surprising, at least to me. So this seems like a question of group policy to me. Either the group requires ReInit PSKs on resync (and you can't recover from all-but-signature-key state loss), or it doesn't (and signature keys are the last resort). This can be documented in the Security Considerations, in a new PR.
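The inline-proposal constraints enumerated earlier in this thread could be checked mechanically; here is a rough Python sketch (the Proposal objects and their field names are assumptions for illustration, not the spec's wire format):

```python
def validate_external_commit_proposals(proposals, joiner_identity):
    """Illustrative check of the inline-proposal constraints for an
    external Commit discussed above (not normative text)."""
    adds = [p for p in proposals if p.type == "add"]
    ext_inits = [p for p in proposals if p.type == "external_init"]
    updates = [p for p in proposals if p.type == "update"]
    removes = [p for p in proposals if p.type == "remove"]

    # There MUST be an Add for the signer and an ExternalInit proposal,
    # and there MUST NOT be any Update proposals.
    if len(adds) != 1 or len(ext_inits) != 1 or updates:
        return False
    if adds[0].identity != joiner_identity:
        return False

    # If a Remove is present, the removed identity MUST match the
    # identity in the Add KeyPackage (the accompanying PSK proving
    # prior membership is omitted from this sketch).
    return all(r.removed_identity == joiner_identity for r in removes)
```

A validator like this would run on the receiver side as well, since every member processing an external Commit needs to reject joiner-initiated proposals that fall outside these constraints.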
As far as , I think it's probably still worth adding the precision, but I would change the PSK part to MAY if not remove it entirely.\nThere is also a practical issue with using PSKs to prove past membership, in that if there have been joins since Alice lost state, the new joiners won't have the PSK. This is of course a general issue with the proof-of-past-membership PSK, but it is particularly salient here."} {"_id":"q-en-mls-protocol-ab3aafa1ca3797eb4a5a273f0f7d7a3ec42a2d499a3904baf1fdd133f739a856","text":"This PR adds a RequiredCapabilities extension that allows the members of a group to impose a requirement that new joiners support certain capabilities. The extension is carried in the GroupContext and verified at Add time (including on external commits). In the interim 2021-10-04, we briefly discussed whether this mechanism needed to be included in the core protocol. Because of its interactions with Add and GroupContextExtensions proposals, it seems worth having in the main protocol.\nKeyPackages currently have two usages in the spec:\n- Pre-published key packages that you can download and put in a leaf when adding a new member\n- Leaf node content that you update when doing a full\nCurrently, things have been designed to use only one structure for both uses, leading to some weirdness such as:\n- and are useful in usage 1, but are parameters of the group in usage 2.\n- is a good name in usage 1, but would better be named in usage 2.\n- Some data is important in usage 2 and has no meaning in usage 1, hence they were designed as \"mandatory-when-updating extensions\" (such as , and we were also discussing adding the in to help prevent cross-group attacks)\nHow about using different structures for these?
// Essentially identical to KeyPackage
struct {
    ProtocolVersion version;
    CipherSuite ciphersuite;
    HPKEPublicKey hpke_init_key;
    opaque endpoint_id;
    Credential credential;
    Extension extensions;
    opaque signature;
} PrePublishedKeyPackage;

struct {
    HPKEPublicKey hpke_key;
    opaque endpoint_id;
    Credential credential;
    opaque parent_hash;
    opaque group_id;
    Extension extensions;
    opaque signature;
} LeafNodeContent;

enum {
    reserved(0),
    pre_published_key_package(1),
    leaf_node_content(2),
    (255)
} LeafNodeType;

struct LeafNode {
    // This is not included in the signature, but with the
    // \"unambiguous signature\" PR that's not a problem
    LeafNodeType type;
    select (URL) {
        case pre_published_key_package: PrePublishedKeyPackage data;
        case leaf_node_content: LeafNodeContent data;
    }
}

What would you think about refactoring like this?\nIf we're doing this, I'd suggest including a dedicated public key to encrypt Welcome messages to for the . NAME suggested this at some point. It gives us better FS guarantees, because the receiver of the can delete the corresponding private key immediately after decryption, while they might have to keep the other key around for a bit until they can actually issue an update. This might also help clarify .\nTies into as well because is an example of an extension that is required and should be a field instead\nNAME one concern I have is how is that field in created?\nWhen sending a full , the participant includes a new in the (it replaces the field). It computes the signature with where: corresponds to all the fields of except (the usual thing) comes from\nWhat about the scenario where Alice pulls down Bob's key package to add him to the group? In that case Alice can't do the conversion between formats on Bob's behalf.\nAlice doesn't have to do the conversion; this is why either contains a (when adding someone to the group) or a (when updating your own key package).\nAh somehow I missed that part in your original post. Makes sense\nI like this general direction.
It solves a few things. Instead of having leaf nodes contain either a prepublished thing or a normal leaf node, how about having the prepublished thing wrap the leaf node thing? As a bonus, we get separation between the key pair used for Welcome encryption and the key pair that goes in the leaf for TreeKEM. Then when Alice adds Bob, she extracts his LeafNodeContent and sends that in the Add. That doesn't let the other members of the group verify that Alice did the right thing w.r.t. expiry, but I'm not heartbroken about that. We might also be able to remove the extensions field from the outer (KeyPackage) wrapping if we turn the extension into a KeyPackage field. I expect we probably need leaf extensions, but would be open to making into a field.\nWhy not keep sending the KeyPackage in the Add? That wouldn't cost much and would allow everyone to validate the lifetime. Note that the lifetime is crucial in achieving PCS for pre-published key material. If I compromise Bob and get hold of the private key material of a bunch of his KeyPackages, I can add \"Bob\" to any group I want as long as the Credential inside the LeafNodeContent remains valid and Bob doesn't revoke the LeafNodeContent somehow. Also, we'll have to do this if we want to enable any member to send the Welcome message as you proposed here: URL\nI agree that including inside is quite clean, but something is bothering me with this approach: I'd like that the is signed by the participant occupying that leaf, to protect against cross-group attacks. In the first message of this issue I did this by including in , but in fact we don't need to store it in the leaf, we just need to include it in the . In any case, if we want to include in the signature, we need a way to differentiate between leaves that triggered a commit and the ones that are \"fresh\".
And for that I think using a tagged-union struct is the cleanest way to do it.\nIt always feels a bit lazy to use an extension (and implementations have to be careful to check for their presence), but we could use a extension in the same way that we use a extension. You will find both of them only on LeafContents that were updated at least once. Another result of including GroupId in the KeyPackage is that the signature will explicitly sign the and I'm not sure if that doesn't completely destroy any deniability guarantees we can still get from MLS.\nI don't think there's really a good way to get the into the leaf. Because of prepublication, it would have to be optional, but in order to prevent attacks, you need rules to decide whether it's required in a given situation. It seems like the best you can do is basically TOFU:\n- When you first get the tree, you can't validate whether the group_id should be in a given leaf or not\n- Once you're a member, you require that group_id is sent in Update and UpdatePath.leaf_key_package\nI could live with a solution of that character, but given it's a bit complex, might prefer to leave it out unless we get a useful property from it.\nI tried to work through what would be necessary to implement this. It looks like it would require some fairly invasive changes to be made to rationalize the document's structure before making this change. But those changes should probably be made anyway.
Proposal:\nPreparatory PR: Rationalize layout\n- Move \"Cryptographic Objects\" to before \"Ratchet Trees\"\n- Update \"Ratchet Tree Nodes\" to define ParentNode and LeafNode structs, with LeafNode initially just equal to KeyPackage\n- Move subsections of \"Key Packages\" to under \"Ratchet Trees\":\n  - Parent Hash\n  - Tree Hashes\n  - Update Paths\n- Move Group State to its own top-level section\nMain PR:\n- Change LeafNode to have the required content:\n  - HPKE key for TreeKEM\n  - Credential\n  - Parent hash\n  - Capabilities\n  - Extensions\n  - Signature\n- Change KeyPackage to refer to LeafNode for internals, plus:\n  - HPKE key for Welcome\n  - Lifetime\n  - Extensions\n  - Signature\n- Add continues to send KeyPackage\n- Update and Commit send LeafNode instead of KeyPackage\n- Remove and Sender refer to LeafNodeRef instead of KeyPackageRef\nWe should probably do the preparatory PR regardless of this issue.\nDiscussion on working call:\n- General agreement on the approach\n- Should sign group_id when possible:\n  - Include in TBS if we at least have a flag for pre-published / in-group\n  - Or we can store it\n  - Or we could have independent structs for pre-published / in-group leaves\n- Do we need extensions at both levels? Capabilities / lifetime should probably be promoted to fields\nMany protocols which have an extensibility mechanism have a way to indicate that understanding an extension is mandatory for certain behaviors. For MLS, this would mean having a way to require a member to understand an MLS extension before it can join a group. The three registries currently defined are: MLS ciphersuites, MLS extension types, and MLS credential types. One logical way to add mandatory extensions would be to pick one of these types (the MLS extension types) and use the high-order bit or a specific range to indicate that a member must understand and implement the extension to join a group which uses that extension.
The IANA values could be changed as follows:\n- Values from 0x8000-0xffff indicate understanding the extension is required\n- 0x7f00 - 0x7fff Reserved for Private Use (not required)\n- 0xff00 - 0xffff Reserved for Private Use (required)\nIf a required ciphersuite or credential becomes necessary, a new extension type could be created to add an extension in the KeyPackage or GroupInfo which would be required and would indicate support for the mandatory ciphersuites or credentials. In order to prevent some other entity from adding an MLS client that does not support a mandatory extension, we already have the KeyPackage Client Capabilities, which lists which extensions are supported by the client.\nNote that though this might seem \"now or never\", I don't think it really is. Consider TLS by way of analogy -- it has no mechanism for specifying that extensions are required. But if a server gets a ClientHello that is missing some extension that it requires, it can just abort the handshake. Likewise in MLS, the member adding a new joiner to the group can examine that joiner's KeyPackage to see if they support the extensions needed for the group. So I think you could implement this as a non-mandatory GroupContext extension, which would state the extensions required by the group so that all members apply the same requirements for new joiners.\nWhile you can implement this \"out-of-band\", that makes it implicit rather than explicit behavior.\nDiscussed at 20211004 interim, liked this functionality. But, instead of the IANA registry, use a field (or extension) in the group context to indicate which extensions are required. NAME to generate PR.\nDiscussed with NAME and incorporated this functionality into PR"} {"_id":"q-en-mls-protocol-d06fd4713ee4fefe311ee787cf3dce73849a62f41ee123c746766768f1471517","text":"Right now, the vector is described in a couple of ways.
This PR standardizes on \"of length \"."} {"_id":"q-en-mls-protocol-062cad8d5874c8e86e2b0656898eb5eed2a65b5e6afc7d308905255b4cabe154a48d1e4b","text":"Any Update proposal for the sender is redundant if the Commit includes a .\nThis is true, but they're also harmless, since the information in the will overwrite all of their effects. Note that:\n- A requirement to this effect would only be binding on a committer. The receiver of a Commit MUST process all of the referenced proposals, as prescribed in the Commit.\n- A is required if a Commit covers Updates or Removes; if a Commit covers a self-Update, then it must have a .\nSo there's no reason to condition on \"if a is included\". This seems fine for some advisory text, but not an absolute prohibition.\nThe way I read Section 11.2 is that the receiver should verify that the committer has followed the rules, e.g. that the commit contains all valid proposals, etc., even if that is not explicit in the list of things the receiver of a commit should do. So any requirement on the committer would also affect the receiver. So if we mandate that a committer can't include self-updates if the path is populated, then it would be implicit that the receiver checks that the committer actually followed the rules when processing a commit. I'm not sure I understand your second point. My reasoning would be that if a committer wants to do an update, they would just include a path rather than an update proposal, which is redundant anyway.\nSorry NAME I missed that there was still conversation going on here before merging . On the latter point: All I'm saying is that there is no case where there is both (a) a self-update and (b) no . So you're always in the bad scenario :) In our Wire chat, you made a good point that repeated proposals for the same leaf are invalid, and self-Updates are effectively repeats of the .
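As a small illustration of the committer-side rule being discussed here (the (type, sender) tuple shape and function name are assumed for the sketch):

```python
def proposals_to_commit(queued, committer_index, path_included):
    # When the Commit carries an UpdatePath, the committer's own Update
    # proposals are redundant: the UpdatePath overwrites everything they
    # would have changed, so the committer can simply drop them.
    if not path_included:
        return list(queued)
    return [(ptype, sender) for (ptype, sender) in queued
            if not (ptype == "update" and sender == committer_index)]
```

Note that this filtering only affects what a committer chooses to reference; a receiver still processes whatever proposals the Commit actually lists.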
Reopening, will file a fresh PR to adjust the text."} {"_id":"q-en-mls-protocol-e9b46d68509c3cab0914ca3ee67971eb4213f54b2b24598263f25e47c916e017","text":"This PR reflects further discussion on and replaces . The high-level impacts are:\n- Add clearer, more prescriptive definitions of the re-init and branch operations\n- Use a single syntax for all resumption PSKs, but also have a parameter to distinguish cases\n- Allow an usage to cover any other cases defined by applications\nOnce lands, we should also add some text to the protocol overview to describe these operations. Pasting some text here for usage in that future PR...\nOne question regarding the diagram in the PR description. In the re-init case, shouldn't the \"B\" group start at epoch ? At least that's what the PR specifies in the spec.\nI think the re-init diagram is fine because the spec says: The in the Welcome message MUST be 1. However, in the branch case (and application case) diagram I don't think there should be a ReInit proposal? (At least it is not said in the spec). Also, it is not entirely clear how to construct a in the branch case (or application case): what values should have and ?\nAh, sorry I misread. You are right and the re-init diagram is fine the way it is.\nMinor typo in the text above: \"allow a entropy\" -> \"allow entropy\".\nFrom my reading, it looks like the creator of an epoch must send the exact same message to each new user. However, in some systems, it is more efficient to send a different message to each user, only containing the intended for them, rather than the for all new members. As far as I can tell, no changes are required on the recipient side. I'm running some performance testing, trying to add thousands of members at a time, and the messages are hundreds of kilobytes large. In a federated environment, this could be copied to multiple servers, with the vast majority of that being unnecessary data.\nGood point NAME I filed .
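For reference, the \"single syntax with a usage parameter\" for resumption PSKs described above might be written in the draft's presentation language roughly like this (field and enum names here are assumptions for illustration, not the draft's exact definitions):

```
enum {
    reserved(0),
    external(1),
    resumption(2),
    (255)
} PSKType;

enum {
    reserved(0),
    application(1),
    reinit(2),
    branch(3),
    (255)
} ResumptionPSKUsage;

struct {
    PSKType psktype;
    select (PreSharedKeyID.psktype) {
        case external:
            opaque psk_id<0..255>;
        case resumption:
            ResumptionPSKUsage usage;
            opaque psk_group_id<0..255>;
            uint64 psk_epoch;
    };
} PreSharedKeyID;
```

In a shape like this, a branch or re-init PSK would carry the prior group's ID and epoch, which answers the question above about which values identify the parent group in the branch and application cases.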
In the case, you would do something like send a Welcome per home server?\nThanks. With the way that Matrix currently works, it would probably be best to send a separate message per user, since the current endpoint for sending messages directly to devices requires specifying the message separately for each recipient, and implementations (AFAIK) currently don't do deduplication. In the future, it might be possible to do it differently.
// Essentially identical to KeyPackage
struct {
    ProtocolVersion version;
    CipherSuite ciphersuite;
    HPKEPublicKey hpkeinitkey;
    opaque endpointid;
    Credential credential;
    Extension extensions;
    opaque signature;
} PrePublishedKeyPackage;

struct {
    HPKEPublicKey hpkekey;
    opaque endpointid;
    Credential credential;
    opaque parenthash;
    opaque groupid;
    Extension extensions;
    opaque signature;
} LeafNodeContent;

enum {
    reserved(0),
    prepublishedkeypackage(1),
    leafnodecontent(2),
    (255)
} LeafNodeType;

struct LeafNode {
    // This is not included in the signature, but with the "unambiguous signature" PR that's not a problem
    LeafNodeType type;
    select(URL) {
        case prepublishedkeypackage:
            PrePublishedKeyPackage data;
        case leafnode_content:
            LeafNodeContent data;
    }
}

What would you think about refactoring like this?\nIf we're doing this, I'd suggest including a dedicated public key to encrypt Welcome messages to for the . NAME suggested this at some point. It gives us better FS guarantees, because the receiver of the can delete the corresponding private key immediately after decryption, while they might have to keep the other key around for a bit until they can actually issue an update. This might also help clarify .\nTies into as well because is an example of an extension that is required and should be a field instead\nNAME one concern I have is how is that field in created?\nWhen sending a full , the participant includes a new in the (it replaces the field). It computes the signature with where: corresponds to all the fields of except (the usual thing) comes from\nWhat about the scenario where Alice pulls down Bob's key package to add him to the group? In that case Alice can't do the conversion between formats on Bob's behalf.\nAlice doesn't have to do the conversion, this is why either contain a (when adding someone to the group) or a (when updating your own key package).\nAh somehow I missed that part in your original post. Makes sense\nI like this general direction.
It solves a few things. Instead of having leaf nodes contain either a prepublished thing or a normal leaf node, how about having the prepublished thing wrap the leaf node thing? As a bonus, we get separation between the key pair used for Welcome encryption and the key pair that goes in the leaf for TreeKEM. Then when Alice adds Bob, she extracts his LeafNodeContent and sends that in the Add. That doesn't let the other members of the group verify that Alice did the right thing w.r.t. expiry, but I'm not heartbroken about that. We might also be able to remove the extensions field from the outer (KeyPackage) wrapping if we turn the extension into KeyPackage field. I expect we probably need leaf extensions, but would be open to making into a field.\nWhy not keep sending the KeyPackage in the Add? That wouldn't cost much and would allow everyone to validate the lifetime. Note, that the lifetime is crucial in achieving PCS for pre-published key material. If I compromise Bob and get hold of the private key material of a bunch of his KeyPackages, I can add \"Bob\" to any group I want as long as the Credential inside the LeafNodeContent remains valid and Bob doesn't revoke the LeafNodeContent somehow. Also, we'll have to do this if we want to enable any member to send the Welcome message as you proposed here: URL\nI agree that including inside is quite clean, but something is bothering me with this approach: I'd like that the is signed by the participant occupying that leaf, to protect against cross-group attacks. In the first message of this issue I did this by including in , but in fact we don't need to store it in the leaf, we just need to include it in the . In any case, if we want to include in the signature, we need a way to differentiate between leaves that triggered a commit and the ones that are \"fresh\". 
In the first message of this issue I did this by including in , but in fact we don't need to store it in the leaf, we just need to include it in the . In any case, if we want to include in the signature, we need a way to differentiate between leaves that triggered a commit and the ones that are "fresh".
And for that I think using a tagged-union struct is the cleanest way to do it.\nIt always feels a bit lazy to use an extension (and implementations have to be careful to check for their presence), but we could use a extension in the same way that we use a extension. You will find both of them only on LeafContents that were updated at least once. Another result of including GroupId in the KeyPackage is that the signature will explicitly sign the and I'm not sure if that doesn't completely destroy any deniability guarantees we can still get from MLS.\nI don't think there's really a good way to get the into the leaf. Because of prepublication, it would have to be optional, but in order to prevent attacks, you need rules to decide whether it's required in a given situation. It seems like the best you can do is basically TOFU:
- When you first get the tree, you can't validate whether the groupid should be in a given leaf or not
- Once you're a member, you require that groupid is sent in Update and UpdatePath.leafkey_package
I could live with a solution of that character, but given it's a bit complex, might prefer to leave it out unless we get a useful property from it. NAME - Re: Sending KeyPackage in Add - Yep, that makes sense. So you would send KeyPackage in Add, and LeafNode in Update.\nI tried to work through what would be necessary to implement this. It looks like it would require some fairly invasive changes to be made to rationalize the document's structure before making this change. But those changes should probably be made anyway.
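As a toy illustration of the tagged-union leaf discussed above, a leading type octet can select between the two variants. The framing and constants below are assumptions for illustration, not the draft's encoding.

```python
# Toy illustration of a tagged-union leaf: a leading type octet selects
# between the pre-published and in-group variants. The framing and the
# constant values are assumptions, not the draft's wire format.
from typing import Tuple

PRE_PUBLISHED_KEY_PACKAGE = 1
LEAF_NODE_CONTENT = 2
KNOWN_TYPES = (PRE_PUBLISHED_KEY_PACKAGE, LEAF_NODE_CONTENT)

def encode_leaf_node(leaf_type: int, body: bytes) -> bytes:
    # Prepend the discriminator so a decoder knows which struct follows.
    if leaf_type not in KNOWN_TYPES:
        raise ValueError("unknown leaf node type")
    return bytes([leaf_type]) + body

def decode_leaf_node(encoded: bytes) -> Tuple[int, bytes]:
    # Read the discriminator back and reject unknown values.
    leaf_type, body = encoded[0], encoded[1:]
    if leaf_type not in KNOWN_TYPES:
        raise ValueError("unknown leaf node type")
    return leaf_type, body
```

The discriminator is what would let the signature computation distinguish fresh (pre-published) leaves from ones that have been updated in the group.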
Proposal:

Preparatory PR: Rationalize layout
- Move "Cryptographic Objects" to before "Ratchet Trees"
- Update "Ratchet Tree Nodes" to define ParentNode and LeafNode structs, with LeafNode initially just equal to KeyPackage
- Move subsections of "Key Packages" to under "Ratchet Trees": Parent Hash, Tree Hashes, Update Paths
- Move Group State to its own top-level section

Main PR:
- Change LeafNode to have the required content: HPKE key for TreeKEM, Credential, Parent hash, Capabilities, Extensions, Signature
- Change KeyPackage to refer to LeafNode for internals, plus: HPKE key for Welcome, Lifetime, Extensions, Signature
- Add continues to send KeyPackage
- Update and Commit send LeafNode instead of KeyPackage
- Remove and Sender refer to LeafNodeRef instead of KeyPackageRef

We should probably do the preparatory PR regardless of this issue.\nDiscussion on working call:
- General agreement on the approach
- Should sign group_id when possible: include in TBS if we at least have a flag for pre-published / in-group; or we can store it; or we could have independent structs for pre-published / in-group leaves
- Do we need extensions at both levels? Capabilities / lifetime should probably be promoted to fields
\nRight now, we treat the re-init and branch PSK cases as distinct cases, as opposed to just having a general "export PSK from one group+epoch, insert it into another" function. What I would call a "resumption PSK", following TLS. Even though we have the distinction, the content of the PSK itself is the same: (group ID, epoch, resumption secret). So the only difference is that the PreSharedKeyID has a bit that signals the intent of injecting this resumption PSK. Do we really need that bit? If there's not a compelling need, it would streamline the API and allow for more flexibility if we just had the notion of a resumption PSK.
included in the derivation to avoid potential collision across protocol functionalities/domains. I'm not sure I understand how that makes the API more complex. If it's really bad, we should consider the trade-off, but if it's not too much trouble, I'd prefer we keep the key separation. Can you elaborate a bit on the API complexity?\nI'm mostly concerned about being too specific in the \"purpose\" definitions, and thus making applications make a choice that doesn't really make sense. For example, take the obvious case of TLS-like resumption, where I take a key from the last epoch of a prior instance of the group and use it as input to a new appearance of the group. Is that a re-init? There wasn't necessarily a ReInit proposal. Is it a branch? \"Branch\" isn't even defined in the spec. Better to just talk about \"resumption\" as the general act of exporting a secret from one group and inserting it into another.\nThe scenarios you describe seem to indicate problems in the spec that don't have anything to do with the derivation of the resumption secret. If you want to enable re-init secrets without a proposal, that's fine and we can define in the spec which key to use for that purpose (probably the re-init key). Similarly, if branch isn't well defined in the spec, we should probably define it better. In the end, for everything else that you want to do that's not (properly) defined in the spec, you can just use an exporter secret. I don't see that weakening the domain separation of the re-init and branch key derivation is the right way to solve these problems. Is the code complexity that bad?\nWe could go off and try to define all the possible uses of a resumption PSK, but (a) that's a lot more work, (b) it seems unlikely to succeed, (c) even if you do succeed, you'll end up with a more complicated API for applications (bc N names for the same technical act) and (d) it's not clear to me what the benefit is. 
When you talk about domain separation here, what is the failure case that this prevents?\nWhat we're trying to prevent is that participants end up with the same keys after executing a different part/functionality of the protocol. If party A thinks, they are doing a branch, while party B thinks they're doing a Re-Init and they end up with the same key, then that's bad. If an implementer wants to do proprietary things, i.e. things that are not described in the spec, they have to use an exporter key and make sure their keys are properly separated.\nIt depends on what you mean by a \"different functionality of the protocol\". The only difference between the branch and reinit cases is that in the reinit case, you quit using the old group. You could also imagine injecting a resumption PSK into an independent group with its own history, whose membership overlaps with the group from which the PSK was drawn. Is the functionality \"linking one group/epoch to a prior group/epoch\" or is it \"reinit / branch / ...\"? The former seems more at the right level of granularity to me, and leads to the approach here and in . If you're really hard over here, how about a compromise: Unify the syntax, but add a distinguisher alongside the group ID and epoch, which has specified values for the branch and reinit cases, and which the application MAY set for other cases.\nI see where you're coming from and if you're convinced it is the more useful approach, we can make the trade-off and go with (although we might as well rename the key from \"resumption\" to something like \"group-link\" or something). However, I do want to note that we have a trade-off here. Previously, designating a PSK \"branch\" or \"re-init\" signaled intent on the side of the sender and arriving at the same PSK meant agreement on said intent. For example, using a re-init key would imply that the new group is meant to replace the old group. 
As far as I understand that is no longer the case with .\nI understand the concern from a cryptographic standpoint, it's nice to have a type that indicates that the group should be shut down. From an engineering perspective, this all feels pretty vague. It's not clear at all how exactly a group will be shut down and a new one will be created, the protocol doesn't give any specific guidance. I don't think this is a huge concern, as long as we don't paint ourselves in a corner when we want to upgrade groups to newer MLS versions in the future.\nEven if we want to have everyone agree that the group should be shut down, the ReInit proposal provides that. Committing the ReInit puts it in the transcript, which means everyone agrees on it. So everyone who exports a PSK from that epoch should know it's a "dead epoch". In fact, the vs. distinction seems duplicative here, since you shouldn't be able to get a PSK from a "live" epoch or a PSK from a dead one. iff live; iff dead. Now, there may be other sub-cases within live or dead; as noted, not all resumption PSK cases map clearly to or . But it seems premature to put in stub functionality without some taxonomy and requirements. I would propose we do for now, and if there turns out to be a need for PSK use cases that really need segregating, it should be a straightforward extension to add another PSKType.
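The compromise discussed above — one resumption PSK syntax plus a usage distinguisher with defined values for re-init and branch and an application-set escape hatch — might look something like the following sketch. The type names (ResumptionPSKUsage, ResumptionPSKID) and values are hypothetical.

```python
# Sketch of a unified resumption PSK identifier with a usage distinguisher.
# Names and numeric values are hypothetical, chosen to mirror the reinit and
# branch cases plus an application escape hatch from the discussion above.
from dataclasses import dataclass
from enum import Enum

class ResumptionPSKUsage(Enum):
    APPLICATION = 1  # the application MAY set this for other cases
    REINIT = 2       # old group is dead; the new group replaces it
    BRANCH = 3       # a subgroup splits off; the old group stays live

@dataclass(frozen=True)
class ResumptionPSKID:
    usage: ResumptionPSKUsage
    group_id: bytes  # group the PSK was exported from
    epoch: int       # epoch the PSK was exported from
```

Because the usage is part of the identifier, two parties that disagree about whether they are branching or re-initializing would derive different PSK identities.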
Regarding the last two arguments:
- the "for now" solution is a bit moot, since with the WGLC at hand, it's almost certainly not going to change until then
- the extension argument goes the other way as well, as you could just as easily have an extension that implements and gets rid of the need for the distinction between resumption PSKs.
If anything, we should opt to be conservative in the spec and allow users to loosen the key separation via an extension at their own risk.\nI understand that it's unclear how the linking of groups is going to take place in a practical setting, but I don't see that that's an argument to weaken the cryptographic basis of the mechanism. If we don't figure out how to do it in practice, then that's fine, as we can just ignore that part of the protocol. If we do, however, we will want to get proper security guarantees whenever we link two groups in this way. Again, I think is ok. But it's because we probably get good enough guarantees and not because the whole mechanism is a secondary concern due to there being unsolved higher level engineering challenges.\nI have recently implemented some of this to get a better feel for it. While we already knew the "reinit" case was quite underspecified, I now also realized that there might be another issue with it: A member can shut down a group before a new one is created and there is no way to force the creation of the new group within the protocol. I think it would be safer to first create a new group (with whatever new parameters) and to tear down the old one in that process. Since this is too complex for the protocol it should be solved at the application layer. In other words, I think is indeed fine.\nCommenting on here, to keep discussion from splitting to the PR. There are a few issues here that could use clarification, and some confusion in the interpretation of the proposals. First - as NAME pointed out, if anything is unclear/underspecified, then that should be clarified and cleaned up vs.
removal as the default. From the confusion above, it is clear that some pieces are not sufficiently specified. Second - There is a core distinction between how PSKs are used in MLS vs. TLS that we need to be careful of. In TLS, these are used strictly for the same two members communicating again after a finished session. It is not used again within the same session nor outside of the protocol (exporter keys are for that). Namely, the uses are:
1. Group-internal use (resumption by an identical member set of the two parties)
In MLS, these assumptions change by the very fact that the group may have more than 2 members. For the protocol-external case we have exporter keys as well. For PSK use, we have the following:
1. Group-internal use (resumption by an identical member set)
2. Subgroup-internal use (resumption by a proper subset of members)
3. Session-internal use (since continuous KE implies that keying material may be injected)
While the session-internal use is not the focus of this discussion, it has been previously (i.e. proving knowledge of prior group state). This contrasts against TLS where the handshake takes place only once. Point 2 is notable in that it is impossible for a proper subset of TLS session members to use session information in a separate session (i.e. that would imply only one communication party with no partner). In MLS, this is possible, and we need to be careful about mis-aligning TLS for precedent. To avoid potential forks, etc., and clarity of key source, any group-subset PSK use really ought to be denoted (currently a 'branch'). NAME is correct that there could be subtle attacks on this if key use is not properly separated. In addition to the above there seems to be confusion on what resumption is linked to (this is consequently something that needs more clarity in the spec.). To be precise in alignment of terms, using a PSK in TLS for communication in a new session is aligned to using a PSK in MLS for a new group, i.e. it being a continuous session.
Thus we need to align discussion here between \"alive\" and \"dead\" sessions in TLS to \"alive\" and \"dead\" groups in MLS - not epochs. We can get a reinit PSK from a live group, i.e. if a member wants to force a restart for any reason and instantiates a new group with the reinit secret. This signals to all members joining that group that the prior group should be terminated and not considered secure/alive after the point of reinit use. It is an edge case, but possible. Similarly and more commonly, we could get a branch PSK from a dead group, i.e. if a new (sub-)group wants members to prove knowledge from some prior connection. Thus it follows that we can branch from both \"alive\" and \"dead\" groups. This may be helpful in terms of authentication. I am favorable to NAME 's PSKUsage suggestion, i.e. that if there are other desired uses apart from the above cases the application can provide for those. The \"bleeding\" of key separation across different uses is something that is concerning, and should involve closer scrutiny before inclusion in the specification. If anything, more separation is safer at this stage, vs consolidation.\nNAME Good point about the differences from the two-party case. I think your taxonomy is not quite complete, though. You could also imagine cases where the epoch into which the resumption PSK is injected has members that were not in the group in the epoch in which it was extracted (and these members were provided the PSK some other way). For instance, in the session-internal case where there have been some Adds in the meantime. 
To try to factor this a little differently, it seems like you have at least three independent columns in the truth table here comparing the extraction epoch to the injection epoch:
1. Same group or different group ("group" is the term that matches TLS "session"; linear sequence of epochs)
2. Some members have been removed
3. Some members have been added
So you have at least 8 cases to cover to start with assuming you care about both of those axes (same/different group and membership). It's not clear to me why those are the two axes that one cares about. On the one hand, why is it not important to distinguish ciphersuite changes as well, or changes in extensions? On the other hand, why do we need to agree on membership changes here, given that the key schedule already confirms that we agree on the membership? Basically, I don't understand (a) what cases need to be distinguished, (b) why do these cases need to be distinguished, and (c) why the required separation properties aren't provided by some other mechanism already. I would also note that you don't get the benefits of key separation unless the recipients know how to verify that the usage is right for the injection context. (Otherwise, a malicious PSK sender can send the wrong value.) For the same/different group, that's easy enough to do based on key schedule continuity. But membership is problematic. For example, if you're changing the ciphersuite between extraction and injection, then all of the KeyPackages and Credentials will be different -- what do you compare to verify equality? I can update to add PSKUsage, but the cases we put there need to at least be enforceable, and ideally fit into some principled taxonomy (even if all the cases in the taxonomy aren't enforceable).\nFor the record, distinguishing on ciphersuite changes was brought up earlier in the WG - I am in support if bringing it back on the discussion table is an option...
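The "at least 8 cases" above follow directly from taking the product of the three binary axes; a quick enumeration (axis wording paraphrased from the list in the comment):

```python
# Enumerating the "at least 8 cases": the product of three binary axes
# comparing the extraction epoch to the injection epoch.
from itertools import product

axes = [
    ("same group", "different group"),
    ("no members removed", "members removed"),
    ("no members added", "members added"),
]
cases = list(product(*axes))
```

Adding further axes (ciphersuite changes, extension changes) would double the case count each time, which is part of the argument against a full taxonomy.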
Separately though, for most other changes during the lifespan of a group, proposals basically perform a similar action for distinguishing between keys and separating changes from epoch to epoch. The difference with the PSK is there is no "one way" to interpret a PSK's use (as compared to e.g. a ciphersuite change). This also means that it can be more challenging to provably separate out misuse. [BTW: I am assuming in this discussion you mean ciphersuite change in the context of the listed components, since reinit is actually used for ciphersuite key changes per line 1598.] You are quite correct that there is a group-external use, i.e. the "different group" case. However, anything outside of the group/session is an external/exporter key use. Namely, the MLS protocol spec is about a single session and anything that is used in other groups/sessions is an external key (similar to TLS export from session). The distinction between PSK usage and external/exporter keys (whether the latter is used for another application or injected into a different group entirely) is that PSK use is within the analysis bounds of the given session. To this point, there is a gray area on item 2 (subgroup-internal use / resumption by a proper subset of members). This is probably the only one I could see an argument for also pushing as an exporter key use. The issue of some members added between PSK usage is really reducible to two cases: a) a member was already in the set and the PSK use appears as item 1 or 3 (group-internal use or session-internal use) or b) a member was added and the PSK offers no security beyond that of an exporter key inject. The potential value proposition is for existing members, there is no loss for a new member, and it is better to offer the value where possible than to deny it based on the weakest link. The issue of some members being removed between PSK usage is similarly reducible.
If the ejected member does not have the current group state then knowing a PSK should not provide value. It is similar to knowing an exporter key but having no place to export it into. The goal of PSK usage is to strengthen an existent system vs. being the sole point of security for that. If our KDF is done correctly, then combining current group state and the PSK will not be problematic (former unknown but latter known to the ejected member).\nSo NAME and I had some offline discussion about this, and it led me to the conclusion that what I'm really grumpy about is a need for more clarity in the branch and reinit definitions. We don't need a full taxonomy of uses if (a) we can say "use this PSK usage for this specific protocol feature", and (b) we have some escape hatch that allows for usages of resumption PSKs outside those specific protocol features. I have filed reflecting this theory. Hopefully folks find it more agreeable.\nThat seems like a well-reasoned solution.\nFixed with and\nLooks good! Although I'm not sure in the branching keys we want to have the exact same key packages as in the main group. I think this looks fine generally (just one comment)"} {"_id":"q-en-mls-protocol-43b9eebe04393b51e60d44b3e5f3f931b60a7c07d1f62494f9aee9edd1393917","text":"Signatures as specified in the MLS protocol document are ambiguous, in the sense that signatures produced for one purpose in the protocol can be (maliciously) used in a different place of the protocol. Indeed, we found that for example, a can have the same serialization as , which proves that one signature could be accepted for different structures. The collisions we have found so far are quite artificial, so it is not super clear how to exploit this, but at least it makes it impossible to have provable security. The solution is pretty simple, we simply prefix each signed value with a label, in a fashion similar to .
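The label-prefix fix can be sketched as follows. HMAC stands in for a real signature scheme (e.g. Ed25519) in this sketch, and the exact label framing is an assumption, not the draft's encoding; the point is only that the label domain-separates the signatures.

```python
# Sketch of label-prefixed signing: a fixed context string plus a per-struct
# label is prepended to the content before signing, so a signature over one
# struct type can never verify as another, even if two structs happen to
# serialize to identical bytes. HMAC stands in for a real signature scheme,
# and the "MLS 1.0" framing is an assumption for illustration.
import hashlib
import hmac

def sign_with_label(key: bytes, label: str, content: bytes) -> bytes:
    to_be_signed = b"MLS 1.0 " + label.encode("utf-8") + b"\x00" + content
    return hmac.new(key, to_be_signed, hashlib.sha256).digest()
```

Verification recomputes the same labeled input before checking the signature, so a value signed as one struct type is rejected everywhere else.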
That way, when calling you can give the structure name in the argument.\nSomething that is still bothering me is the structure:

struct {
    select (URL) {
        case member:
            GroupContext context;
        case preconfigured:
        case newmember:
            struct{};
    }
    WireFormat wireformat;
    opaque groupid;
    uint64 epoch;
    Sender sender;
    opaque authenticateddata;
    ContentType contenttype;
    select (MLSPlaintextTBS.contenttype) {
        case application:
            opaque applicationdata;
        case proposal:
            Proposal proposal;
        case commit:
            Commit commit;
    }
} MLSPlaintextTBS;

Since the first select is done on a field defined after it, this structure is not parsable. We could say that's not a problem since we only need to serialize it, but for security purposes I would recommend to put it at the end, like this:

struct {
    WireFormat wireformat;
    opaque groupid;
    uint64 epoch;
    Sender sender;
    opaque authenticateddata;
    ContentType contenttype;
    select (MLSPlaintextTBS.contenttype) {
        case application:
            opaque applicationdata;
        case proposal:
            Proposal proposal;
        case commit:
            Commit commit;
    }
    select (URL) {
        case member:
            GroupContext context;
        case preconfigured:
        case newmember:
            struct{};
    }
} MLSPlaintextTBS;

What do you think about it?\nNice one! Crafting collisions is probably not straightforward, but eliminating the risk altogether is the way to go.\nIt is actually quite easy to craft collisions, I made a script to find them: URL (I wanted to check whether there was accidentally a way to disambiguate the structures)\nIntroduced in .\nFrom what I can see, there is no reason not to sign the context in the case of an external init. I'm not sure why it got taken out, but while implementing external inits, verification went through even when including the context.\nAs I commented in , it's not even clear that we need this context for members any more, given for MLSPlaintext and the AEAD of MLSCiphertext. So I would propose we just remove the context added to MLSPlaintextTBS altogether. NAME IIRC you were the one that proposed adding the context. WDYT?\nThis was fixed in\nHere's the Weekly Digest for : - - Last week 9 issues were created. Of these, 6 issues have been closed and 3 issues are still open.
Last week, 13 pull requests were created, updated or merged: 3 pull requests were updated and 10 pull requests were merged. Last week there were 38 commits, 2 contributors, and 1 stargazer. There were no releases. That's all for last week, please watch and star the repository to receive next weekly updates."} {"_id":"q-en-mls-protocol-7b637b54646b463a263b8d61e348f8b0c50d9c62abf83ca42f506406920fc28c","text":"Adds the overview text proposed in .\nThanks NAME !\nThanks for putting together the overview.
The figures are great :-) Looks good to me with one proposition."} {"_id":"q-en-mls-protocol-462ede26f597860eaaddfbba565b537f21017bb4ec96013f937c79fc288f68d4","text":"Make the definition of descendant match the common-sense definition (you are not your own descendant), adjust the text which uses the definition. Note: this allows for a much clearer definition of the tree invariant as well."} {"_id":"q-en-mls-protocol-b716a06b2df9a908a6ed27172a9f16a0067e2e3be37a400adacde4a51914427d","text":"cc NAME\nLooks good to me, thanks!\nComment by NAME Maybe add a sentence saying this means there is no repudiation of messages?\nI would prefer not to say anything like that here, because the degree to which signature implies non-repudiation depends on a bunch of context for how the signature keys are managed. Clearly if you have long term or certified keys, then yes, there's a pretty high degree of non-repudiation. But people are also interested in configuring MLS to be deniable, e.g., by exchanging signature keys over a deniable channel. So I would propose closing this with no action.\nAh, that is a very good point. In my head I was thinking signature means proof of key and key is proven to person. But that might certainly not be the case here.\nThe description of is clear that proposals defined in the RFC are not to be included because they are considered standard, however there is no such clarification around extensions. In general, extension requirements are not all that clear because in several places (such as Lifetime) it is mentioned that an extension must be included, however they are all listed as recommended rather than required in the table. It also raises a question in that if an extension is required to be implemented as part of the specification, and required in every KeyPackage, should it just be a field rather than an extension?\nEven that is not quite clear to me. At least to me it doesn't follow from Section 10.1. 
The last one at least, I can shed some light on, as we had this discussion before and I was arguing the same thing. From what I remember, the argument was that keeping it an extension rather than a field would force implementers to implement the Extension mechanism. I don't have a super strong opinion here, but I seem to remember that that's what it boiled down to. I generally agree, though, that some things around RequiredCapabilities (RC) and Capabilities in general are unclear. I discussed this with NAME and here are some examples/questions that came up: If a group doesn't have an RC extension, can we assume that all extensions/proposals in the spec are supported by everyone, or do we have to check all KeyPackages? Do we want all extensions/proposals in the spec to be MTI and thus implicitly supported by everyone? It might be useful to distinguish between basic (MTI) ones like Add, Remove, etc. and consider some optional ones such as ExternalInit or AppAck. Relatedly, should the extension in the KeyPackage list all the Spec proposals/extensions, too? Are they invalid if they do or if they don't? We might want to discuss the difference between RC and a potential group policy. The way I understand it, RC is not meant to be an allow list or a block list, but instead should specify a minimum requirement for all clients in the group. A question here would be: If it's not in the RC, but everyone's capabilities support it, can I use/send it?\nReacting to Konrad's points here: It seems like, to be sure, you always need to check. RC just helps ensure that the check will produce the right answer. I would propose the answer to this be \"Yes, the proposals/extensions in the spec are MTI\" If the answer to the prior question is as I propose, then you don't need to list spec proposals/extensions in Capabilities The difference between RC and checking everyone supports it is whether we expect it to be supported in the future.
For Proposals, that might not matter; for GroupContext extensions, it matters more. It also seems like there's a fair argument that RequiredCapabilities should just be removed from the spec and replaced by some application-layer coordinated policy. I would be OK with that. To NAME point about mandatory extensions -- I would probably be OK with converting mandatory extensions to fields, especially with the better division of labor proposed in .\nI kind of like the idea of RequiredCapabilities because it enforces the proper checks lower down. Application layer would need to proxy packets via a middleware type of layer which could potentially be messy.\nDiscussion on working call: Agreement to not list in-spec capabilities in RC/Capabilities, so all those things are MTI"} {"_id":"q-en-mls-protocol-5763198fb352c67e7513d4ca8ed46f1edac0dcf31aafbf40106ba87e638f5b63","text":"This is a minimal section for MIME type registration of message/mls. It assumes that PR include wire_format in all MLS messages (making a format parameter unnecessary), and that all messages have the MLS version number encoded (making a version parameter unnecessary). These parameters can be added if needed.\nWhen MLS messages are transmitted over a protocol which supports Content-Type / MIME Type negotiation, signal that the message is an MLS message. Because MLS messages are really a bag of formats today we should have a MIME type parameter which specifies the specific format included. message/mls;format={applicationgroupcontext}\nThis seems like a fine idea to me. Could go as a section in the protocol spec, or as a separate (small) spec.\nDiscussion on virtual interim: Section or separate doc? How do we handle versioning? Internal to the protocol (field in MLSMessage?) Field in the MIME type Different MIME type per version\nI would prefer a short section in the protocol document. I think Internal to the protocol is cleanest. 
Likewise the format parameter becomes unnecessary if we add Welcome, PublicGroupState, and KeyPackage to wire_format (see issue )\nNAME send a PR?\nAsking for a friend: would we want to define (or use) a structured syntax suffix ala ?\nNAME If you mean for multiple formats, no, we have only one encoding. If someone wants to re-encode the protocol as XML or whatever, a MIME type will be the least of their troubles.\nFixed in"} {"_id":"q-en-mls-protocol-c98d48e928f782b4bea10c845fd2fe7935c7d9676a8c7b0f98737b28b3329f5b","text":"There are two changes in this PR: Pull the extension to the TLS syntax up into the Terminology section Add an extension that lets vectors have variable-length headers, allowing them to be efficient at small sizes while also accommodating large sizes All vectors are converted to the variable-length header representation. It seems like this should simplify implementation, since the encoder will no longer have to be told how many bytes of header to use for a given vector. This has been the source of several annoying interop bugs.\nLooks great! I only have a small objection: there are now several valid ways to serialize a length, as the example shows, both and are valid ways to encode 37. This means that if you parse a structure, and later want to re-serialize it to check a signature for example, you need to remember how many bits were used to store the length. In short, we lose the property that . The fix is easy: can we add that \"the length MUST be stored using the smallest amount of bits possible\"?\nGood point, updated!\nI wasn't clear under which circumstances you would use a longer length with a number that would fit in a smaller range.
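The minimal-encoding rule agreed above (37 must be one byte, never two) can be sketched with a QUIC-style variable-length integer. The exact 1/2/4-byte prefix scheme below is an assumption based on this thread, not a quote from the PR:

```python
def encode_length(n: int) -> bytes:
    """Encode a vector length using the smallest possible header (sketch)."""
    if n < 0x40:                        # 1 byte, 2-bit prefix 00
        return n.to_bytes(1, "big")
    if n < 0x4000:                      # 2 bytes, prefix 01
        return (0x4000 | n).to_bytes(2, "big")
    if n < 0x40000000:                  # 4 bytes, prefix 10
        return (0x80000000 | n).to_bytes(4, "big")
    raise ValueError("length too large")

def decode_length(data: bytes) -> int:
    size = 1 << (data[0] >> 6)          # header size: 1, 2, or 4 bytes
    if size == 8:
        raise ValueError("invalid prefix")
    n = int.from_bytes(data[:size], "big") & ((1 << (8 * size - 2)) - 1)
    # Reject non-minimal encodings so every length has a unique serialization
    # and parse-then-reserialize round-trips byte-for-byte.
    if (size == 2 and n < 0x40) or (size == 4 and n < 0x4000):
        raise ValueError("non-minimal length encoding")
    return n
```

Under this rule `decode_length(b"\x25")` accepts the one-byte encoding of 37, while the two-byte encoding `0x4025` of the same value is rejected, restoring the "one value, one encoding" property the comment asks for.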
If never, then the table should be:\nDiscussion on virtual interim Some concern about being able to send gigantic stuff In adversarial cases, the 4-byte headers used for a lot of stuff might already be a problem Could also have length constraints decoupled from the syntax SIP experience shows that imposing arbitrary length limits isn't a good idea ... but applications should have a way to signal that things are too big TODO(NAME Advisory text \"Beware, things can get big; consider imposing limits and clear failure cases for this\" Consider streaming implementations, where you might not know that the whole message fits in memory Modulo the above TODO, clear to merge"} {"_id":"q-en-mls-protocol-5b1e46c511c7c6c0c175dc33c53c102422163008e9740c3b38f0f58785e61f97","text":"This change ensures that the \"init\" key pair associated with the key package is one-time-use, at least to the degree that KeyPackages are one-time-use. The init key pair is used to encrypt/decrypt the Welcome message, then never again. The main drawback is requiring the KeyPackage sender to generate an additional key pair, which requires more computation and more entropy."} {"_id":"q-en-mls-protocol-ae9b605f569d90a71bf652993a28b28017e86c3d29b73372e5741087d4d6a161","text":"Looks good! I would maybe add 1 sentence explaining the rule for what is called a secret and what is called a key. The rule says that a secret will go through some key derivation to obtain a key/keys, is that right?\nNAME Yep, that's exactly right. Added a sentence. Thanks for the feedback everyone!\nReopening due to incomplete discussion and inaccuracy. The claims presented in the response by NAME are incorrect, demonstrate a misunderstanding of the roles of these keys and their uses, and constitute a change to the fundamental uses thereof. The authenticationsecret is NOT a public value nor was it ever introduced as such.
If an application is using it as a public value, that onus is on the developers, and I advise against using Cisco's implementation as a standard for MLS. As shown in the name and all original discussion, the authenticationsecret is intended as a secret value (or else 'public' in the same way that the group secret is 'public'). It is used for further derivation to support authentication. This should be corrected, and such foundational changes without WG discussion should be avoided.\nClearly we have a disagreement here NAME but maybe let's hold off on attributing misunderstanding. It seems like you are claiming that in order for MLS to be secure, the needs to be protected to the same level as the . What is this based on? If that is true, then we should probably remove the altogether. The \"out-of-band\" protocol in which it would be confirmed would have to be part of the MLS security analysis, thus no longer out-of-band.\nNAME that is not the claim at all - different uses imply different compromise risks. This should not be a new concept to understand. As demonstrated in the terminology, the authenticationsecret is linked to authentication, not confidentiality. MLS, per specification, outsources authentication (the AS is not explicitly defined and all entity authentication within MLS relies on it). Even though it is definitionally out-of-scope and therefore out-of-band (certificates, authentication verification are entirely out-of-band of the protocol spec), we do not say that signature secret keys can be \"public\" - yet you are asserting that an authenticationsecret which can support epoch-level authentication would be public. The authenticationsecret is private in the same way that the exportervalue is: based on key schedule derivation, compromise of these keys should not undermine MLS, but secrecy of them can be leveraged. So, unless you are now proposing removing mention of certificates, etc.
from the core protocol, there really isn't a logical basis for removing 'out-of-band' authentication mechanisms.\nWhat I'm trying to get at here is: What is the negative consequence if an is known by someone other than the members of a group? For example, leaking a signature private key would enable the attacker to impersonate the corresponding member. Leaking the would allow the attacker to decrypt all of the group's messages. What is the bad consequence of an becoming public?\nNAME I am also assuming that this push to use epoch-level authentication values as public codes is based on Cisco's implementation choice. I would be curious to know why there is such insistence that all implementations also go about using authentication in the same way. If an implementation chooses to use it as a secret value to then derive an authentication code based on yet other information, why is that a problem?\nPublishing the authenticationsecret as a public value has a similar effect as publishing the exportervalue as public - it is not an issue for MLS per protocol spec, but it would definitely be an issue for the application if it is then used for something secret thereafter. One reason not to specify these as public values is that they are derived following the conditions placed on keys and therefore could be used as secret values if desired by an application. Forcing or annotating them as immediately public, or as final 'codes', when such a use as a secret key is an out-of-scope potential is artificial and unnecessary.\nI'm not pushing anything, I'm just trying to get the protocol to be clear. If this thing needs to be secret, then it needs to be secret, and we need to document that so that people keep it secret. Thanks for elaborating more on your concerns. I see what you're saying -- basically the same point that NAME and I discussed . I think the core of the disagreement here is what we mean when we say that a value is \"public\" or \"secret\". 
I mean something very specific: That the security guarantees of MLS will not be harmed if a public value is provided to the attacker, and they will be harmed if a secret value is provided to the attacker. Importantly, \"public\" does not mean that the value must be published, or that the security guarantees of an application using MLS will not be harmed if you publish the value. It seems to me that in that specific taxonomy, the is \"public\" and not \"secret\". Do you disagree?\nBTW, the notion of something being \"public\" at one layer but \"secret\" at another is not new. For example, values exported from TLS aren't secret in the sense that publishing them would compromise TLS. You could authenticate your TLS connection by exporting a value and shouting it through megaphones across a crowd. But the main way people use them in practice is as secrets in some other protocol. Every WebRTC call uses SRTP keys that are exported from TLS, and many people would be very sad if these \"public\" values were published.\nIf it would help, I would be happy to add some text to the Security Considerations clarifying this distinction. Basically, draw a clear boundary around which values have to be kept confidential to the client in order to realize the security properties of the protocol. But I still think is a clearer name. Doesn't mean you have to use it in any particular way in your application; doesn't prevent it being used as a shared secret; but also doesn't imply that the MLS library has to worry about its secrecy.\nWithin the bounds of that particular taxonomy there is not an issue. However, while you may be following a given taxonomy, that is not a general one, one that would be widely understood by implementors, nor is it specified in the protocol spec. It is not an unreasonable taxonomy, but it is certainly not a general one. If you are looking for the MLS usage spec to align with your taxonomy then I do think that there should be a clarification section added.
BTW, as a case in point on more standard terminology use, the TLS 1.3 spec uses the exact phrasing for exporters as \"exportermastersecret\", from which the actual exporter may be derived. This aligns very closely to the authenticationsecret, from which an authentication code later may be derived. TLS is generic in 'exporters' after an exportermastersecret is derived. Thus there is alignment of the term 'secret' to things that can be used for keys (have sufficient entropy), not as an alignment to public/secret at different layers. I am strongly opposed to the use of authenticationcode, and see no clear definition for that in your customized taxonomy. As such, it would follow the more standard interpretations, such as message authentication codes, etc. which are indeed explicitly public values. Codes are normally an end-state in themselves, not keys. Perhaps it is a clearer name for the Cisco implementation design, but it is not a clear name for something that can/should be used for further derivations as may be more generally applicable. Following the TLS norm for exporters, the name would be \"authenticationmastersecret\". The shortened \"authenticationsecret\" was a good name. If you are going to provide a write-up for the custom taxonomy, then an adjustment to \"authenticationexporter\" could perhaps be made.\nThe notion of having a boundary to the protocol and values that may or may not safely be exported across the boundary is not exotic or customized. You see it in FIPS 140-2, PKCS, and WebCrypto, and in tools like ProVerif and Tamarin. The only choice I'm making is in the specific words, in particular reserving \"secret\" for \"something that should not be exposed outside the protocol implementation because it would compromise the protocol\". I hope you agree that is different from say in this regard. It would be OK for an MLS implementation to have an API for an application to access the , but it would not be OK to have an API to access the .
I'm not wedded to , I just want something other than . If you want to align with the exported values (to which I agree it is similar), we could call it an .\nThe distinction between those documents and tools, and the MLS spec is that those tools clearly define the boundaries and terminology, vs. just anyone making terminology assumptions without them being specified in the document. Not to be pedantic, but if you want to assume specific meanings for terms relative to a threat model for MLS only, versus a general usage (\"exporter secrets\" being secrets used in other things), then you really do need to specify those assumptions within the specification itself and not various side discussions. Such specification is part of what a standard is meant to do. Within other documents this is clearly defined - i.e., what is known as a threat model - such as one sees in ProVerif and Tamarin. Assuming that you are going to add such a taxonomy, and that it follows that described by you in , then the correct term would be \"authenticationkey\". To quote what you said there, which correctly aligns to a reasonable usage of the authenticationkey. pointed out that there is a lack of consistency in the usage of \"keys\" vs \"secrets\" throughout the document - something that still has not been addressed and should not be dropped from the issue for edits. Regardless of what taxonomy is being used or even assumed, there is no consistency in it in the spec. You seem to be commenting on adjustments only for the \"key\" and \"secret\" uses that appear as variables, but the commentary within the spec interchangeably uses these terms for the same thing. Whatever is used should be applied consistently. It is not reasonable to assume that developers will correctly guess reasons for terminology separations particularly when they are inconsistently enforced.\nOn your latter point about usage of \"keys\" vs \"secrets\" throughout the document — Sorry if I misinterpreted your intent here.
If you're seeing places where there are inconsistencies, it would be helpful if you could highlight those more specifically, or ideally send a PR to fix them.\nNAME had a call about this today and hashed out a couple of things: We should be clear in general about what we mean by secrets/keys, in that we generally use them interchangeably, but sometimes use one or the other to indicate how a specific value is used The need for secrecy of the value labeled depends on the context in which it is used. So we should provide some text to suggest that there are different ways to use it, and use a fairly neutral name. NAME suggested , which I like a lot. I have attempted to implement these in\nThe spec currently uses a mix of terminologies for keys/secrets. In the key schedule we have the following: senderdatasecret | \"sender data\" encryptionsecret | \"encryption\" exportersecret | \"exporter\" authenticationsecret | \"authentication\" externalsecret | \"external\" confirmationkey | \"confirm\" membershipkey | \"membership\" resumptionsecret | \"resumption\" It would be best to provide coherency in how we use these terms and check for consistency throughout, to avoid confusion for readers: A simple solution would be to use \"secret\" for all cases, i.e., changing membershipkey to membershipsecret. Another alternative, which may take more refining but could be useful to readers is to separate out terms based on items that are for internal protocol use versus external protocol use. For example, encryption, confirmation, membership, etc., would use \"key\" and exporter, authentication, etc., could use \"secret\", or vice versa. --Break-- Ref the mailing list on confirmation vs authentication secrets, perhaps the naming of these two is a little misleading - authentication normally being within bounds of a protocol while confirmation is external to it. This confusion probably sparked some of that discussion to begin with.
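The key-schedule label list above, and the "secrets feed HKDF, keys feed other algorithms" taxonomy under discussion, can be illustrated with a small HKDF-Expand sketch. The `"MLS 1.0 "` label prefix and 32-byte output are assumptions for illustration; the real key schedule uses a TLS-serialized label structure:

```python
import hashlib
import hmac

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    """RFC 5869 HKDF-Expand with SHA-256."""
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]

def derive_secret(secret: bytes, label: str) -> bytes:
    # Hypothetical framing of the labeled derivation: enough to show that
    # distinct labels yield independent values from one epoch secret.
    return hkdf_expand(secret, b"MLS 1.0 " + label.encode(), 32)

epoch_secret = bytes(32)  # stand-in for the current epoch's secret
labels = ["sender data", "encryption", "exporter", "authentication",
          "external", "confirm", "membership", "resumption"]
schedule = {label: derive_secret(epoch_secret, label) for label in labels}
```

Each label produces an independent value, so whether a given output is then treated as a "secret" (input to further derivation) or a "key" (input to HMAC, AEAD, etc.) is purely a matter of how it is used downstream.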
It could be worth considering switching those terms for clarity.\nI actually think we have a fairly coherent taxonomy here: \"secret\" is something that gets fed into HKDF to derive further things \"key\" is something that gets fed into another algorithm, e.g., HMAC Looking through the list, the ones that deviate from this pattern are: , which is exported to the application but expected to be kept private , which is exported to the application but not expected to be kept private Given that, I would propose we update the table in question to something like the following, using because that's how it's used, and to follow the \"security code\" terminology used by applications:\nThings are easier to understand like this! The \"Purpose\" column is quite valuable I think, I remember I was quite overwhelmed when I first read it and had no idea what all those secret / keys were used for. This helps to understand the big picture."} {"_id":"q-en-mls-protocol-9a7ecca5eafca9466aa04b1d10d46f9ff701c7cb0977cfb5105ab8abe403de00","text":"Also requires that padding be all-zero. Before, the sender was allowed to put arbitrary octets in there.\nLooks good to me. NAME I think this covers your issue on the mailing list\nDiscussed at 19 May interim. Decision to merge.\nAs discussed , we should refactor so that the padding is not length-delimited, something like: In the MLSCiphertextContent struct we have a field for padding represented using a variable-size vector which helps hide information about the length of content; notably application messages. But because of the way we encode variable-size vectors it turns out you can't actually pad the rest of the content with a certain number of bytes: namely 0, 65, 16386, 16387. (E.g. if you pad with a vector of 63 bytes the TLS encoding actually produces 64 bytes for padding: 63 bytes + 1 length byte. But if you pad with 64 bytes you end up with a 66 byte encoding as now you need 2 bytes for the length. Not being able to pad with 0 is OK.
But 65 is kind of annoying. E.g. if you want to pad to a target content length X bytes then if the rest of the content ends up being X-65 bytes long you are out of luck. (Assuming I'm right about how this all works) it seems a bit silly to me for a padding scheme to have somewhat arbitrary gaps like that; especially at a not unreasonable number of bytes to want to pad with. If I didn't miss something, and you all agree that it's better to avoid having gaps like that, then how about representing padding as, say, a fixed-length vector? Joël"} {"_id":"q-en-mls-protocol-d741e87e44eed64a1601ea9920c0a32e1af41e00d210085bd647c0f8355ddd98","text":"The way the spec is written now, clients need to implement all proposal types, and they must react appropriately to all valid proposals, including AppAck. AppAck in a large, busy group could result in a situation where clients with longer latency connections to the DS (perhaps in a different country, using satellite, or using some wireless networks) might be effectively prevented from sending any messages, or worse could be prevented from sending AppAcks because their proposals are always a few hundred milliseconds late. I think AppAcks still have a place in relatively small groups with low message volumes, but I have a few suggestions: 1) Describe this issue either in the AppAck proposal section or in the security considerations section 2) I think it is reasonable for a group to consider some proposal types invalid. If there is consensus with this position, I'd like to make this clear with an extra sentence or two. If that is not the consensus, then I will propose that only Add, Remove, and Update are assumed to be supported by each group, and other proposal types have to be explicitly added.\nMy team had similar comments.
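The gap described in the padding thread above can be checked directly: with a length-prefixed padding vector, the total bytes consumed are n plus the header size for n, and totals of 65, 16386, and 16387 bytes are unreachable. This sketch assumes the 1/2/4-byte header boundaries at 64 and 16384 discussed in the same thread:

```python
def header_size(n: int) -> int:
    # Variable-length header: 1 byte for lengths up to 63,
    # 2 bytes up to 16383, 4 bytes beyond that.
    if n < 64:
        return 1
    if n < 16384:
        return 2
    return 4

# Total on-the-wire size of a padding vector holding n padding bytes.
achievable = {n + header_size(n) for n in range(0, 20000)}

# The unreachable totals below 17000 are exactly the ones from the thread
# (0 is also unreachable, since even an empty vector costs 1 header byte).
gaps = sorted(t for t in range(1, 17000) if t not in achievable)
```

This is why a non-length-delimited padding field (just zero bytes filling the remainder of the ciphertext content) avoids the problem: every total from 0 upward becomes reachable.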
I agree that this shouldn't be a 100% necessary thing to support.\nI would differentiate between \"support\" in the sense of \"if you receive one, do something sensible with it\" vs. \"send this under some circumstances\". My understanding was that the requirement here was the former, and even that the definition of \"sensible\" was up to the implementation. The thinking being that if an application wanted to get some anti-suppression properties, it could send and verify these on some schedule, but when exactly they would be sent would be up to the application. Does that alleviate some of the concern here? If so, I would totally be open to clarifying that. To NAME point about latency, it seems like we could add an parameter to AppAck so that you could report on past epochs' messages in a future epoch. On considering proposal types invalid -- I think there has always been a notion that applications might have policies on which proposals are acceptable. Not just at the proposal type level, but at even finer granularities. Think about things like groups where only certain members are allowed to add/remove participants. I would also be fine laying this out more explicitly, though maybe it would be a more natural fit in the architecture document.\nSorry, meant to delete a comment, closed the issue instead.\nIn the past I've expressed reservations about AppAck being included in the spec. I don't think it's sufficiently general (it only handles one specific pattern of message loss and won't work for others, per NAME comment), and I don't like that we don't specify how to handle dropped messages in any way. The AppAck functionality is likely to just be re-invented at the application-level because of this.
For example, AppAck can't be used to implement read receipts. It doesn't help detect the suppression of handshake messages. It doesn't handle cases where some messages must be delivered and others may be dropped. So instead I would propose removing AppAck from the spec.\nI've opened a PR to that effect in .\nVirtual interim 2022-05-19 Agreement that the bar for support is quite low ... but you don't get a security property unless members actually do something more than the minimum That sort of implies that you want the group to opt in to this mechanism So we should have a new proposal type in a separate I-D\n(To be clear, AppAck should move from the main spec here to its own I-D.)\nOne comment that renumbering the groupcontextextensions proposal was probably a bad idea from a process perspective which we should avoid doing in the future. In this case it seems unlikely that many people have implemented it who could not make a breaking change."} {"_id":"q-en-mls-protocol-a105a2eb813678c8cc1f6eed9c7665a07e1aadadbac013766327c289bde75ce8","text":"This seems to be a layer of indirection that doesn't add any value to implementation.\nThe Credentials section claims there is an epoch id field in key packages. I don't think that actually exists. My apologies if this is a duplicate issue.\nAddressed in but may need to be pulled out depending on the fate of that PR\ncontains all of the Proposals that were previously sent within the same epoch. This seems unnecessary, since all members should have them anyway. The list of in the actual message should be enough for members to parse and validate a message. We should discuss if we can de-duplicate this, since the list of Proposals might be quite long.\nThe current definition of is as follows; Maybe this issue is obsolete?\nSeems it is now obsolete indeed.\nI don't have a strong opinion on this question, but this change seems sensible to me.
The use of LeafNodeRef requires either (a) a linear scan through the tree, which is time-expensive in large trees, or (b) keeping an index that maps LeafNodeRef values to indices, which is storage-intensive in large trees. IIRC, the change to LeafNodeRef was made in the context of NAME work to make the tree description less tied to the array representation. In general, that is the right thing to do, and it's good that we did it. But we still have leaf indices anyway, for . Since we still have the concept, we might as well use it where it makes sense. And it certainly seems to me like and make sense, since they are internal to the protocol. In case folks are worried about the application-facing API -- MLS stacks can certainly be more intelligent, e.g., exposing or or . But they can do that regardless of this PR."} {"_id":"q-en-mls-protocol-3280ecb8060e0eb2f7ebffbb78a94c6cae848dfe42efcbdcdaed74339b1aeaff","text":"Brendan to revise based on Rohan's suggestion.\nInterim 2022-05-26: Concerns around making the group_id totally random Malicious group creator can choose it maliciously Federated systems get worse randomness Using AEAD.Nk depends on ciphersuite, could be awkward NAME will revise to say: Here are the requirements that group ID must satisfy If you don’t have an alternative plan, do it randomly\nGroup ids are signed to bind certain information to specific groups. But I can imagine a case where a user mis-uses the group id to be something like a \"room title\" which lets an adversary control the group id and introduces potential problems. If we can prove that it's fine for a group id to be reused, then should we get rid of group ids? If not (as I expect), then group ids need to be guaranteed to be unique [globally per signing key] and we need to specify that.\nDiscussed at 19 May interim.
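The scan-vs-index tradeoff for LeafNodeRef lookup described at the top of this comment looks roughly like this; the helper names and dict-based leaf representation are illustrative, not the draft's API:

```python
# (a) Linear scan: no extra storage, O(n) time per LeafNodeRef lookup.
def index_by_scan(leaves, ref):
    for i, leaf in enumerate(leaves):
        if leaf is not None and leaf["ref"] == ref:
            return i
    return None

# (b) Precomputed map: O(1) lookup, but O(n) extra storage that must be
# kept in sync as leaves are added, updated, and removed.
def build_ref_index(leaves):
    return {leaf["ref"]: i for i, leaf in enumerate(leaves) if leaf is not None}

leaves = [{"ref": b"a"}, None, {"ref": b"b"}]  # blank leaves appear as None
index = build_ref_index(leaves)
```

Addressing senders and removed members by leaf index sidesteps both costs, since the index itself is the lookup key.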
NAME to generate PR documenting fresh random value approach.\nVirtual interim 2022-05-19:\nAgreement that some sort of uniqueness requirement is worthwhile\nMain question is what scope of uniqueness should be\nGlobally? Within scope of DS?\nFresh random value? Simple, but precludes application usage of the field\nAlso could help avoid metadata leakage\nClient / DS should probably still check for collisions\nNAME to write up a PR on the fresh random value idea\nGroup IDs must be generated randomly by creator\nClients and DS must check that group ids are unique as far as they know"} {"_id":"q-en-mls-protocol-87b452c307d38f47ec9b624e8bf742e8cdb5ca1f97459b28faa17ad6246d68b4","text":"Replaces This PR attempts to add precision to the credential validation requirements in the document, along the lines NAME suggests in , in the spirit of what NAME wrote up in and drawing inspiration from what RFC 6125 does in the context of TLS. It seems like this gives a clearer definition of (a) what the AS's job is and (b) when it needs to be done.\nWhile this is spread out throughout the document, it would be beneficial to spell out in which scenarios a Credential MUST be verified (with the AS):\nWhen fetching a KeyPackage to add a new member\nWhen discovering a KeyPackage in an AddProposal from another member\nWhen joining a new group (verify all other members' Credentials), both when receiving a Welcome message and when joining through an External Commit\nWhen a member updates its leaf\n... anywhere else?\nThanks for picking it up!"} {"_id":"q-en-mls-protocol-d64ef1dda64f2f350de84c45388177658a0edfd93659c4f75640e7923d582e91","text":"As NAME pointed out on the mailing list, this reference does not need to be normative."} {"_id":"q-en-mls-protocol-e35d005193d8fb3b1d8a423bc65c7379e379c017d47ba249349df8b67b37e24a","text":"NAME thanks, good catch. should be fixed now.\nComment by NAME Algorithm entries use two octets (65535) but only allow 255 Private Use entries. I think that might be a little low.
Maybe allow 0xf000 - 0xffff or something?\nI agree this should be changed. CCing NAME who I believe initially proposed this."} {"_id":"q-en-mls-protocol-cfdb0fc5a6861b8ea98b406a6c4e96fcba644db1f2e17094f052d6d10a0bf8c5","text":"As discussed in the virtual interim 2022-12-16, it is possible that extensions might be tempted to re-use HPKE public keys created for MLS. Even within MLS, we have two different public-key encryption cases, Welcome and Commit (though these currently use different keys). Currently, for Commit encryption, the HPKE parameter is set to the group context. The content of is actually undefined for Welcome encryption (!) This PR makes the following changes:\nDefine general and functions that populate the HPKE parameter with an encoded label and context\nApply these to Commit and Welcome encryption: Commit: label=\"UpdatePathNode\", context= Welcome: label=\"Welcome\", context=\nAdd an IANA registry of public-key encryption labels (as for signing)\nThere is also a bit of reorganization of the Ciphersuites section to split it into three subsections.\nNote that any previous analysis of MLS had to take this particular reuse into account. This can't be the case for any arbitrary extensions reusing the key. So if MLS uses an extension that reuses the key, none of the existing analysis applies, which is a much bigger issue. This said, this reuse did decrease security -- it weakened FS in that a joiner had to hold on to their HPKE leaf key, which also allowed decryption of the init secret from the welcome message, long after joining.\nI think this captures what we discussed during the interim. FWIW, I'm wondering if we didn't have a domain separation issue in the past when we only had one HPKE public key in KeyPackages and used it for both the Welcome and subsequently for UpdatePath as well.
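The label-and-context construction in this PR can be sketched as follows. This is a hypothetical illustration: the "MLS 1.0 " prefix and the one-byte length prefixes stand in for the real TLS-style serialization, and `encrypt_context` is not a function name taken from the spec.

```python
def encode_vec(data: bytes) -> bytes:
    # Toy length prefix; the real wire format uses a TLS-style
    # variable-length vector.  Assumes len(data) < 256 for illustration.
    assert len(data) < 256
    return bytes([len(data)]) + data

def encrypt_context(label: bytes, context: bytes) -> bytes:
    # Domain-separated value fed to HPKE as its context/info input, so a
    # ciphertext produced under label "Welcome" can never be confused
    # with one produced under "UpdatePathNode".
    return encode_vec(b"MLS 1.0 " + label) + encode_vec(context)
```

Feeding this value into HPKE is what buys the domain separation: the same public key can no longer silently serve two different encryption purposes.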
Either way, I think this is a good defense-in-depth mechanism."} {"_id":"q-en-mls-protocol-7555d85cf35b20dc253ccb4cd49a9319a2c7047ad2c27afae2c6923871e4d6f5","text":"This updates the language around how certificate chains are constructed: We allow certificates to be omitted if the client knows any recipients will already possess the omitted certificate. We allow certificate chains to be out-of-order and potentially contain unused certificates. This is nice because it means less data needs to be sent on the wire, and also provides a method for root certificates to be rotated gracefully (as clients can provide chains for both old and new roots).\nNAME it might be useful to also add this language from RFC 8446. It would make the process exactly the same as TLS which makes implementation easier. >The sender's certificate MUST come in the first >CertificateEntry in the list. Each following certificate SHOULD >directly certify the one immediately preceding it. Because >certificate validation requires that trust anchors be distributed >independently, a certificate that specifies a trust anchor MAY be >omitted from the chain, provided that supported peers are known to >possess any omitted certificates.\nI think that is fine and is consistent with RFC8446. Don't we already have a better mechanism for rotating certificates? As soon as a member has a new certificate chain, it can update its LeafNode in an UpdatePath. This semantics of X.509 certificate bundles are already a bit fuzzy. Such a change will make validation more complicated and more error prone, when this seems completely unnecessary.\nThe scenario that this would address is: say that a root CA is compromised and needs to be replaced. Distributing the new CA certificate likely requires a software release, which not everyone will install simultaneously. During the rollout, some users will know only the old compromised CA and some users will know only the new CA and not the compromised one. 
To preserve interop during the rollout, users need to include two intermediates in their certificate chain: an intermediate signed by the old root that signs their leaf, and an intermediate signed by the new root that signs their leaf. (In the exact scenario I'm thinking of, the intermediate private key is held by the user, so it would not be compromised with the root. Though there may be other arrangements where this flexibility helps.) In terms of implementation complexity: the X509 chain validation libraries I'm aware of already support path building because it's necessary for TLS, so I would expect those could be used.\nThis is actually the reason why I think it makes sense to adopt the same mechanism. For example, if you use OpenSSL to validate the chain, it does not take a chain as input. It takes a collection of certificates, a set of trusted root certs, and the leaf you want to validate. I believe not making this change results in higher complexity.\nI'm sorry I missed this previously, but this requirement was a source of trouble in TLS and was removed in RFC 8446, which allows extra certificates in the middle. Was there a conscious decision to be more strict here?\nI don't recall this ever being discussed. I think it was just added and nobody ever objected.\nNAME can you elaborate a little bit? The way I read it, the requirement is just that there is a valid signature chain, i.e. that one certificate signs the next and so on.\nNAME - I deliberately reverted to the original TLS level of strictness here, in hopes that in starting over with a new ecosystem we could reset things to a cleaner state. Do you think there are legitimate reasons for violating the strict rule, or is it just a common misconfiguration in TLS servers? I would not really be inclined to accommodate the latter. NAME - I believe what NAME is envisioning are cases where there are multiple valid chains, say via old and new versions of an intermediate certificate.
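The old-root/new-root rollout above is exactly what X.509 "path building" handles: from an unordered pool of certificates, find any chain from the leaf to a root the verifier trusts. A toy sketch — certificates are reduced to (subject, issuer) pairs, and real validation would of course also check signatures, lifetimes, and constraints:

```python
def build_path(leaf, pool, trusted_roots):
    """Depth-first search for a chain leaf -> ... -> trusted root.

    Certificates are toy (subject, issuer) tuples; trusted_roots is a
    set of subject names the verifier already trusts.
    """
    def search(cert, seen):
        subject, issuer = cert
        if issuer in trusted_roots:
            return [cert]
        for cand in pool:
            # Try every certificate whose subject matches our issuer;
            # the pool may be unordered and contain unused certificates.
            if cand[0] == issuer and cand[0] not in seen:
                rest = search(cand, seen | {cand[0]})
                if rest is not None:
                    return [cert] + rest
        return None

    return search(leaf, {leaf[0]})
```

With one leaf and two intermediates for the same subject (one issued by each root), a verifier trusting only the old root and one trusting only the new root both find a valid chain from the same bundle.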
So you might do something like provide a \"chain\" of the form (leaf, oldintermediate, newintermediate), with the idea that the client could verify using whichever one makes it happy.\nNAME take a look at the comment I left on the PR. I can see your point about starting over as being valid, but one thing that also caught my eye in the TLS RFC was language around allowing a root CA to be left out of the chain since those could potentially already exist in a local trust store.\nI see. Thanks NAME for the clarification. Maybe we can phrase it in such a way that whatever the credential includes (added intermediary certificates or left-out root certificates) in the end the local client MUST be able to create a continuous certificate chain from the leaf to the root. Do we need to define how the client does that? Since we're already pushing everything else related to authentication into the AS, I'm inclined to say no."} {"_id":"q-en-multipart-ct-f3041d71e0a20de680339b090ae14c354ac857d0166e32fb533bf6c9aeff7eb0","text":"…oundaries defined by the CDDL\nWe should add something like: \" within the syntactical boundaries defined by the CDDL in Figure 1.\""} {"_id":"q-en-multipath-fa676345baae2c66e20b61d5eed74518130e63633b8c531bbd5aba681d6e4490","text":"This derives from the discussions on issue\nThe commit applies the changes suggested in the previous reviews. With those changes, I think we have a good basis for discussing a unified solution in the working group.\nThanks NAME for the review. I have applied the editorial changes. You are asking for something more regarding the \"specific logic\" required when supporting zero-length CID. There are in fact two pieces to that: logic at the receiver, i.e., the node that chose to receive packets with zero-length CID; and logic at the sender, i.e., the node that accepts to send data on multiple paths towards a node that uses zero-length CID. 
This is explained in detail in the section \"Using Zero-Length connection ID\", so I guess what we need is a reference to that section.\nComment by NAME (on PR ): I think we should expand this section a bit, as we found getting the delay right was non-trivial. QUIC time-stamp would work for sure. But if one decides not to use QUIC timestamp, we probably want to explain what is needed to be done here.\nI have submitted PR for this one.\nClosing as got merged.\nIn section 3.1, we wrote: When the multipath option is negotiated, clients that want to use an additional path MUST first initiate the Address Validation procedure with PATHCHALLENGE and PATHRESPONSE frames described in of ]. After receiving packets from the client on the new paths, the servers MAY in turn attempt to validate these paths using the same mechanisms. With multipath, we cannot simply write that the server MAY validate the new path. There is a risk of a malicious client that operates as follows:\nclient creates path from its own address, A. Path is validated by server\nclient starts long download from server\nclient creates a second path to server using a spoofed address, B\nclient starts address validation for B but does not care about the response from the server\nclient stops acknowledging packets over the first path to force the server to move packets to the second one and then opportunistically acknowledges packets that should have been sent by the server over that second path (SPNS would probably make such an attack simpler than MPNS)\nWe probably need to start working on the security considerations section of the draft.\nThis is already covered in the QUIC threat model. The server is supposed to validate new paths by sending a \"path challenge\" on that path, then waiting for a challenge response. If the client spoofs a path, it will never receive the challenge, and thus the server will never receive the response and will not validate the path. See section 8.2 of RFC 9000, path validation.
Security section 21.1.3. Connection Migration states that \"Path validation () establishes that a peer is both willing and able to receive packets sent on a particular path.\"\nSo yes, the current text should be \"the servers MUST in turn attempt to validate these paths using the same mechanisms, as specified in {{ Section 8.2 of RFC9000 }}.\"\nNo, this should not be a MUST. It is true that the path MUST be validated before it can be used. However, just because the server receives a new packet on a path, that doesn't automatically mean that the server has to start path validation. It could e.g. also decide to simply discard that packet.\nActually, this is also not right. This is what RFC9000 says (section 9): So I guess we should maybe not use normative language here but refer to section 9 in RFC9000 instead...?\nJust submitted PR for this issue.\nThis draft initially originates from a merging effort of previous Multipath proposals. While many points were found to be common between them, there remains one design point that still requires consensus: the number of packet number spaces that a Multipath QUIC connection should support (i.e., one for the whole connection vs. one per path). The current draft enables experimentation with both variants, but in the final version we will certainly need to choose one of the versions.\nThe main issues mentioned so far:\nEfficiency. One number space per path should be more efficient, because implementations can directly reuse the loss-recovery logic specified for QUIC.\nACK size. Single space leads to lots of out of order delivery, which causes ACK sizes to grow.\nSimplicity of code. Single space can be implemented without adding many new code paths to uni-path QUIC.\nShorter header. Single space works well with NULL length CID.\nThe picoquic implementation shows that \"efficiency\" and \"ack size\" issues of single space implementations can be mitigated.
However, that required significant improvements in the code:\nto obtain good loss efficiency, picoquic remembers not just the path on which a packet was sent, but also the order of the packet on this path, i.e. a \"virtual sequence number\". The loss detection logic then operates on that virtual sequence number.\nto contain ACK size, picoquic implements a prioritization logic to select the most important ranges in an ACK, avoid acking the same range more than 3 or 4 times, and keep knowledge of already acknowledged ranges so range coalescing works.\nI think these improvements are good in general, and I will keep them in the implementations whether we go for single space or not. The virtual sequence number is for example useful if the CID changes for reasons not related to path changes in multiple number space variants. It is also useful in unipath variants to avoid interference between sequence numbers used in probes and the RACK logic. The ACK size improvements do reduce the size of ACKs in presence of out of order delivery, e.g., if the network is doing some kind of internal load balancing. On the other hand, the improvements are somewhat complex, would need to be described in separate drafts, and pretty much contradict the \"simplicity of code\" argument. So we are left with the \"Null length CID\" issue. I see four cases:\nClient CID | Server CID | Priority\n-- | -- | --\nlong | long | Used by many implementations\nNULL | long | Preferred configuration of many big deployments\nlong | NULL | Rarely used, server load balancing does not work\nNULL | NULL | Only mentioned in some P2P deployments\nThe point here is that it is somewhat hard to deploy a large server with NULL CID and use server load balancing. This configuration is mostly favored by planned P2P deployments.\nThe big debate is for the configuration with NULL CID on client, long CID on server. The packets from server to client do not carry a CID, and carry only the last bytes of the sequence number.
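Inferring the full number from the truncated bytes on the wire is the standard QUIC packet number decoding; below is a Python transcription of the sample algorithm in RFC 9000, Appendix A.3 (with a single number space, largest_pn is simply the largest packet number authenticated so far from that peer):

```python
def decode_packet_number(largest_pn, truncated_pn, pn_nbits):
    # Sample algorithm from RFC 9000, Appendix A.3: among all packet
    # numbers whose low bits match the truncated value, pick the one
    # closest to the next expected packet number.
    expected_pn = largest_pn + 1
    pn_win = 1 << pn_nbits
    pn_hwin = pn_win // 2
    pn_mask = pn_win - 1
    candidate_pn = (expected_pn & ~pn_mask) | truncated_pn
    if candidate_pn <= expected_pn - pn_hwin and candidate_pn < (1 << 62) - pn_win:
        return candidate_pn + pn_win
    if candidate_pn > expected_pn + pn_hwin and candidate_pn >= pn_win:
        return candidate_pn - pn_win
    return candidate_pn
```

The first assertion below is the worked example from RFC 9000; the second shows the window wrapping upward across a power-of-two boundary.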
The client will need some logic to infer the full sequence number before decrypting the packet. The client could maybe use the incoming 5 tuple as part of the logic, but it is not obvious. It is much simpler to assume a direct map from destination CID to number space. That means, if a peer uses a NULL CID, all packets sent to that peer are in the same number space.\nRevised table:\nClient CID | Server CID | What\n-- | -- | --\nlong | long | Multiple number spaces\nNULL | long | Multiple number spaces on client side (one per CID), single space on server side\nlong | NULL | Multiple number spaces on server side (one per CID), single space on client side\nNULL | NULL | Single number space on each side\nIf a node advertises both NULL CID and multipath support, it SHOULD have logic to contain the size of ACKs. If a node engages in multipath with a NULL CID peer, it SHOULD have special logic to make loss recovery work well.\nI like the proposal of the \"unified solution\". I think the elegance lies in the fact that it allows us to automatically cover all four cases listed above. The previous dilemma for me was that on one hand we have some use cases where we need to support more than two paths and separate PN makes the job easier, but on the other hand, I think we should not ignore the NULL CID use cases as they are also important. Now, with this proposal, a big part of the problem is solved. The rest of the challenge is to make sure single PN remains efficient in terms of ACK and loss recovery. On that part, we plan to do an A/B test and would love to share the results when we get them.
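The direct map from destination CID to number space sketched in this thread fits in a few lines. A hypothetical illustration — the class and method names are mine, and a CID is modelled as its (sequence number, bytes) pair:

```python
class PnSpaces:
    """Sketch of the unified rule: tie packet number spaces to the
    destination CID sequence number; a zero-length CID collapses
    everything into space 0."""

    def __init__(self):
        self.largest_sent = {}  # space id -> largest packet number sent

    def space_for(self, dcid):
        seq_no, cid_bytes = dcid
        # Zero-length CID -> single default space 0; otherwise one
        # space per destination CID sequence number.
        return 0 if len(cid_bytes) == 0 else seq_no

    def next_pn(self, dcid):
        space = self.space_for(dcid)
        pn = self.largest_sent.get(space, -1) + 1
        self.largest_sent[space] = pn
        return space, pn
```

This automatically yields the four cases in the table above: each non-zero-length CID gets its own space, while a peer using a NULL CID receives everything in one space.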
There is one more problem, as pointed out in issue , when we want to take hardware offloads into account. In such a case, we may still need single PN for long server CID. However, if hardware supports nonce modification, this problem can be addressed with the proposed \"unified solution\".\nIf the API does not support 96 bit sequence numbers, it should always be possible to create an encryption context per number space, using Key=current key and ID = current-ID + CID sequence number. Of course, that forces creation of multiple contexts, and that makes key rotation a bit harder to manage. But still, that should be possible.\nThanks for the summary NAME I think one point is missing in your list which is related to issue . Use of a single packet number space might not support ECN. Regarding the unified solution: I think what you actually say is that we would specify both solutions and everybody would need to implement both logics. At least regarding the \"simplicity of code\" argument, that would be the worst choice. If we can make the multiple packet number spaces solution work with one-sided CIDs, I'm tending toward such an approach. Use of multiple packet number spaces avoids ambiguity/needed \"smartness\" in ACK handling and packet scheduling which, as you say above, can make the implementation more complex and, moreover, wrong decisions may have a large impact on efficiency (both local processing and on-the-wire). I don't think we want to leave these things to each implementation individually.\nThe summary Christian made above about design comparison sounds indeed quite accurate. Besides ECN, my other concern about single packet number is that it requires cleverness from the receiver side if you want to be performant in some scenarios. At the sender-side, you need to consider a path-relative packet number threshold instead of an absolute one to avoid spurious losses.
Just a point I think we did not mention yet is that there can be some interactions between Multipath out-of-order number delivery and incoming packet number duplicate detection. This requires maintaining some state at the receiver side, as described by URL With a single packet number space, the receiver should take extra care when updating the \"minimum packet number below which all packets are immediately dropped\". Otherwise, in the presence of paths with very different latencies, the receiver might end up discarding packets from a (slower) path. I'm also preferring the multiple packet number spaces solution for the above reasons. I'm not against thinking about a \"unified\" solution (the proposal sounds interesting), but I wonder how much complexity this would add compared to requiring end hosts to use one-byte CIDs.\nI think the issue is not really so much \"single vs multiple number space\" as \"support for multipath and NULL CID\". As noted by Mirja, there is an implementation cost there. My take is:\nIf a node uses NULL CID and proposes support of multipath, then that node MUST implement code to deal with the size of acknowledgements.\nIf a node sees a peer using NULL CID and supporting multipath, then that node MUST either use only one path at a time or implement code to deal with the impact on loss detection due to out of order arrivals, and the impact on congestion control including ambiguity of ECN signals.\nThen add sections on what it means to deal with the size of acknowledgements, out of order arrivals, and congestion control. I think this approach ends up with the correct compromises:\nIt keeps the main case simple. Nodes that rely on multipath just use long CID and one number space per CID.\nNodes that use long CID never have to implement the ACK length minimization algorithms.
Nodes that see a peer using NULL CID have a variety of implementation choices, e.g., sending on just one path at a time, sending on mostly one path at a time in \"make or break\" transitions, or implementing sophisticated algorithms. If we do that, we can get rid of the single vs multiple space discussion, and end up with a single solution addressing all use cases.\nLooking at all the discussions here and in other issues such as ECN, I think that we should try to write two different versions of section 7:\none covering all the aspects of single packet number space (RTT estimation, retransmissions, ECN, handling of CIDs, ...)\none covering all the aspects of multiple packet number spaces (RTT estimation, retransmissions, ECN, handling of CIDs, ...)\nThere would be some overlap between these two sections and also some differences that would become clear as we specify them. At the end of this writing, we'll know whether it is possible to support/unify both or we need to recommend a single one. The other parts of the document are almost independent of that and can evolve in parallel with these two sections. However, I don't think that such a change would be possible by Monday.\nI don't think we should rush changes before we have agreed on the final vision.\nIt might be helpful to have these options as PRs (without merging them), so people can understand all details of each approach.\nI agree with NAME that supporting multi-path with null CIDs is a more fundamental issue than the efficiency comparison between single PN and separate PN, as it would ultimately impact the application scope of multipath QUIC. But indeed, we might want to implement the proposed solution first and then decide if we want to adopt such a unified approach.\nSince NAME prodded me, we now have a PR for the \"unified\" proposal.\nI totally agree with Christian that the issue is about \"support for multipath and NULL CID\", and the solution that Christian suggested looks really great!
It both takes advantage of multiple spaces and supports NULL CID users without affecting the efficiency of ACK arrangements. Besides, the solution is more convenient for implementations, because if both endpoints use non-zero-length CIDs, endpoints only need to support multiple spaces, and if one of the endpoints uses a NULL CID, it could use a single PN space in one direction and could support NULL CID and multipath at the same time.\nA very high-level summary of the IETF-113 discussion is that there is interest and likely support for the unified solution (review minutes for further details)."} {"_id":"q-en-multipath-b626e5afe475cf20c3f314ade864d8a56b9173526d859beee71904b5ceea66b1","text":"This PR is to resolve issue . The description of the path-closing figure is rephrased for better clarity. The pathidtype field in the PATHABANDON frame is also added in the figure to give an idea of how the type is used. Moreover, one case for single PN is added (where the client receives zero-length CID while the server receives long CID). According to previous discussion, in the single PN case, only the client needs to send out a PATHABANDON. The server SHOULD stop sending new data on the path indicated by the PATHABANDON frame after receiving it. However, the client may want to repeat the PATHABANDON frame if it sees the server continuing to send data. It is optional for the server to respond with a PATHABANDON after it receives a PATHABANDON frame from the client.\nThere is one part I am afraid people might feel a little bit hard to follow: the three types of path identifiers in the PATH_STATUS frame. Those three types of path identifiers were introduced in order to be compatible with zero-length CID use cases, so I think we might need two additional example figures (like figure 3 in the current draft), one for (non-zero length CID, zero-length CID) and the other one for (zero-length CID & zero-length CID). Do you think that is a good idea?
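The path-closing flow discussed in this thread (send PATH_ABANDON, keep per-path state for a short draining period, then release it) maps to a small per-path state machine. A hypothetical sketch — the names are mine, and the 3x PTO draining delay mirrors the draining pattern RFC 9000 uses in similar situations:

```python
import enum

class PathState(enum.Enum):
    ACTIVE = 1
    CLOSING = 2  # PATH_ABANDON sent/received, still accepting stragglers
    CLOSED = 3   # per-path state may be freed

class ClosingPath:
    def __init__(self, pto):
        self.pto = pto
        self.state = PathState.ACTIVE
        self.drain_deadline = None

    def on_abandon(self, now):
        # Enter draining: keep path stats around long enough for
        # in-flight packets (and a repeated PATH_ABANDON) to arrive.
        self.state = PathState.CLOSING
        self.drain_deadline = now + 3 * self.pto

    def maybe_free(self, now):
        # Returns True once the draining period has elapsed and the
        # per-path resources can safely be released.
        if self.state is PathState.CLOSING and now >= self.drain_deadline:
            self.state = PathState.CLOSED
            return True
        return False
```

A draining period on this order keeps the "releasing relevant resources" interval well below the 10-30 second idle timeout the thread flags as too long.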
I can work on those figures if it doesn't complicate the flow of the draft.\nThis might indeed be useful in the \"Example\" section.\nYes, please create a PR!\nNAME I am editing the path closing figures with zero-length CID and single PN. Is it similar to what you have in mind? {: #fig-example-path-close2 title=\"Example of closing a path when the server chooses to use zero-length CID\"} {: #fig-example-path-close3 title=\"Example of closing a path when both client and server choose to use zero-length CIDs.\"}\nIn the case of single number space, we use ACK, not ACKMP. But in fact you do not need to draw the ACKs there; there is no special processing of ACKs required. Also, there is no need to wait for an idle timeout, which typically is 10 to 30 seconds. That's too large. Do we really need to send \"abandon path\" in both directions? If I think about it, the real concern is that the client will send a PATHABANDON message, but some time later, due to differences in delay, etc., the server receives data on the same four-tuple. How does the server know whether it should create a new path or not? In the single space case, the logical test may be to check the sequence numbers of the packet. If the sequence number of the new packet is greater than that of the path_abandon, then the client is reactivating a path that was abandoned previously. Maybe the client connected again to the same network, after some disconnected interval. The server can start sending on the reactivated path. If the packet number is lower than that of the path abandon, then the server should not reactivate the path. In the multiple space case, if the CID is defined, there is no ambiguity.\nACK works for me. I think we might also want to add a sentence in the Sec.
of \"one packet number space\" explicitly specifying that \"for single PN, use ACK instead of ACKMP\", and change some texts in the ACKMP frame section that are related to zero-length CID (the current draft says \"if the endpoint receives 1-RTT packets with 0-length Connection ID, it SHOULD use Packet Number Space Identifier 0 in ACKMP frames\". ). I think in single PN, sending \"abandon path\" in both directions is helpful. When the client sends \"PATHABANDON\", it promises not to send non-probing packets in the current activity cycle. It could send probing packets to reactivate the path a moment later if the client thinks the network environment has changed. After sending out \"PATHABANDON\", the client should still be able to receive packets from the server for a while on that path, because there may be in-flight packets from the server on that path. Only when the client sees the server's \"PATHABANDON\" does it know that no more data from the server is to be expected, except for some out-of-order packets. So after receiving \"PATHABANDON\", the endpoint can start preparing to free memory and clear path stats allocated for that path. The time interval from seeing \"PATHABANDON\" to actually releasing relevant resources is going to be an important parameter, and yes, 10-30 seconds is too long. What would be a more appropriate number for this interval? Does it imply that we need to track the packet number of the PATH_ABANDON for a path that is closed? When a path is in a closed state, the server may reject any newly received packets unless it is a probing packet (back to our path initialization process). In this way, it should be OK to forget any info related to that path in the last activity cycle.\nIn most similar situations, RFC 9000 uses a delay of 3xRTO. Note that in the \"simple\" case, the client can receive packets from the server on any tuple. There are no resources allocated per path for receiving packets.
There will be local resources allocated for sending packets, congestion context for example. But not for receiving: all packets can be decrypted and processed. If the client knows that it will not send, it can discard the resources allocated for sending without risking a side effect.\nI changed the timeout to 3xRTO. As the problem is similar to FIN-ACK&FIN-ACK when TCP closes, I kept the ACK in the graph (because the pathidtype=2 refers to the path where the packet is traveling, we want to make sure it is acked before we close the path). Regarding the resource, I think if we keep a unified receive record, which is not per-path, yes, we can discard the sending resources. But if we have some path stats (e.g. how many packets received in total in this current activity cycle), we need to keep these data structures until we are sure that the peer is not going to send more packets on that path. Indeed, it depends on the implementation, but I feel it is generally helpful to have PATH_ABANDON sent in both directions. What do you think? {: #fig-example-path-close2 title=\"Example of closing a path when the server chooses to use zero-length CID\"} {: #fig-example-path-close3 title=\"Example of closing a path when both client and server choose to use zero-length CIDs.\"}\nI submitted PR . Let's first have something that we can start modifying.\naddressed by PR\nI think there is still a mismatch between the proposed examples and the definition of the pathidtype in Section 11.1 (definition of PATH_ABANDON frame)."} {"_id":"q-en-multipath-2bd6800311ef13ef411fd64953778859f4f44e724ab99673d680ed426008162b","text":"Fix issue\nIn section 3.1, we wrote: When the multipath option is negotiated, clients that want to use an additional path MUST first initiate the Address Validation procedure with PATHCHALLENGE and PATHRESPONSE frames described in of ]. After receiving packets from the client on the new paths, the servers MAY in turn attempt to validate these paths using the same mechanisms.
With multipath, we cannot simply write that the server MAY validate the new path. There is a risk of a malicious client that operates as follows: client creates path from its own address, A. Path is validate by server client starts long download from server client creates a second path to server using a spoofed address, B client starts address validation for B but does not care about the response from the server client stops acknowledging packets over the first path to force the server to move packets to the second one and then opportunistically acknowledges packets that should have been sent by the server over that second packet (SPNS would probably make such an attack simpler than MPNS) We probably need to start working on the security considerations section of the draft.\nThis is already covered in the QUIC threat model. The server is supposed to validate new paths by sending a \"path challenge\" on that path, then waiting for a challenge response. If the client spoofs a path, it will never receive the challenge, and thus the server will never receive the response and will not validate the path. See section 8.2 of RFC 9000, path validation. Security section 21.1.3. Connection Migration states that \"Path validation () establishes that a peer is both willing and able to receive packets sent on a particular path.\"\nSo yes, the current text should be \"the servers MUST in turn attempt to validate these paths using the same mechanisms, as specified in {{ Section 8.2 of RFC9000 }}.\"\nNo, this should not be a MUST. It is true that the path MUST be validated before it can be used. However, just because the serve receives a new packet on a path, that doesn't automatically mean that the server has to start path validation. It could e.g. also decide to simply withdraw that packet.\nActually, this is also not right. 
This is what RFC9000 says (section 9): So I guess we should maybe not use normative language here but refer to section 9 in RFC9000 instead...?\nJust submitted PR for this issue."} {"_id":"q-en-multipath-a5c441deed2da8d23cb0197fe0ca7158f506db44715d8e8518c5a44fde67abff","text":"I believe we can merge this now as it is. If we want to add more guidance later we should open a separate issue and another PR.\nThis issue came up during the discussion of issue . We say that ACK or ACK_MP frames can be sent over any path, however, maybe we should discuss some guidance and trade-offs. E.g. sending on the same path as the packets received also helps RTT measurements; using the shortest-delay path might have some performance benefits...?\nI agree. We should add some guidance here. We did some experiments regarding this issue. Our finding is that returning ACKs on the shortest path is helpful. But the improvement is only significant when the path RTT ratio is larger than 4:1."} {"_id":"q-en-multipath-bc6d58ca5ba771a100e9a6761f1d0f9f18812b6485a82b97cb8069869d5dd61d","text":"Fix issue\nIn section 3, we have However, the error code does not exist (anymore), and I'm not sure we need such a specific error code anyway. Why not raise a in such a case?\nIt's reasonable to use TRANSPORT_PARAMETER_ERROR. Submitted PR for this issue."} {"_id":"q-en-multipath-85775c2c1bb65462a4df1b4e1b168e0edfcb8440ee22d17cc0ff3f52275ec872","text":"as this is already stated normatively in RFC9000.\nCurrently the draft says: However, we should not use normative language in this draft but just point to RFC9000 instead as things are defined normatively there already. Btw.
this reference should point to section 13.3 (\"Upon detecting losses, a sender MUST take appropriate congestion control action.\") and 9.4 (\"Packets sent on the old path MUST NOT contribute to congestion control or RTT estimation for the new path.\") of RFC9000 specifically and maybe also to RFC9002 directly.\nLGTM\nLGTM"} {"_id":"q-en-multipath-86ee771a8ecf97de1663331f8b9b92b883023bc0588ac8c3999df849b4a72935","text":"I'm not sure I understand why the last part of the sentence is an issue. The use of zero-length connection IDs is problematic anyway in setups with NATs, for example.\nI agree that in many deployments you want at least one CID from the client to server. Multiple packet number spaces as currently defined, however, require CIDs in both directions; there is a separate issue to discuss that. The next sentence also further explains that there might be other deployments though, e.g. in constrained networks, that might also benefit from saving the bits of the CID. This one sentence is of course only a very brief summary of the pros and cons of the different approaches and more discussion is needed in the group. So I don't expect to have this part in the draft like this when we finally publish it as an RFC. Is there anything in the issue that we need to address now? Or can we close this issue?\nThanks, Mirja. That's what I expected, but I do still think more elaboration is needed for this text to be useful. I would suggest making this modification and leaving the discussion of pros/cons to be adequately documented in Section 7: OLD: This proposal supports the negotiation of either the use of one packet number space for all paths or the use of separate packet number spaces per path. While separate packet number spaces allow for more efficient ACK encoding, especially when paths have highly different latencies, this approach requires the use of a connection ID.
Therefore use of a single number space can be beneficial in highly constrained networks that do not benefit from exposing the connection ID in the header. While both approaches are supported by the specification in this version of the document, the intention for the final publication of a multipath extension for QUIC is to choose one option in order to avoid incompatibility. More evaluation and implementation experience is needed to select one approach before final publication. Some discussion about pros and cons can be found here: URL URL NEW: This proposal supports the negotiation of either the use of one packet number space for all paths or the use of separate packet number spaces per path. While both approaches are supported by the specification in this version of the document, the intention for the final publication of a multipath extension for QUIC is to choose one option in order to avoid incompatibility. More evaluation and implementation experience is needed to select one approach before final publication. Some discussion about pros and cons can be found here: URL URL\nYou are suggesting to strike out the sentences that try to summarize the differences, \"While separate packet number spaces allow for more efficient ACK encoding, especially when paths have highly different latencies, this approach requires the use of a connectionID. Therefore use of a single number space can be beneficial in highly constrained networks that do not benefit from exposing the connection ID in the header.\" I think that's a good idea. The sentences are a short summary of the arguments presented in the powerpoint, which are not limited to \"highly constrained networks\". For example, the Chrome browser, by default, uses NULL CID. I may be placing words in the mouth of Google developers, but I believe their goal is to reduce the overhead of the QUIC header compared to TCP/TLS. 
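As a rough illustration of the header-overhead point raised here (my own back-of-the-envelope arithmetic, not figures from the draft or the discussion): the QUIC v1 short header is one flags byte, the destination connection ID, and a 1-4 byte packet number, so a zero-length CID saves the full CID length on every short-header packet.

```python
# Illustrative only: per-packet size of the QUIC v1 short header
# (RFC 9000, Section 17.3): 1 flags byte + destination connection ID
# (0-20 bytes) + packet number (1-4 bytes). Payload protection overhead
# is the same either way, so the CID length is the whole delta.
def short_header_bytes(cid_len: int, pn_len: int = 2) -> int:
    if not (0 <= cid_len <= 20 and 1 <= pn_len <= 4):
        raise ValueError("invalid header field length")
    return 1 + cid_len + pn_len

# e.g. an 8-byte CID costs 8 extra bytes on every short-header packet
delta = short_header_bytes(8) - short_header_bytes(0)
```

Whether those few bytes per packet matter is exactly the deployment trade-off being debated in this thread.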
This is related to a general quest for performance, not to network constraints."} {"_id":"q-en-multipath-9516913c9bf0a5ce1091b37af29bb82786ff0eaca6a5e9f77be3ffd7826091bd","text":"NAME you wrote: That sounds like you want to restrict the multipath extension to only one server address (but potentially different paths from different client addresses)...? I think this restriction is unnecessary. Yes, we don't specify a way to learn server addresses, but that doesn't mean that the client doesn't know (e.g. based on higher-layer information) multiple addresses of the server. Why should we restrict that?\nHowever, can we maybe merge this PR as it is for now and you open a separate issue for your additional point NAME ?\nMirja, That was my understanding of the sentences in the PR. Maybe I got it wrong. I think that we should discuss the role of preferred address in section 3 where we explain how paths can be opened.\nFine, let's do it this way.\nRFC9000 defines a list of transport parameters. 0x01 0x02 0x03 0x04 0x05 0x06 0x07 0x08 0x09 0x0a 0x0b 0x0c 0x0d 0x0e 0x0f 0x10 [QUICWG] These transport parameters apply to all the paths that compose a Multipath QUIC connection. We should probably confirm this in the mpquic draft to avoid any misinterpretation (e.g. for max_idle_timeout or ack delays). There are two transport parameters that could warrant some discussion: disable_active_migration : This transport parameter is incompatible with enable_multipath. An endpoint that receives both this transport parameter and a non-zero enable_multipath parameter MUST treat this as a connection error of type MP_CONNECTION_ERROR and close the connection. preferred_address: RFC9000, section 9.2.1 contains the following: Once the handshake is confirmed, the client SHOULD select one of the two addresses provided by the server and initiate path validation (see Section 8.2).
A client constructs packets using any previously unused active connection ID, taken from either the preferred_address transport parameter or a NEW_CONNECTION_ID frame. As soon as path validation succeeds, the client SHOULD begin sending all future packets to the new server address using the new connection ID and discontinue use of the old server address. If path validation fails, the client MUST continue sending all future packets to the server's original IP address. For multipath, do we want to be stronger than SHOULD select one? Can a client create new paths towards the initial address or only to one of the preferred addresses? Should the client use both addresses (IPv4 and IPv6) if non-empty?\nGood point about \"disable_active_migration\". I would read it as also disabling multipath. We do not need to say that \"transport parameters apply to all the paths that compose a Multipath QUIC connection\". We are already inheriting that property from compatibility with path migration in RFC 9000. Multipath support is an extension to QUIC v1, not a new protocol, so we do not need to specify all the defaults. Same goes for the preferred address -- meaning does not change just because the nodes negotiate multipath. I would avoid debating \"which server addresses the client can try\" in the base multipath document. If we want the server to signal plausible addresses to the client, let's have a specific document for that.\nI would maybe add a sentence that this draft does not change the meaning of preferred_address and that whether that address could also be used for multipath is out of scope for this extension.\nRegarding disable_active_migration: this cannot be a connection error, as you can also send preferred_address, and after migration to the preferred address you can still use migration or multipath (if negotiated). I think the definition in RFC9000 is still valid; it means you can't send any packets from any address other than the one used in the handshake.
Which effectively means disabling multipath as well, as Christian noted (if not moved to another preferred address). I guess we could explicitly note that this draft does not change the definition of disable_active_migration. For your reference this is what RFC9000 says:\nI created a PR in . Is this sufficient to address this issue or do we need to say more? NAME ?\nThe proposed PR sounds good to me, with the slight addition I mentioned."} {"_id":"q-en-oblivious-http-95a72363743c12351f6cdae762e12bfa22cf49bf02e3ff950e03891887158fdc","text":"This seems fine to me, but doesn't seem to include URL\nNAME after trying to work that text in it became clear (to me) that a better outcome was to modify the existing text we have about anonymity set reduction. So I went with that approach rather than open the can of consistency worms that NAME alluded to.\nLet's say HPKE public key A was valid from 2022-07-01 to 2022-07-02. The client encapsulates a request X using pubkey A, but buffers it because of network issues, in order to retry later. Now the key rotates and public key B is valid as of 2022-07-03. X encapsulated using pubkey A would not be decryptable with pubkey B. So if you receive X on 2022-07-03, then you know that X was generated between 2022-07-01 and 2022-07-02.\nI'm not inclined to add this. \"Don't do that\" seems pretty obvious to me, especially when we recommend that requests include a Date.\nWhile this advice seems fine, I agree that it's not really necessary. It also brings up more questions to me — what stops any layer below the OHTTP layer or an intermediary from arbitrarily buffering messages? Maybe just have a note that the key config that's used as well as explicit date markings can leak the time that the request was generated?\nThis seems like a reasonable way to deal with the problem. Liveness information can leak in a number of places, including through noting which public key was used. It might also leak via the date.
Maybe this can become part of the differential treatment section?\nThat sounds good to me\nI think that while this text is true, I believe that it is also unnecessary. Discussion of key liveness is dealt with in the key-consistency draft, so this is really about requests that are transmitted well after they are constructed. If we start to document that, there are lots of other things that start to come up.\nI agree, but the content I'm proposing here relates to how liveness affects differential treatment. We've already noted a few key ways in which this can happen, so it seems pretty harmless to me to include one more. I don't think we need to say anything more about how to deal with liveness or how it pertains to consistency. I'll propose some text to make this more clear. No more than one sentence."} {"_id":"q-en-oblivious-http-2fd5aa307b28b28e9738d0b9f7b5bfe8fcaad5f9ab3aeccb9835024ed7fec7bd","text":"Section 5 details HTTP usage. It seems it's technically correct but the section closes with a sequence of paragraphs that jump back and forth between statements about how an Oblivious Gateway Resource treats requests or responses. Perhaps these could be consolidated to present requests first consistently. I.e. swap the final two paragraphs around"} {"_id":"q-en-oblivious-http-f0788ff541648fd591d282c456b482b0a979d4e95072ff19451e3a4a9b22d26d","text":"We debated this at some length. This defines an optional-to-use signaling method for the gateway so that it can inform the client when the key configuration is not acceptable. Clients can use this as a prompt to refresh their configuration.\n(Originally posted by NAME in URL): Any lifetime information that comes with a key is helpful, but in practice is insufficient for clients to drive key updates. Key rotation may not occur on a fixed schedule and it may not occur synchronously (some gateway instances might not get the key at the right time).
This means that clients cannot distinguish between failures due to configuration mismatch or request tampering (or whatever else might cause the gateway to reply without an encapsulated response). If the client can't distinguish, then the client's recovery implementation is a lot more complicated. Does it always try to forcibly update its configuration? Does it implement some sort of backoff? Can a relay abuse the client's recovery implementation to somehow put the client into a state where it is stuck with an old configuration? It seems vastly simpler to just be clear that the configuration is wrong as a signal to the client that it's time to update. This was done in ODoH with a 401 response code. proposed a new problem statement, which I think is the right way to deal with this. Yes, there are potential privacy problems based on automated updates, but those depend on the client's consistency story and are therefore not something new a signal would introduce.\nThis looks great -- thanks!"} {"_id":"q-en-oblivious-http-ba19c9ba9773c7d2771e651cd004bcd2c3328b5f241051c9a0043f49892c1a48","text":"From NAME Can we (additionally or in replacement) point to the \"HPKE KEM Identifiers\" IANA registry created from this table instead? Same for this and the \"HPKE AEAD Identifiers\" registry."} {"_id":"q-en-oblivious-http-19e4e0b48540567450a759b43e8ce0232186314227e2b2a186ad282641f10944","text":"Why is it that media type registration is a journey of discovery each and every time? It's like the coastline paradox: the closer you get, the more detail resolves.
Also, each requirement makes sense in isolation, but if you need an entire RFC to explain it, maybe something has gone wrong.\nno strong feelings on this one."} {"_id":"q-en-oblivious-http-3895703f97207cc302790d5c19baac49d65052edd842f3fb42076a5ac5d8fe3c","text":"Corrected to mention response freshness, with a new citation; added a note about the timing of a retry revealing RTT; corrected figure title grammar\nNice addition on RTT timing"} {"_id":"q-en-oblivious-http-0225787543fa024ccebce5994e71de1b206ea268ef6fd6a23a592542c9ef75f6","text":"Following up from URL It seems like the key configuration lists a set of potential HPKE algorithms that the client can use. The section URL does say \"a selected combination of KDF, identified by kdf_id, and AEAD, identified by aead_id\", but it's easy to miss the word \"selected\" and even then it's not apparent as to what the selection is from. I think it could be made clearer that the client can choose from a list of HPKE algorithms specified in the Key Configuration."} {"_id":"q-en-oblivious-http-fe7f55ad63416ddb90e8b43556d024dadbe9e06727acbbe95e69a0dbd1458417","text":"and . Given (either assuming a fixed key ID, or looking up the KEM based on key ID), I think this is a nice simplification. It also authenticates more, even if redundant. (The KEM ID is already part of the HPKE key schedule.) I implemented this change and it's straightforward.\nCurrently, an encapsulated request carries a key ID which maps to a KEM key that determines the size of the encapsulated public key in a request. This means that code parsing a request needs to know the ID -> KEM mapping in order to parse things correctly, which is somewhat annoying. I propose we add the KEM ID to the request so that this mapping isn't needed to just parse the message. As a bonus, that would also fold the KEM ID explicitly into the AAD alongside the KDF ID and AEAD ID.\nWe currently length-prefix it, but we don't have to.
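The ID -> size mapping being debated here can be sketched as a small lookup keyed by KEM ID, with the Nenc values taken from the HPKE KEM table in RFC 9180; the helper name is hypothetical, not from any implementation mentioned in this thread:

```python
# Illustrative helper: map an HPKE KEM ID to the length in bytes of its
# encapsulated key (Nenc), per the RFC 9180 KEM table. A request parser
# needs this to know how many bytes of 'enc' to read when the KEM is not
# carried explicitly in the message.
HPKE_KEM_NENC = {
    0x0010: 65,   # DHKEM(P-256, HKDF-SHA256)
    0x0011: 97,   # DHKEM(P-384, HKDF-SHA512)
    0x0012: 133,  # DHKEM(P-521, HKDF-SHA512)
    0x0020: 32,   # DHKEM(X25519, HKDF-SHA256)
    0x0021: 56,   # DHKEM(X448, HKDF-SHA512)
}

def enc_size(kem_id: int) -> int:
    try:
        return HPKE_KEM_NENC[kem_id]
    except KeyError:
        raise ValueError(f"unknown KEM ID 0x{kem_id:04x}") from None
```

For the DHKEMs registered in RFC 9180, Nenc equals Npk, so the same table also gives the public key size discussed below.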
Think about this a little.\nBoth the public key and the encapsulated secret are fixed length. I think making these fixed length on the wire is good. However, it does require the parsing code to know the corresponding size, which depends on the KEM. This requires there to be a function to map the kem_id to the corresponding size(s). NAME do you have such an API in your implementation? I have one in mine, which would make this pretty straightforward to implement."} {"_id":"q-en-ops-drafts-d2093e6932b04d0a8aa9091df9f2ceef5af3b678e55782d7101c8c08b35c8ebd","text":"note: when merged.\nWe should probably make it clear what the application can do here (SPA for servers; maybe intentional migration for clients?) I'm not sure if migration is transparent to client applications or not.\nThe client application might have some knowledge of whether it's behind a NAT or not and as such might want to indicate whether to use a Conn ID or not. We can make that more explicit in the text.\nI did some small edits on the text in the connection migration section but NAME maybe you had something more in mind and want to propose more text?\nThis whole section should then also be moved to after the Conn ID section.\nI like the move of text to the manageability draft. If it were me, I'd move \"Server-Generated Connection ID\" as well, as I don't think apps need to worry about it, but it's not a big deal. I think adding text about server preferred address is probably more important for this draft.\nAdded some more text in\nNAME we think the current state of the draft addresses this issue; could you check and close this issue if so?"} {"_id":"q-en-ops-drafts-e62cda345f327488c697e2ac05272ccb9896a04c3ca1c89c414fe4dff6296e0b","text":"note: don't merge yet, still need to make the pass on applicability but working on the CI issue at the moment\ndone. mergeable if ok.\nThere are a few instances that might be read as normative statements. Is that really wise? Examples: cannot?
Maybe this isn't something that this document needs to say. It could just say that fallback to unencrypted TCP might not be a desirable outcome. There is also a bunch of \"MUST\"-level statements in the 0-RTT section, but I've proposed a rewrite that (I hope) eliminates them.\nFixed in : removed all 2119 language.\nNot clear if these documents should use normative language at all. The applicability statement also sometimes uses normative language to recommend the implementation of certain interfaces; I don't think this is the right document for that but as we also don't have another document, we should decide if we want to do that or not!\nManageability is a user's guide to the wire image, as such there's no sense to normative language (though I suppose some might like to add MUST NOT observe packets). Since applicability is in part a set of recommendations to application designers, normative language might make sense there... but the document is informational, so normative language here has to reference or be trivially derived from normative language in one of the base drafts. To simplify things, I'd just scrub all the SHOULDs, MAYs, and MUSTs."} {"_id":"q-en-ops-drafts-1704add8c4f3233c6e481ab51b545c12fbf6bb03539f7c2e09d5fd593ee0caa3","text":"addresses\n\"QUIC's linkability resistance ensures that a deliberate connection migration is accompanied by a change in the connection ID and necessitate that connection ID aware DDoS defense system must have the same information about connection IDs as the load balancer {{?I-D.ietf-quic-load-balancers}}. This may be complicated where mitigation and load balancing environments are logically separate.\" This is insufficient. Even the load balancer is only able to identify the end server, not the connection.\nOn a related note, the suggestion to use the \"first 8 bytes\" of a server's connection ID is somewhat strange. 
Why not observe the CIDL it's using in long headers, or just have common configuration?\nHow about if we just remove any wording about load balancers here? Like this maybe:\nOn the 8 bytes: I actually not sure where this comes from, however, I assume that the thinking was that using the first 8 bytes might be good enough to avoid collisions if you don't know the connection ID length. But I guess we can remove it or add some more explanation... any thoughts?\nIf you read the Retry Services section of QUIC-LB, you'll see that there is no dependence on the Connection ID encoding at all. It is a common format for Retry Tokens that allows servers to (at a minimum) extract the original CIDs necessary for verification. Certainly, no device should drop packets simply because the CID has changed, which is maybe what you're trying to say here? But I'm not sure that this is a DDoS thing.\nThe point is when a DDoS attack is on-going you want to make sure that on-going connections keep running while new connections are handled more carefully. So you have to be flow aware. However, maybe it's acceptable if connectivity breaks during a DDoS when the 5 tuple changes...? Otherwise you would need to be CID aware.\nThe VN section suggests that observers not use the version number to detect QUIC. Would it be futile to ask them to admit QUIC versions they don't recognize rather than drop them?\nProbably.\nNot sure. Depends on the function you implement. I guess there could be security functions that decide to only allow certain versions... not sure if that is justified but I don't think we can stop it.\nAny further opinions here?\nWe can stop it with greasing, I guess. IMO the draft should specify best practices that don't break extensibility and if people don't follow it it's on them. That's the approach I took in QUIC-LB.\nThe best practice is to not block any unknown traffic as blocking hinders evolution. However, I guess there might be security reasons why some will not do that. 
I guess allowing for certain versions of QUIC is better than blocking UDP entirely...?\n+1... I don't think that putting text in this document saying \"don't wholesale block versions of QUIC you don't understand\" will actually change the behavior of the boxes that will ship this behavior, since drop-unknown is taken as a security best practice. But I still think it's worth putting that advice in the document nonetheless, if only to have \"I told you so\" in an RFC. (I do still tend to think that we'll see only two behaviors emerge in the wild: (1) permit anything with a QUIC wire image since there's not much to grab on to there (and rely on the LB implementation to eat DoS hiding behind that wire image) and (2) block anything that smells like QUIC (because it is easy to implement in the default case and it forces fallback to TCP, which already has lots of running security monitoring code).)\nwill work up a PR for us to discuss\nIn manageability: \"Integrity Protection of the Wire Image {#wire-integrity} As soon as the cryptographic context is established, all information in the QUIC header, including information exposed in the packet header, is integrity protected. Further, information that was sent and exposed in handshake packets sent before the cryptographic context was established are validated later during the cryptographic handshake. Therefore, devices on path cannot alter any information or bits in QUIC packet headers, since alteration of header information will lead to a failed integrity check at the receiver, and can even lead to connection termination.\" This isn't precise. The Initial packet is not meaningfully integrity-protected, but the contents of the CRYPTO frame (and the connection IDs) are. An on-path attacker could, for instance, change the ClientHello's Packet Number to from '0' to '1' and then adjust the ACK frame from '1' to '0'. 
(Why would one do this?)\nIs the ACK frame in the case not encrypted...?\nThe Initial is decryptable by anyone, and so could be rewritten.\nOkay. What do we want to write here now?\nhow about \"URL alter any information or bits in QUIC packet headers, since alteration...\" changing to: \"URL alter any information or bits in QUIC packet headers, except specific parts of Initial packets, since alteration...\"\nWFM. I can change that as soon as your PR landed (or you add it to your PR)."} {"_id":"q-en-ops-drafts-7cabb60fae38a9366cf48434bc578cae1ff5b314653e9736bb65f524be9bda30","text":"Merged... with the neurons I have left on 22 December, this seems broadly true. I'm not sure we should be too prescriptive about load balancing in this doc, though -- in part because I suspect the challenge of handling DDoS attacks using QUIC traffic will lead to divergent innovation in the LB space in the near term...\nSo if the 5-tuple changes because of NAT, the connection ID might actually not change. So in case of DDoS you might want to look at the connection ID to at cover those cases. For DDoS you usually don't need a hard guarantee to track all flows (if you block a flow incorrectly, it has to reconnect), but where you can track the flow when the 5-tuple changes, it might actually help. That is what this paragraph is/was about.\n\"QUIC's linkability resistance ensures that a deliberate connection migration is accompanied by a change in the connection ID and necessitate that connection ID aware DDoS defense system must have the same information about connection IDs as the load balancer {{?I-D.ietf-quic-load-balancers}}. This may be complicated where mitigation and load balancing environments are logically separate.\" This is insufficient. Even the load balancer is only able to identify the end server, not the connection.\nOn a related note, the suggestion to use the \"first 8 bytes\" of a server's connection ID is somewhat strange. 
Why not observe the CIDL it's using in long headers, or just have common configuration?\nHow about if we just remove any wording about load balancers here? Like this maybe:\nOn the 8 bytes: I actually not sure where this comes from, however, I assume that the thinking was that using the first 8 bytes might be good enough to avoid collisions if you don't know the connection ID length. But I guess we can remove it or add some more explanation... any thoughts?\nIf you read the Retry Services section of QUIC-LB, you'll see that there is no dependence on the Connection ID encoding at all. It is a common format for Retry Tokens that allows servers to (at a minimum) extract the original CIDs necessary for verification. Certainly, no device should drop packets simply because the CID has changed, which is maybe what you're trying to say here? But I'm not sure that this is a DDoS thing.\nThe point is when a DDoS attack is on-going you want to make sure that on-going connections keep running while new connections are handled more carefully. So you have to be flow aware. However, maybe it's acceptable if connectivity breaks during a DDoS when the 5 tuple changes...? Otherwise you would need to be CID aware.\nThis PR removes the problems I cited in the issue. Ship it. I wonder if we should hit the point in the last sentence a little harder. If you want to support connection migration, there's nothing you can do in this space except the Retry Service stuff in QUIC-LB, is there?"} {"_id":"q-en-ops-drafts-ebb4ea8593cf83078a0e57eaf0b9eeaf0efafcea9e9efe8bfb2988b48b64c840","text":"This text in is incorrect. An endpoint can abruptly close (reset) egress data, but it can only request that its peer abruptly close ingress (using STOP_SENDING).\nis there a missing \"not\"? Do you mean: s/An endpoint can abruptly close (reset) egress data/An endpoint cannot abruptly close (reset) egress data/ ? 
Anyway, yes, we need to fix that!"} {"_id":"q-en-ops-drafts-e13897dffc7fc0b36603c029f17f9030c12f6a2129ebe98fe10d950211a44f3e","text":"Issue: “and use of the UDP port number corresponding to the TCP port already registered for the application is RECOMMENDED”. I agree with the intent, but here an RFC2119 keyword is used, but is this intended, and can this please cite the RFC that says this rather than just stating this.\nSimilarly noted in\nPR removes normative language and says \"usually ... appropriate\" instead. Is that okay? Or should the wording be stronger?"} {"_id":"q-en-ops-drafts-38ef3cfbcaf9f4e1dac88b5c507bbf1573e39a5948be4689164337e81ec28494","text":"The proposed text looks OK to me.\nThe ID said: “blocking or other interference by network elements such as firewalls” I think this language could be seen as having a little attitude that could be avoided. I suggest replacing “interference”, and instead say that this can change the forwarding policy. This also extends it to include action from QoS policies and service level agreements. “by network elements such as firewalls that rely on the port number for application identification.” I’d prefer “use” to “rely” … because they might well use other heuristics in future if they do prove more useful to classify sessions/packets.\nIs the wording in better/good?"} {"_id":"q-en-ops-drafts-14f5190dd6065ea1e1f6fd45a8fe71caf828803373ad9e8f10c847cb6eb016a6","text":"process assumes that the set of versions that a server supports is fixed. This complicates the process for deploying new QUIC versions or disabling old versions when servers operate in clusters. 
This functionality was removed from -transport, and appears to refer to text in draft-ietf-quic-version-negotiation, but that draft isn't referenced at all.\nI can add a reference to draft-ietf-quic-version-negotiation and make clear that this is an extension and not part of the base spec."} {"_id":"q-en-ops-drafts-edd056268a86b242f0d6f95c682614fa8e112ce789285b85f1e752dd317b2bff","text":"Some section refs weren't linked to the sections; some objects were defined multiple ways; one reference wasn't actually cited in the text.\nThanks a lot!"} {"_id":"q-en-ops-drafts-c872c54cd98c006e1ff323190d891212f0db68c9387c49f477022f947c09ff9f","text":"This paragraph is redundant with the previous paragraph, however, not sure I removed the right one now..."} {"_id":"q-en-ops-drafts-356bc3dd8152a33a0adb1fa5d208079eef979aa4b7e933f7229bb683c116f769","text":"This goes to the end of \"Server Name Indication\" Some changes are attempts at clarifications, but I could have incorrectly clarified something, so feel free to reject changes."} {"_id":"q-en-ops-drafts-93c3288dc75d65e61b0a74c51928e86983020b39c6b37f5608cf42b11ea21b1d","text":"From PR : I'd also like to see something like : /In this section, we discuss those aspects of the QUIC transport protocol that/This section discusses those aspects of the QUIC transport protocol that/ /Here, we are concerned primarily with the unencrypted part of QUIC's wire image {{WIRE-IMAGE}}, which we define as the information available in the packet/This section is concerned primarily with the unencrypted part of QUIC's wire image {{WIRE-IMAGE}}, defined as the information available in the packet/ and /least four datagrams we'll call \"Client Initial\",/least four datagrams, labelled \"Client Initial\",/ and / 0-RTT data may also be seen in the Client Initial datagram, as shown in /When the client uses 0-RTT connection resumption, the Client Initial datagram can also carry 0-RTT data, as shown in/ and /typically around 10 packets/typically to around 10 
packets/ and /of a connection after one of one endpoint/of a connection after an endpoint/ I'd prefer more like: /Though the guidance there holds, a particularly unwise behavior admits a handful of UDP packets and then makes a decision as to whether or not to filter later packets in the same connection. QUIC applications are encouraged to fail over to TCP if early packets do/ ... and I'll close PR .\n“QUIC exposes some information to the network in the unencrypted part of the header, either before the encryption context is established or because the information is intended to be used by the network.” Could this benefit from a cross-reference to manageability, where this topic is also discussed from a network perspective?\nThree places where the text might avoid confusion when read in future years: 1) “Further downgrades to older TLS versions than used in QUIC, which is 1.3, “ … will it always be 1.3, or should this be “in QUIC [REF] is TLS 1.3” 2) “Currently, QUIC only provides fully reliable stream transmission, which means that prioritization of retransmissions will be beneficial in most cases, by filling in gaps and freeing up the flow control window. “ I'm not so happy with “currently” here: Is this the best way to say this? Might it be better to say this rather than even hinting that future QUIC might only be reliable: “When a QUIC endpoint provides fully reliable stream transmission, prioritization of retransmissions will be beneficial in most cases, by filling in gaps and freeing up the flow control window.
“ 3) I’d prefer to see a ref here, since this is something that the WG might later decide to work upon: “Currently QUIC only supports failover cases [Quic-Transport].”\nThat mega paragraph has a bunch of good concepts, but it's all run together into a single blob, which makes it a bit hard to follow."} {"_id":"q-en-ops-drafts-ac2fe523a1c99edc3d713b48d9448651b0b94f8cd2d01f07b16201e890fd8428","text":"Okay I made another pass on the text based on NAME comments, however, it didn't get any shorter but maybe it's slightly more clear now. Please re-review! (Also I moved some paragraphs around to improve the text flow; so the PR now looks bigger than it is. So we also need to update the section title now, which is \"Port Selection and Application Endpoint Discovery\"?)\nApplicability Sec 8: >Note that HTTP/3's ALPN token (\"h3\") identifies not only the version of the application protocol, but also the version of QUIC >itself; this approach allows unambiguous agreement between the endpoints on the protocol stack in use. I was going to start a discussion about this at IETF 111. I think we should think hard about this; if there are a lot of versions with small differences to the application, this could result in a ridiculous cloud of RFCs.\nIs there a change to make to this document in WGLC, or is this a note to be careful in the future?\nI personally don't like this linkage, and think the ALPN should just define the application. While linking the version does smooth out version negotiation, it can result in an explosion in the ALPN registry. I'll start a thread on the list about this, since I suppose it can't wait for the discussion at 111 that I was planning.\nThe point is that this was discussed already at length and the manageability document is just reflecting what has been decided for h3.
So if at all you'd need to raise an issue on the h3 draft but I guess that's also too late.\nI disagree that the text of the H3 draft mandates this: That is a long way from saying \"roll a new ALPN for each QUIC version\". Maybe everyone tacitly agrees we should roll a new ALPN, and I'm in the rough. But I'd like to confirm that.\nSee quicwg/base-drafts\nHow about rewording the sentence to reflect reality, roughly QUIC v1 (RFC9000) deferred defining a version negotiation mechanism. HTTP/3 (RFC-to-be-whatever) deferred deciding how it would map to other QUIC versions. HTTP/3 requires QUIC v1 and defines the ALPN token (\"h3\") to only apply to that version. This solves a versioning problem at the cost of future flexibility. Application protocol mappings written for QUIC v1 or other QUIC versions may be able to benefit from version negotiation mechanisms that did not exist during the development of HTTP/3 (RFC-to-be-whatever). Coupling QUIC version to ALPN tokens is one approach but not the only approach.\nI believe the text already says that this doesn't have to be applied by other applications. The cited text is really only meant to describe what was decided for h3 as an example. We can definitely reword this to reflect the consensus more clearly as proposed by Lucas.\nLucas, I would welcome your proposal as kicking the can down the road and not enshrining an approach that I think is bad. I would also like to continue the ALPN/versioning discussion, but it doesn't have to block the applicability draft.\nI can make it shorter. Maybe not as short as NAME would have it, but still shorter :)"} {"_id":"q-en-ops-drafts-e6f634b52e4772c2220f0ed7122529e50bdc5a7d38f7521bfe658e99a4f335b5","text":"hopefully addresses issue\nI don't think \"the same port number as would be used for the same application over TCP\" is necessarily general-purpose guidance.
I would argue for something more like:\nCopying from URL There's no requirement that servers use a particular UDP port for HTTP/QUIC or any other QUIC traffic. Using Alt-Svc, the server is able to pick and advertise any port number and a compliant client will handle it just fine. That's already the case, and isn't part of this issue. updates the HTTP draft to highlight this, increasing the odds that implementations will test and support that case. This issue is to track that it might actually be desirable from a privacy standpoint for servers to pick arbitrary port numbers and perhaps even rotate them periodically (though that requires coordination with their Alt-Svc advertisements and lifetimes, which could be challenging) in order to make it more difficult for a network observer to classify traffic (and therefore more difficult to ossify). On the other hand, as we're wrestling with in each of these privacy/manageability debates, removing easy network visibility into the likely protocol by using arbitrary port numbers means that middleboxes will probably resort to other means of attempting to identify protocols and potentially doing it badly, which could result in even worse ossification. (E.g. indexing into the TLS ClientHello to find the ALPN list, then panicking on a different handshake protocol.) There's further discussion on the original issue, but this belongs in the ops drafts, not the protocol drafts.\nthis kind-of-but-not-really-duplicates , which I'll go ahead and close.\nActually, it's an exact duplicate, given that it points to the same issue. ;-) My bad.\nNAME can you check if PR addresses this issue sufficiently? At least for the applicability doc...\nNow also PR for the manageability statement.\nand are a (ready for -03) treatment of this; however, we should consider if we want to say more, e.g. such as general guidance / recommendations for using ALPN in QUIC mappings, for instance? 
Keeping this open to track.\ncreated new issue for ALPN"} {"_id":"q-en-ops-drafts-93f72ab78c71d77c35a7574ab92fd94cec2c8bb35f98fdebb0061742adbf0378","text":"URL , can we be more specific here that the control signal we are talking about is a QUIC protocol control signal, not some other kind of control signal that the network might have? also here it is compared with TCP, should it also mention TCP with TLS?\nI added the term \"transport-layer\" in PR . Does that help? I don't think TLS needs to be mentioned here."} {"_id":"q-en-ops-drafts-c0c5df788bf6a08db63818d4a842e129836621d2ed065e2fe0c6577f8bc374a4","text":"hi NAME NAME NAME PTAnotherL, reviewed the thread and I do think an additional 9065 ref here would be helpful."} {"_id":"q-en-ops-drafts-69ee0dce4b3856f34254d98c829f756309a6dac682aa4260eba2376416f723c4","text":"Could the first 1200-byte QUIC packet be used to suspect a QUIC connection establishment? [MK] Yes, I think if you look at the first packet as a whole you can be reasonably sure that you are looking at a QUIC packet. I guess this is an assumption throughout the whole document, as section 2.4 says: \"New QUIC connections are established using a handshake, which is distinguishable on the wire and contains some information that can be passively observed.\" [MK] Do we need to say more? EV> hummm I guess not even if the text in 2.4 is rather generic while the point in 3.1 is really specific.
Perhaps adding which handshake properties are 'distinguishable' already in section 2.4?\nproposes to fix this with a forward reference."} {"_id":"q-en-ops-drafts-b9d12fccbf3e277e6fc0b03944a7164422502b5d0c890628ebae0f9b16b99bb9","text":"\"Here we assume a bidirectional observer\", let's also state that there is no section about a unidirectional observer?\nNot sure if we add something because I think it is okay to state assumptions...\n\"unless noted\" states that we'll note when something is unidirectionally observable, I'm reviewing the language now to see if we missed any..."} {"_id":"q-en-ops-drafts-3794fe95a854c2179a6ea732dd396779ddfc438413854b02ac7ea99bed04f701","text":"Tries to I resisted the urge to reference the QUIC-LB draft, which isn't adopted. The major substantive change is that I removed the prohibition on non-routing information in the CID. I'm happy to discuss this if people object.\nLGTM. will go ahead and land this, then fix up the lint issues.\nNAME missed this during the inconsistency, will fix in working copy\n(this is done)\nBTW, this was meant to address\nMoving URL to the ops repo.\nsee also -- in any case the text that is there now is not great.\nNAME can you please review the existing text (originally provided by NAME and state what else is needed? Please have also a look at issue\nThis is a bit OBE, but I think there might still be value in discussing how a server implementer should think about connection IDs. Fortunately, NAME has been doing a bunch of work on this in his load balancer draft. I would suggest that we leave this issue open, and either point to the LB draft or use text from it.\nfixed this\nWrite section to provide guidance; maybe in the same section as the guidance on port selection in general"} {"_id":"q-en-ops-drafts-edd515d6b30f8fb507ae16bf79205ba4a77779b0b8194faf6fa59514ef9fecc5","text":"Address .\nThe section on \"Passive network performance measurement and troubleshooting\" is probably a bit too pessimistic.
It's probably worth pointing out that one part of network troubleshooting -- \"find out which box to blame\" -- is substantially easier. Thanks to integrity protection, most middleboxes can drop, delay/rate limit, and corrupt, and that's about it. These are fairly easy behaviors to diagnose. I've had some conversations with customers afraid of not having their TCP headers anymore, and I believe this fear is largely misguided. If people agree this belongs in the draft, I can file a PR.\nI agree the wording is very pessimistic. However, I don't think your observation belongs in this section because it's about measurement of performance metrics. There is a sentence in the introduction: \"Given QUIC is an end-to-end transport protocol, all information in the protocol header, even that which can be inspected, is not meant to be mutable by the network, and is therefore integrity-protected to the extent possible.\" Maybe it makes sense to add another sentence right there to also emphasize the positive effect of that, like: \"While more limited information compared to TCP is visible to the network, integrity protection can also make troubleshooting easier as none of the nodes on the network path can modify the transport layer information.\"
The nit is to avoid ambiguity that might arise from \"transport\" -- one might read that QUIC integrity-protects UDP headers as well, since those are technically transport."} {"_id":"q-en-ops-drafts-6c0914863c12431ef23782dc6468ff4fcabe2f4952897e6311902d9561ec87d2","text":"NAME -- you mentioned wanting to have a more balanced view of zero RTT in the applicability draft: is this something like what you had in mind?\nI'll go ahead and merge this to have one less editor's note in the ietf-00 submission; file an issue if this doesn't address the \"thinking in 0rtt\" point adequately.\nJana noted at the interim in Paris that we should point out that applications need to be re-thought slightly to get the benefits of zero RTT. Add a little text to discuss this and why it's worth the effort, before we go straight into the dragons."} {"_id":"q-en-oscore-edhoc-076110c31f02a6a018b952ac2e471e12c8fd82c72218256fe2c957517d3ba447","text":"Clean-up after URL As the content-from-edhoc branch seems to be the most up-to-date one (in which the parts changed are introduced in the first place), this builds on , and only the latest commit is actually relevant.\nCompared to the original bstridentifier mapping, this is a different int-to-bstr mapping, but one that's most likely efficient to implement. I have not made any statement on the efficiencies, but some guidance we could give is that from a coding density point of view, numeric Cx should only be used with -24..23; for all other values, doing it in a bstr is just as good and saves a byte in OSCORE. (In fact we may want to make that normative, so that only the single-byte case needs to be implemented where this needs explicit implementation)."} {"_id":"q-en-oscore-48348007ea27481366678702a007b77a06370f5199e6248866f2184624d01805","text":"This prescribes explicitly one of the possible ways that the response is sent, and states the (hopefully) obvious about accepting new requests. 
The alternative to \"MUST NOT be encrypted\" would be \"MUST be encrypted and MUST carry an explicit IV\"; even both would work, but clients need to know what they need to be prepared for. I'd be fine with either of the two (or three) versions of the request encryption, but not stating anything might result in incompatible implementations where one only works this way and one only works that way. Being explicit about (to us) obvious things like not responding with an implied IV or not accepting Repeat as Class U (or in an unencrypted request that might then clear the way for any replays) might not be necessary from a specification point of view, but could avoid implementation errors I see very vividly before my inner eye.\nCome to think of it (from the considerations in URL), we have to go the other way and say that the response \"MUST be encrypted, and MUST use an explicit partial IV\". As we want to use this for HTTP too, this is by far the easier option; otherwise, we'd have to additionally specify how the option is transported in HTTP, so an HC proxy can convert it, and then the HC proxy would need to know whether it's transporting OSCORE (so it'd forward the Repeat option for the HTTP client to include it in a new encrypted request) or whether it's working for an OSCORE-unaware client that would not be equipped to answer the Repeat challenge (so it'd repeat the known fresh (because HTTP) request with a Repeat option by itself without bothering the client).\nDid I understand right, that you take back the proposal in the PR? Partial IV generated by the server is allowed in any response. With the resolution of issue : \"if Partial IV present in the message -> use that Partial IV\" Hence the replay draft can specify that replay syncing MUST use the server generated PIV, in which case it will be encrypted. I probably missed the issue?\nYes; I'll still need to update the PR to say what is desired.
We don't need to change anything about mechanism, but we should be explicit that the server MUST reply with an encrypted Repeat and a generated partial IV.\nSorry, still not clear: OSCORE now allows the use of server-generated Partial IV. The processing for Repeat is defined in the RRT-draft. Why does anything else need to change in the OSCORE draft?\nWrote the promised changes as it's much easier to talk with concrete text; the PR is now updated as 81d47927. This doesn't change any mechanism (at least not in implementations that are based on the current text and written in full awareness of all the relevant considerations). It is just a text change that prescribes what will be obvious to some implementors but not to others. (Especially since it's not so obvious as that I would have gotten it right in the first attempt.)\n(This issue takes some context from issue ). The current draft states that replay protection violation triggers a 5.03 with Max-Age 0 (Section 6.4). While this is a practical solution to issue (which is not mitigated by encrypting the code because FETCH is still safe), it could still be caught by a proxy that'd try to be smart about it (and would, AFAICT, be within its rights). I suggest that the response code be 4.01 Unauthorized with Max-Age=0. The semantics of Unauthorized are \"SHOULD NOT repeat the request without first improving its authentication status to the server\", and while it's a bit of a stretch to say that increasing the sequence number is sufficient to improve the authentication status, the Max-Age gives a good hint (\"Come on, try again right away, don't look for other keys\"), and clients can use the presence of the Max-Age option as a discriminator to tell \"Sorry, no such context\" apart from \"Retry with a new seqno\".\nWe also just identified this response code as not appropriate. The proposal is good, unless it results in overloading the semantics of Max-Age=0. 
There is already at least one requirement when Max-Age has to be zero: RFC7252, Section 5.6.1: \"If an origin server wishes to prevent caching, it MUST explicitly include a Max-Age Option with a value of zero seconds.\" Is there a setting where the server would not want the response 4.01 to be cached even though it is not a replay? (Obviously we have to differentiate between Inner and Outer 4.01, but is that enough?)\nI don't know. I briefly thought about using an Echo here, but an HTTP proxy would react to that as a CoAP proxy would to 5.03,Max-Age=0. (See comments 2ff). I did give that some thought, and considering the Echo option thought about whether the server should just reply with an encrypted ,Max-Age=0 (or Echo=short). I decided against suggesting that because responding to a possibly fraudulent message with an owned partial IV means that any adversary that has one valid-but-used message can thus drive any server into partial-IV exhaustion.\nBy the way, just FYI, Max-Age=0 with 5.03 means \"service unavailable, retry in 0 seconds\", which is wrong (and not what we want). (see: URL)\nThe current text where there MAY be a Max-Age of 0 both on \"4.01, Security context not found\" and on \"4.01, Security context not found\" means that a client can not know any more which is the case (and it should not look into the message). I see the reason for the MAY set Max-Age for most error cases in , but a client needs to keep those apart (otherwise it'd need to do something like try 3 times with a new PIV, and if the server still responds the same, assume that it doesn't know the KID).
I suggest that for those two responses, the Max-Age values are enforced as \"MUST NOT set a Max-Age option\" and \"MUST set the Max-Age option to 0\", or find any other discriminator, but an implementation should be able to tell them apart.\nGranted, with the current setup of POST and FETCH there are much fewer cases where replay protection errors can happen (they'd only legally happen under particular packet loss situations when starting an observation via a proxy that does not support observation but does do optimizations for safe requests), but I still think that there'd be more clarity if there were a discriminant between the two error cases.\nThe document, as of now, specifies that Partial IV SHOULD NOT be present in responses without observe. (This refers to: https://core-URL ) The processing for protecting and verifying responses, as of now, specifies that: if Observe -> use the Partial IV in the response if not Observe -> use Partial IV of the request Should this part be changed to: if Partial IV present in the message -> use that Partial IV if not present -> use Partial IV of the request\nNeed to read text, but this reflects what I wanted to see.\nSorry NAME when you say \"this\" do you mean That's not currently included (so no text about it yet). The text right now is:\nSection 5 - You have defined a 'sid' and said that it could be placed in the unprotected field, however you have also said that the unprotected field must be empty. These statements are contradictory\nAs per Christian Amsüss' message from 2017-01-05 there still seems to be confusing text in that section.\nAnswering both to NAME and NAME (URL) The point here is: when defining the 'sid' as a new COSE Header parameter, there is no reason related to security that would make this a \"MUST be placed in the protected bucket\", so instead it is a MAY.
But when used in OSCOAP, the 'sid' is used in a way that it MUST be in the protected part, so the unprotected SHALL be empty. The 'sid' is analogous to 'kid' from this point of view, it is defined as a COSE parameter that MAY be unprotected, but is used protected.\nMulticast removed in branch v-02 Hello OSCOAP authors, some more questions came up: 5.2 gives the context for the Encstructure as \"Encrypted\", while cose-msg-24 would set a \"Encrypt0\". Why does that differ here? states that the '\"unprotected\" field [...] SHALL be empty', but above states that sid 'can be placed in the unprotected headers bucket'. How do those statements go together? How are sequence numbers (which are sorted integers) mapped to partial IVs? Which endianness is used? (The padding direction follows from cose-msg-24 3.1). I'd assume (and implement) that big-endian is used because it aligns naturally with COSE's left-padding, but as I see it, that does not strictly follow from the draft. I concluded that the tag is always the last bytes of the ciphertext from example A.4, but did not find text in the spec to support that; what made me especially unsure about this is that the COSEEncrypt0 definition of cose-msg-24 5.2 only mentions the ciphertext there, which may be nil. Best regards Christian To use raw power is to make yourself infinitely vulnerable to greater powers. -- Bene Gesserit axiom"} {"_id":"q-en-oscore-64f9471f93769265d5f7b47cff06b2cccc551d1fae39ec3334902a8d7370c6d5","text":"This PR should (hopefully) conclude the main rewrite of Observe which was triggered by the security analysis and additional reviews of Observe. It should (hopefully) now be clear what part of the Observe related processing can be omitted in case Observe is not supported or in case intermediaries are not used. 
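The endianness question in the message above can be made concrete. Assuming the big-endian, left-padded reading proposed there (an illustration of that interpretation, not normative text from the draft), the sequence-number-to-Partial-IV mapping would look like:

```python
# Illustrative sketch only: sequence number <-> Partial IV, assuming the
# minimal-length big-endian encoding asked about above. Big-endian with
# leading zeros stripped composes naturally with COSE's left-padding of
# the partial IV into the full nonce.

def seq_to_partial_iv(seq: int) -> bytes:
    """Minimal-length big-endian encoding of a sequence number."""
    if seq < 0:
        raise ValueError("sequence numbers are non-negative")
    length = max(1, (seq.bit_length() + 7) // 8)
    return seq.to_bytes(length, "big")

def partial_iv_to_seq(partial_iv: bytes) -> int:
    """Inverse mapping; leading zero bytes do not change the value."""
    return int.from_bytes(partial_iv, "big")
```

Under this reading, sequence number 256 becomes the two bytes 0x01 0x00, and left-padding the result out to the AEAD nonce length does not change the recovered value.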
A new appendix E is covering the latter case."} {"_id":"q-en-oscore-29f2afd352ae630214b6c786f7f6f236d511e1eeb16711e593175ba997098801","text":"Solves issue (from review comments by NAME : URL)\nSection 4.1.3.1 - I am unclear why an OSCORE error response would be cached by an OSCORE-unaware intermediary, but a success message is never going to be cached. Based on this, I don't plan to set an outer Max-Age option.\nClosed in 84403c4c4 Section 4.1.3.1 - I am unclear why an OSCORE error response would be cached by an OSCORE-unaware intermediary, but a success message is never going to be cached. Based on this, I don't plan to set an outer Max-Age option. In section 5.4 you have the text requestpiv: contains the value of the 'Partial IV' in the COSE object of the request (see Section 5), with one exception: in case of protection or verification of Observe cancellations, the requestpiv contains the value of the 'Partial IV' in the COSE object of the corresponding registration (see Section 4.1.3.5.1). I am unclear how/why this is different for observations. A cancelation is a message, so the IV is the request. A re-registration is a request message, any response would correspond to that request. The only interesting question has to do with updating the MID on a re-registration but not how PIVs work for this field. Section 6.1 - Is a registry needed for the leading byte of compression? Behavior if bits 0, 1, or 2 is set in the flags byte on decode? Section C - given the way that my system is implemented, it would be nice if the outputs included the first full IV to be used for both the sender and the recipient. That would allow for a test that the combination of ids and common ivs is done correctly.
In my case I do not have the shared IV available for testing as I immediately or in the id."} {"_id":"q-en-oscore-198219c28f57ff3791b14c32649bee22dc97c63992cbe6e2c77da88ef0c1dd83","text":"Solves issue (from review comments by NAME : URL)\nIn section 5.4 you have the text requestpiv: contains the value of the 'Partial IV' in the COSE object of the request (see Section 5), with one exception: in case of protection or verification of Observe cancellations, the requestpiv contains the value of the 'Partial IV' in the COSE object of the corresponding registration (see Section 4.1.3.5.1). I am unclear how/why this is different for observations. A cancelation is a message, so the IV is the request. A re-registration is a request message, any response would correspond to that request. The only interesting question has to do with updating the MID on a re-registration but not how PIVs work for this field.\nAnother case to make sure that works Client - please cancel my observation Server - I don't have an observation for you, but sure I am willing to tell you I canceled it. What request PIV would I use in the AAD?\nYes, the verification of cancellation would fail since the server does not have the initial registration's partial iv saved, if it does not have the observation stored. So OSCORE would fail, which is not right. We have reverted to previous version (request_piv is not exception for cancellations) in c94314321 , but we want to add some security considerations about Observe and cancellations before closing this issue. Section 4.1.3.1 - I am unclear why an OSCORE error response can be cached by an OSCORE unaware intermediate would be cached, but a success message is never going to be cached. Based on this, I don't plan to set an outer Max-Age option. 
In section 5.4 you have the text requestpiv: contains the value of the 'Partial IV' in the COSE object of the request (see Section 5), with one exception: in case of protection or verification of Observe cancellations, the requestpiv contains the value of the 'Partial IV' in the COSE object of the corresponding registration (see Section 4.1.3.5.1). I am unclear how/why this is different for observations. A cancelation is a message, so the IV is the request. A re-registration is a request message, any response would correspond to that request. The only interesting question has to do with updating the MID on a re-registration but not how PIVs work for this field. Section 6.1 - Is a registry needed for the leading byte of compression? Behavior if bits 0, 1, or 2 is set in the flags byte on decode? Section C - given the way that my system is implemented, it would be nice if the outputs included the first full IV to be used for both the sender and the recipient. That would allow for a test that the combination of ids and common ivs is done correctly. In my case I do not have the shared IV available for testing as I immediately or in the id."} {"_id":"q-en-oscore-d22e6b16aecba289d7fca2889b7fbf69850886d308d2aae0382bfa28e441e35d","text":"UTF-8 über alles.\nAbstract: Explain that OSCOAP works across intermediaries and over any layer. Intro: Explain that OSCOAP works for Observe and Blockwise. Intro: Move details on replay and freshness to some other section. Terminology: Explain the references and add COSE."} {"_id":"q-en-oscore-d9392865f83b146eed23a78309d3b2fac378ce16f97156af77727b9dd1dc16e0","text":", and\nThis goes against Jim's wish to have things in bits rather than bytes... But HKDF uses octets and to combine the two parameters as wanted by Ludwig, they need the same unit.\nFixed in branch v-02\nOSCOAP refers to HKDF [RFC5869], but it is not clearly defined how the parameters map to HKDF-Extract and HKDF-Expand. E.g. IKM. Salt should be made optional.\nIn section 3.2.1.
outlen is defined as \"the key/IV size of the AEAD algorithm\" and outputlength as \"is the size of the AEAD key/IV in bytes encoded as an 8-bit unsigned integer\" Isn't that essentially the same?\nI just created a v-02 branch for the next update. I'll make an update as soon as possible trying to address this.\nFixed in branch v-02"} {"_id":"q-en-oscore-cacfb7688a8219a014904c4adbcf07b2a2f9565c3d165976d5ca628693268181","text":"This partly addresses\nClarify that maximum sequence number is algorithm dependent and define a formula for it.\nMay be simpler to just have a single structure for the externalaad. externalaad = [ ver : uint, code : uint, alg : int, id : bstr, seq : bstr, ? tag-previous-block : bstr ]\nthe \"context\" parameter, which has value \"Encrypted\" This should be \"Encrypt0\", but even better OSCOAP should just point to draft-ietf-cose-msg that already specifies this.\n-00 Section 3.2.3 - You are getting ahead of yourself. The only way that the identifiers would not be unique would be if the sender and recipient collided. I do not know that this would be a problem and therefore talking about it makes no sense. I am assuming that this is supposed to have something to do with the multicast work that I have not looked at. If so it does not belong here but there.\nAgree. I'll delete all text on group communication from the document.\nMulticast removed in branch v-02"} {"_id":"q-en-perc-wg-9ba2da4f3a25c6c7101effd8b6b56464075081269819b67d4e84e5a9e1205a0b","text":"This is an important security consideration for the re-encryption case, which doesn't appear to be covered by the current text.\nSalt could be the same, but key definitely needs to be different. Requiring both to be different is best, of course."} {"_id":"q-en-qlog-b8bf5c0b4135c08724e5fd26e0662042211a45420b79dde67d36566a129d953d","text":"First, these documents can now refer to the published QUIC RFCs. While doing that, let's just replace all [REF] with {{REF}}, which works better for our tooling.
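Returning to the HKDF question a few messages up: the Extract/Expand split itself is fully specified by RFC 5869, so the open point is only which OSCOAP inputs play the roles of salt, IKM, and info. A minimal sketch of the RFC 5869 construction (the parameter mapping suggested in the comments, e.g. master secret as IKM and master salt as the optional salt, is an assumption here, not text from the draft):

```python
# HKDF per RFC 5869, written out to make the Extract/Expand split concrete.
# Which OSCOAP inputs map to salt/IKM/info (e.g. master salt -> salt,
# master secret -> IKM) is an assumption for illustration, not draft text.
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes, hash_fn=hashlib.sha256) -> bytes:
    # RFC 5869: an absent salt defaults to HashLen zero bytes, which is
    # exactly what makes the salt parameter optional.
    if not salt:
        salt = b"\x00" * hash_fn().digest_size
    return hmac.new(salt, ikm, hash_fn).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int,
                hash_fn=hashlib.sha256) -> bytes:
    # T(1) = HMAC(PRK, info | 0x01), T(n) = HMAC(PRK, T(n-1) | info | n)
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hash_fn).digest()
        out += block
        counter += 1
    return out[:length]
```

Note that making the salt optional needs no special case at the call site: Extract with an empty salt is defined to behave as Extract with HashLen zero bytes.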
Lastly, let's brush up the cross references between documents a bit.\nI checked this locally and everything was good. It's easy enough to fix up later if I did break something.\nqlog already refers to the -34 drafts. Nominally this change should be easy. Although some section references might need double checking.\nlgtm, assuming that the build tool correctly sets the links to the RFCs"} {"_id":"q-en-quic-bit-grease-babd34006ab1b63b34708bf9cc55aeaff7f97e790310a3e73002a836bd7a9383","text":"Inappropriate use of MAY, lack of MUST [NOT]. Thanks to Russ Housley for schooling me on the correct usage."} {"_id":"q-en-quic-v2-7ae8850df6d2e0a2974348e958caf4e97c6edb3201b579dcdacb4d4db488fc98","text":"This uses kramdown tricks to make real links for the section references to RFC 9001 and uses a nicer label for those."} {"_id":"q-en-quic-v2-4633fb110ba71d3edccfa267e43f26923c6a44a78ca2e83876d5b154ea7328b2","text":", , , , , .\nJust a style thing (and something we should try and capture in a QUIC style guide).\nThere is a bit of inconsistency through the document. Personally I prefer the QUIC version N style.\nis there a missing word here?\nNope, extra word, thanks\nI find this awkward to read in isolation, it requires converting the RFC 9000 types definition format into a different encoding to realize there is a change. A clearer alternative could be \"All version 2 Long Packet Types are different: Initial: 0b01 0-RTT: 0b10 Handshake 0b11 Retry: 0b00\nStarting with This might be a matter of style but to me the term \"changes\" implies that you're modifying the original, not creating something new. How about we term these as differences instead? The opening paragraph of Section 3 could be something like \"QUIC version 2 endpoints MUST implement the QUIC version 1 specification as described in ], ], and ].
However, the following differences apply in version 2.\"\nThis might have been important earlier in the document lifetime but it seems to stick out now.\nThis could be misread to imply the salt is universally known and universally applicable, which I don't think is your intention. In fact, I find the construction of this entire paragraph a bit awkward. Can we rephrase this for better clarity? Something like \"In QUIC version 1, Initial packets are encrypted with the version-specific salt as described in Section 5.2 of [QUIC-TLS]. Protecting Initial packets in this way allows observers to inspect their contents, which includes the TLS Client Hello or Server Hello messages. Again, there is the potential for middleboxes to ossify on the v1 key derivation and packet formats.\"\nCouple of nits but otherwise LGTM"} {"_id":"q-en-quic-v2-2ad73ac24e3f22a47122f28c461f4ac704bed18c963bde3f4648bc7fdd2bcf0f","text":"Though this would likely be monstrously inadvisable, I am fairly confident that I could construct an interoperable (or at least deployable) extension to QUIC that works in QUICv1 but not QUICv2 or vice versa. It's OK to establish expectations, but some softening of the statement might help.\nThis seems very accurate"} {"_id":"q-en-quic-v2-756efdefd2644a83da1168a6540f4e06ac2e669e7df50ddbdc75f2c826c56f53","text":"here (URL) it is better that we call it QUIC version 1 not just QUIC. As later it is referred to as QUIC version 1. here (URL) I would add reference to the VN draft instead of calling it \"that document\". here (URL) I would add \"packet\" after \"Initial\" just to be clear, as we now have an initial version in the VN draft.\nThanks, fixed here URL"} {"_id":"q-en-quic-v2-d5f271f47502418c07fc9c2221ce35f8d25c78bf563460ecafdcf730e61f3cea","text":"As requested, all new version numbers, salts, etc. for QUICv2. Verification of the test vectors would be especially valuable.\nWoof. Thank you very much for quickly checking, and sorry I botched every part of that.
All fixed now, I hope.\nThe new test vectors work well with my QUIC implementation in Haskell.\nThanks Kazu!\nWFM. Not validated with a real build, but a script. See URL for the script that I used."} {"_id":"q-en-quicwg-base-drafts-7e44aed3927f10f7a5a79e04a1d040efb9e7050903e4ee0590205316aa2fddfe","text":"A dynamic table entry that has not been acknowledged by the decoder must not be evicted by the encoder, even if there have never been any references to it. This is already alluded to in the current spec at URL by the phrase \"acknowledged by the decoder\", but since \"acknowledged\" is used interchangeably for acknowledging the insertion instruction and acknowledging references, a reader could understand this to mean \"all references acknowledged\". In addition, at three other places eviction of entries with unacknowledged references is banned, with unambiguously no ban on eviction of unacknowledged entries with no references. I argue that more explicit language would be beneficial. This issue has already been discussed at URL and below. Note that avoiding this kind of corruption could also be achieved by banning an encoder from inserting entries that it does not immediately reference, but this is probably way too restrictive. For example, an encoder might wish to achieve the best latency and speculatively improve future compression ratio at the expense of potentially wasting current bandwidth by encoding header fields as literals while inserting them at the same time for potential future use. This is already alluded to in the spec at multiple places. Below is a specific scenario in which this issue might cause corruption: Suppose MaxEntries is 5. Suppose that there is a fresh connection, with an empty dynamic table, and there is a header list to be sent that has two header fields which fit in the dynamic table together.
Suppose one encoder implementation inserts these two header fields into the dynamic table, and encodes the header list using two Indexed Header Field instructions, referring to absolute indices 1 and 2. This header block will have an abstract Largest Reference value of 2, which is encoded as 3 on the wire. A spec-compliant decoder should have no difficulty correctly decoding this header block. Suppose another encoder implementation starts by adding 10 unique entries, all different from the two header fields in the header list to be sent, to the dynamic table, but does not emit references to any of them. Of course in the end only the last couple will stay in the dynamic table, the rest will be evicted. This can be considered spec-compliant depending on how one interprets the spec, see above. Suppose that after this, this encoder implementation encodes our header list by inserting both header fields into the dynamic table, with absolute indices 11 and 12, and using two Indexed Header Field instructions on the request stream. The abstract Largest Reference value is 12, which is encoded as 3 on the wire since 2 * MaxEntries == 10. Now suppose that out of the 12 instructions on the encoder stream, only the first two arrive before all the request stream data are received. The remaining 10 instructions on the encoder stream are delayed on the wire. Now a spec-compliant decoder sees the same header block on the request stream, bit by bit, as above, and two insertions on the encoder stream. It will emit dynamic table entries 1 and 2, which are different from the encoded header list.\nYou should really create a new issue for these things. Because this comment relates only to the issue, not the proposed fix. I haven't read the proposed changes, though I will if you think that there is a fix in there that is worth keeping after this.
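The collision in the scenario above can be checked numerically. This is a sketch of the draft's modulo encoding of the Largest Reference; the helper name is ours, not from the spec:

```python
def encode_largest_reference(largest_ref: int, max_entries: int) -> int:
    """Wire encoding of the abstract Largest Reference, reduced modulo
    2 * MaxEntries as in the QPACK draft; 0 means no referenced entries."""
    if largest_ref == 0:
        return 0
    return largest_ref % (2 * max_entries) + 1

# MaxEntries = 5: both header blocks in the scenario put 3 on the wire.
print(encode_largest_reference(2, 5))   # first encoder, entries 1 and 2
print(encode_largest_reference(12, 5))  # second encoder, entries 11 and 12
```

Both header blocks encode to the same wire value, so the decoder cannot distinguish the two situations from the request stream alone.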
Anyhow, the example is invalid here: Because: And you can only ever fit 5 entries in that table.\nThat's exactly the issue: \"it has first been acknowledged by the decoder\" is ambiguous. \"Acknowledged\" is being used throughout the text with two different meanings, most often to refer to references, not insertions. The ban on evicting an entry the insertion of which has not been acknowledged, that you are quoting, only occurs once out of four different locations talking about when an entry can be evicted. The other three places clearly only ban evicting an entry with unacknowledged references (note the use of the same word), giving the impression that acknowledgement of the insertion is not necessary before eviction. Since the design is correct if you read it right, but the wording is not as clear as I think it should be, I consider this issue editorial. The change I propose in this PR is meant to be an improvement on clarity.\nOK, having read the changes, I agree with them. They consolidate the fix by creating two conditions that cause an entry to be \"blocking\".\nRebased on . Please let me know if anybody can come up with a term better than \"blocking\" which is confusing given that streams can be \"blocked\".\nNAME Thank you for your feedback.\nThanks for this. I don't have a better name for blocking entries right now, but I will keep thinking on it as I agree it could confuse people even more.\nThis looks reasonable."} {"_id":"q-en-quicwg-base-drafts-ab74dcae1c0cc04e8d1f2625abf403c70d74e01db1c6fadc8aaace9bca92f56c","text":"Not sure if this is enough, but it enacts Brad's suggested fix.\nYup, that's what I had in mind.\nWhy is this limited to being prior to the push stream being created, given that you specify how the server handles it if it has created the stream?\nI don't read this as saying that you have to avoid it.
The point is that you also have to reset/STOPSENDING on the stream if it exists, so there is no real point to using CANCELPUSH in addition.\nSounds like the intent is that CANCELPUSH should be used between the client receiving a PUSHPROMISE and before receiving the stream (i.e. when it is ambiguous as to whether the stream has been created by the server from the client's perspective). I think replacing \"created\" with \"received\" would make this more clear."} {"_id":"q-en-quicwg-base-drafts-72fb8f58807d57ff4aa3db7731e78827100656af3e2ad2dd48e5326445ea1236","text":"Follow-up from\nConsider the following sequence of packets being sent. In our old loss recovery code (before unifying TLP and RTO, ), we used the following logic for determining if an RTO was not spurious, which would then lead to collapse of the congestion window: In the example above would be packet 14. This means that to count as a verified RTO, an ACK must acknowledge 15 or 16, but not any packets smaller than or equal to 14. With , we now collapse the congestion window when is larger than 2 and we receive an ACK which leads us to declare any packet lost. In the example above this means any ACK received after sending packet 15 which declares a packet lost would suffice. Even worse, this logic already triggers if we receive the ACK immediately after sending 15, i.e. when the peer didn't even have a chance to receive it. This seems a lot more aggressive than what we had before. Is this intentional?\nThis is unintentional. I believe Jana is intending to fix this in PR , so please take a look at that.\nIf I understand correctly, is a fix for and only makes the description of the constant and the pseudo code consistent.\nI believe the text is now correct, but the pseudocode is still incorrect in the way you describe. I think we should introduce a largestsentbefore_congestion variable as a replacement, but I'm open to other solutions.\nI'm removing design.
I consider this a bug fix, but still very important.\nSee which I believe fixes the text.\nI've been using to mark things that will result in changes that affect implementations. This would seem to fit that description. Whether you consider it worth discussing is different. If it isn't worth discussing, maybe it will just be fixed by and everything will be fine without discussion in Tokyo. Without the label, this won't make the change log, and that might be a problem.\nFair point, I think fixes the text, but I'll re-label this as design to ensure it's included in the changelogs.\nThe spurious RTO detection calls an RTO spurious if any unacked packets sent prior to the RTO are acked after the RTO. The implication here is that any packet that was received but not acked can cause the RTO to be labeled spurious, even if there were more packets that were in fact lost. This is not always right. Consider the following example:\nPackets 5, 6, 7 are sent\nPacket 5 is received, but 6 and 7 are dropped\nAck of 5 is dropped\nTLPs (packets 8 and 9) are sent and dropped\nRTO (packet 10) is sent and received\nACK acks 10 and 5\nSince 5 was sent prior to RTO, the RTO is marked spurious. A crisp definition of spurious RTO may be useful to continue. Here's what I think is sensible: A spurious RTO is an unnecessary RTO. If the RTO timer had not gone off, things would have been fine. Based on this definition, clearly the RTO was necessary in the example, and therefore not spurious. We need to change the mechanism for detecting spurious RTOs, and I'll propose the following. An RTO is spurious if an ack is received after the RTO fires that acks only packets sent prior to the RTO. In other words, the RTO is spurious if an ack is received after the RTO, and (instead of , as the draft currently states).\nThis was recently changed because Lars ran into an issue() where the RTO'd packet was acknowledged for the first time in the same ACK as the pre-RTO packets.
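The two detection rules being compared can be sketched as follows. These are hypothetical helper names; `newly_acked` is the set of packet numbers first acknowledged by the ACK in question:

```python
def rto_is_spurious_old(newly_acked, largest_sent_before_rto):
    # Old rule: any newly acked packet sent before the RTO marks it spurious.
    return any(pn <= largest_sent_before_rto for pn in newly_acked)

def rto_is_spurious_new(newly_acked, largest_sent_before_rto):
    # Proposed rule: spurious only if nothing sent after the RTO was acked.
    return all(pn <= largest_sent_before_rto for pn in newly_acked)

# Example from the issue: packets 5-7 sent, TLPs 8-9, RTO retransmission 10.
# The ACK covers 10 (the RTO packet) and the long-delayed 5.
print(rto_is_spurious_old({5, 10}, 9))  # mislabels a necessary RTO
print(rto_is_spurious_new({5, 10}, 9))  # the RTO was in fact needed
```

Under the old rule the example RTO is (wrongly) declared spurious because packet 5 was sent before it; under the proposed rule the ack of packet 10 shows the RTO was necessary.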
In that case, I think it was due to an overly aggressive MinRTO and some pretty coarse grained timers, but I think we want to declare that RTO spurious, since everything would have been fine if it hadn't been sent?\nNAME : does not change the semantics substantively of what was in the draft. The intent was always to declare spurious RTO if (packet prior to RTO was newly acked). I'm proposing that we change that to (no packet sent after RTO was newly acked).\nIn light of , I think this is OBE?\nYes, this is OBE. Closing."} {"_id":"q-en-quicwg-base-drafts-a555c536b8bfcfbcb382ba3791ea40eb1b9b801f60f3fbe662033260a76e24b9","text":"This keeps things simple.\nI have no idea how that happened. Forced it.\nAre we ok with a server sending both a VN packet and a Retry in response to a single UDP datagram?\nNAME Does that happen? IIUC VN is sent only when there's a version mismatch. Retry is sent only when there's no version mismatch.\nNAME A client could coalesce two packets of different versions. As I've said on the issue, I still feel uneasy with the fact that a client could coalesce around 70 packets into a single datagram, and I'd prefer more rigid rules to solve this problem.\nNAME I'd argue that a client can't do that, because the draft states \"Senders MUST NOT coalesce QUIC packets for different connections into a single UDP datagram\", OTOH I agree that there's nothing that requires the server to drop following QUIC packets with different versions within a combined datagram. As I stated on URL, I (still) think the approach in this PR would require having minimal state. Would you mind elaborating (possibly on the issue) why you think differently?\nSure, the issue here comes from multi-threaded processing, and reusing packet buffers. 
I have one thread (or rather, one go-routine) that reads UDP datagrams from the network, splits the coalesced packets and parses the unencrypted part of the header, and then passes the QUIC packets to their respective connections (which are running in a separate go-routine each). In order to reduce allocations, I reuse the buffers I read my packets into. When handling coalesced packets, this means I have to wait until all coalesced parts have been processed before reusing the buffer. Therefore, I can't just parse the first coalesced packet, pass it on for handling in another thread, and then continue parsing. This would create a race condition, since the handling of the packet might be executed before the parsing is finished, and the packet buffer would be returned already. So I have to split the coalesced packets first, and then handle them. This wouldn't be a problem at all if a coalesced packet just had a few parts, as it's supposed to. But since we're maximizing flexibility for no reason, an attacker might send me a datagram consisting of around 70 packets, which I would all have to completely parse.\nNAME Thank you for elaborating. I think that we should be dispatching each UDP datagram (instead of each QUIC packet) to the respective connections. That's how load balancers are expected to work, and what you are implementing is essentially a load balancer. The benefit of dispatching at the datagram level is that you'd have fewer things to do on the receiving thread. I understand your point on flexibility. However, in my view, the flexibility is not only how you coalesce, but the redundancy in all the long header fields. We have deliberately chosen the design because it is clear and easy to analyze (especially in regard to encryption).
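The splitting step described above can be sketched like this for version 1 long-header packets. This is an illustration only: it ignores Retry and short-header packets and assumes well-formed input:

```python
def read_varint(buf, i):
    """QUIC variable-length integer (RFC 9000, Section 16)."""
    prefix = buf[i] >> 6
    length = 1 << prefix
    value = buf[i] & 0x3F
    for b in buf[i + 1:i + length]:
        value = (value << 8) | b
    return value, i + length

def split_coalesced(datagram: bytes):
    """Split a datagram into its coalesced v1 long-header packets."""
    packets, i = [], 0
    while i < len(datagram) and datagram[i] & 0x80:  # long-header bit set
        start = i
        packet_type = (datagram[i] >> 4) & 0x03
        i += 1 + 4                       # first byte + version
        i += 1 + datagram[i]             # DCID length byte + DCID
        i += 1 + datagram[i]             # SCID length byte + SCID
        if packet_type == 0:             # Initial: token length + token
            token_len, i = read_varint(datagram, i)
            i += token_len
        payload_len, i = read_varint(datagram, i)
        i += payload_len                 # packet number + protected payload
        packets.append(datagram[start:i])
    return packets
```

Note that every coalesced packet must be walked this far just to find the next one, which is the parsing cost being discussed.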
My preference goes to retaining the clarity by avoiding introducing restrictions.\nI think this PR addresses the amplification problem directly, by tying the number of response datagrams to the number of incoming datagrams, so I am in favor of this PR as a spec solution. NAME : quic-go can still choose to drop datagrams with a large (> 3?) coalesced packets, which might help you (slightly). I agree with NAME also that since all coalesced packets are supposed to go to the same connection, the implementation would be better aligned if it were to pass the datagram along to the connection instead of the individual packets... but that's an impl choice, and it's certainly not the only way to do this.\nI'm merging this. Please reopen the issue if you think this ought to be discussed further, especially in Tokyo."} {"_id":"q-en-quicwg-base-drafts-897877c5f73741229135d8563a16a7a0dc2f53be4c81904d574433adf650a953","text":"So a friendly pointer to detail that could be easily misinterpreted or missed? It seems like a table footnote is required but I'm not sure how to do that in Markdown.\nWhile each frame's description indicates the stream type(s) on which it is allowed, there isn't one unified place to indicate which frames are expected on which streams. QPACK subdivides the instruction space per stream, which has the nice property that you can't put something on the wrong stream. While that's overkill for H3 (too dramatic a departure from H2, multiple IANA registries, etc.), having a table to indicate which frames are permitted on which streams would be helpful.\nI like the table idea for both. Want a PR?\nText always welcome.\nI'll submit two PRs in next day or so.\nMinor nit; otherwise good. 
Given that we have two frames that fall into this camp, might it be worth doing something like \"(*) = only as the first frame\"?"} {"_id":"q-en-quicwg-base-drafts-e00da513e2a64d8c768d76d1ae8374eb0046d1423aa163adac53763adb43ab69","text":"This is my stab at adding the indication discussed in .\nLgtm. An minor Improvement might be to highlight in the new sentence that \"specific guidance is provided in the relevant section\" but that might be obvious. I'm happy without anything further."} {"_id":"q-en-quicwg-base-drafts-387ec9d11597a0313cdf25c3ab03ac9d5552d1b4ca050b348b94da9ef239f207","text":"What data can and cannot be retried safely?\nThat's not something that QUIC knows. HTTP does though. As advice, I don't think that this needs much more expansion, but I'd be happy to take a PR.\nI'm not so sure this is a HTTP-only issue. Unlike TLS 1.3, there is QUIC data being sent under encryption, and that might have side effects.\nLGTM, one nit. (\"using applications\" is fine, it's just a bit awkward.)"} {"_id":"q-en-quicwg-base-drafts-ff486ab8f9d3c87f4d77eae4fb1a1680690c566984dedd778070be359ac936c0","text":"This change does two things: Clarify that if UDP datagram contains Initial packets, it must be at least 1200 bytes in size; and Rule out padding the UDP datagram with junk (the \"padding packets\" part) as NAME suggested earlier. Issue is only about (1). Not that I am against (2), but this is just so that we are clear.\nI think that the reinterpretation of the text here has introduced the second problem NAME mentions. This also has the unfortunate consequence of disallowing small acknowledgment datagrams (for HelloRetryRequest in particular). This might need to be more carefully phrased.\nAs NAME said, this is an editorial change, so I'm going to merge. The substantive one is in NAME\nNAME -- you're right, but I think this was unintentional earlier. I think if we want to make a case for allowing junk, that should be discussed separately. 
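The rule under discussion, that a UDP datagram carrying an Initial packet must be at least 1200 bytes, together with the related 3x amplification limit, reduces to simple checks. This is a sketch with names of our choosing:

```python
MIN_INITIAL_DATAGRAM = 1200  # bytes; minimum size when an Initial is present

def server_accepts(datagram_len: int, has_initial: bool) -> bool:
    # A server ignores undersized datagrams that contain an Initial packet.
    return not has_initial or datagram_len >= MIN_INITIAL_DATAGRAM

def send_budget(bytes_received: int, address_validated: bool) -> float:
    # Before address validation a server may send at most 3x what it received,
    # which is why small client datagrams starve the server of send budget.
    return float("inf") if address_validated else 3 * bytes_received
```

The second helper shows why padding matters: a client that sends only tiny ACK-only Initial datagrams gives the server very little to work with before validation completes.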
The proposed text captures what I believe our intent was.\nstates: This seemingly says that the first UDP datagram that has two coalesced packets, Initial and 0-RTT, can be smaller than 1200 bytes. I don't believe that's the intention.\nAgreed, I believe the \"only\" needs to removed unless I'm missing some other intent.\nI don't think that is sufficient. The intent of this was to allow the client to send Initial packets that acknowledge server Initial packets without padding to 1200.\nGiven PADDING is supposed to count towards bytes in flight, but it's not ack-eliciting, a special case for ACK-only packets makes sense. I'd be inclined to specify ACK-only packets as the exception, opposed to basing it on receiving an ACK of Handshake packets or some other mechanism.\nAs stated in URL, I prefer to stop requiring the use of full-sized packets once the client observes a response from server. Reasons: QUIC stacks would have the logic to change rules based on if the peer's address has been validated due to the 3-packet rule on the server side. It's just about using the same logic for padding on the client side. Implementations that prefer to pad all the Initial packets it sends can do so anyways. Regardless of what we decide, the server side logic would be indifferent; i.e. create new connection context only when observing a full-sized Initial packet.\ntl;dr: we should always pad to 1200 for UDP datagrams that contain Initial packets. The reason we pad to 1200 is to provide a server with plenty of inbound bytes so that it can send outbound bytes prior to path validation. Padding to 1200 has sub-optimal performance in certain cases. In particular, it makes datagrams containing a simple ACK enormous. So the original thought was to try to find a case where we could allow for smaller datagrams to be sent. I think that in both alternative designs we have a problem with the interaction with our DoS defense. 
And while those problems might only exist in the abstract for now, I think that the best answer is to pad to 1200 always. (I will suggest rewording the text separately.) An ACK-only Initial packet is sent at the tail of the Initial phase. That is the obvious candidate for consideration here. But there are other cases where small Initial packets might be sent. The looser requirement NAME suggests is less than ideal for HelloRetryRequest. A client can receive a HelloRetryRequest and be forced to send another ClientHello. We can't assume here that the server is using the HelloRetryRequest to validate the client address (though it certainly can). If the ClientHello message is small, the number of bytes that a server can send is still limited. If the client doesn't pad, the server can still count the original ClientHello toward its budget, but it doesn't get significant additional budget from the client. And that budget doesn't increase when packets are lost. So it could be that the server is stuck with 3x whatever it gets in ACKs, which is not a lot. It's worse if you consider the possibility of the ServerHello being enormous (hello PQ crypto). At this point, the server won't have validated the client address and so is still limited in what it can send. Ideally, we want the packets to be big so that the server gets more budget for sending and isn't limited to sending tiny packets. But if all the client sends is small ACK-only packets (under either proposed amendment, either what NAME or NAME describe), the increased allowance is very small. Based on this, I would suggest that we reluctantly say that all Initial-carrying datagrams need to be 1200 bytes or more. For the case of the ACK at the tail, most of the designs proposed for key discard will result in that packet not being sent. If it is sent, there are plenty of cases where it will be a packet with Initial (ACK), Handshake (ACK + ...Finished) and 1-RTT data, so the requirement is met without wasting bytes at all. 
But for ACK of big ServerHello messages or HelloRetryRequest, the overhead is regrettably necessary.\nSGTM. I agree this greatly incentivizes good coalescing, but otherwise I don't see any downsides. And simple is good. Even when suggesting the special case for ACK-only packets, it felt like a premature optimization.\nWorks for me. It's a corner case anyways. Unless HRR is involved, client will not send an Initial in response to the server's Initial, assuming we decide to continue dropping the Initial keys when the client receives a handshake packet.\nA potential friendly amendment, but then we should proceed. I would say after confirming with chairs, because there was a pretty strong design decision involved here, but technically this is only an editorial change."} {"_id":"q-en-quicwg-base-drafts-53340fc7b3ab7bc44572765db78a409ef579706f4264edde20a2e1f38ca7cc76","text":"As discussed on Slack, currently the text makes no mention of the case where the PUSHPROMISE arrives behind (some of) the data on the push stream itself. However, it does mention this situation for the DUPLICATEPUSH frame. This proposed change recommends buffering pushed data for an unknown PUSHID and using flow control to prevent this from being a problem. I am unsure if more text is needed (e.g., what happens if the PUSHID is never resolved (which would mean a misbehaving server)), but I think the proposed change should suffice. This is also related to issue . Thanks to NAME for his help.\nLGTM. Non-normative language seems to be the best form of guidance here\nthe push id that links a push stream to a push promise is now an unframed varint that comes right after the stream preface. this is right now the only unframed data (except made for the unidirectional stream prefaces) in the specs and I am proposing to make that a frame as well for consistency and to simplify implementation and allow code reuse. 
motivation: the specs already define frames that only carry a single varint (mostly push related): MAX_PUSH_ID, DUPLICATE_PUSH, CANCEL_PUSH, and GOAWAY! Here instead we decided to leave it unframed just because we can, since it is at the beginning of the stream.\ncost: two extra bytes per push stream.\nThe text will need to specify that on a push stream this new frame (PUSH_ID?) MUST be the first one, otherwise it is a protocol error. I am happy to put up a PR, in case we reach consensus\nThe varint type and length of a frame is also unframed. So you are asking to trade one unframed varint for two. Is your assertion that this is unique and difficult code to write, or that the unframed content is somehow difficult to manage?\nThis seems to be going back to some of the early discussion we had on a thread I can't find. I'm not against discussing things again because we have more experience. However, I don't see a compelling reason for the change. My implementation needs to parse unframed data because of the stream header and for QPACK streams. I think this change might make my implementation more complex. Reading multiple varints requires more work: more checks to see if the stream has enough data, more checks to ensure only 1 frame sent at a specific time (this is the same as SETTINGS)\nNAME those are in the frame header -- so they are part of the frame. I am not sure what your point is here then. Here I am assuming that an implementation has generic code to parse frames, and that parsing anything that is not part of a frame requires special handling. While this is implementation specific and there can definitely be counterexamples, from an abstraction and consistency point of view I see no reason for doing it the way we are doing it now. (If we discount the unidirectional stream preface) I believe there should be two types of streams: framed (control, request, and push streams) and unframed (QPACK, and potentially extension streams).
The push stream right now is a hybrid, that only becomes a framed stream after receiving that first PUSH_ID. If I am recalling correctly when we discussed potentially having DATA frames with zero-length to transition to unframed data, it was also considered a bit unusual. Here I am arguing that we are sort of doing the reverse here where we have an unframed stream that then transitions to framed. I think this generates complexity. And really the benefit we get is saving two bytes and one frame type. NAME I do vaguely remember that discussion and not 'wasting' frame types may have been one of the arguments when we made that decision, but that is not a problem now that we have a 62bit space for frame types.\nI don't care about the cost, I care about things like having to deal with PUSH_ID (or whatever) frames in places other than the start of a push stream. And I don't believe that it is difficult to take the special-case code for pulling a varint off a partially-available stream and use that for push ID. We've not had too much trouble with that so far in our implementation.\nit is definitely not difficult, but it is a special case and as any special case forces an increased complexity in the possible codepaths, even if not that big in this case. I am making my argument for the sake of consistency mostly, and for the fact that I believe it would also help to better define error handling. With the current text, if I receive something bogus on a push stream (after the preface) I'll have to keep the stream around (because the push stream may come out of order) waiting for a PUSH_PROMISE with that ID that may never come. So I believe that framing could help error handling as well. 
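The buffering approach mentioned above, holding push stream data until the matching PUSH_PROMISE arrives, might look roughly like this. This is a sketch with a class name of our choosing; in practice flow control would bound how much can accumulate:

```python
class PushStreamBuffer:
    """Hold push stream data for push IDs not yet named in a PUSH_PROMISE."""

    def __init__(self):
        self.promised = set()  # push IDs already seen in PUSH_PROMISE frames
        self.pending = {}      # push_id -> data buffered ahead of the promise

    def on_push_promise(self, push_id):
        """Record the promise; release any data buffered for it."""
        self.promised.add(push_id)
        return self.pending.pop(push_id, b"")

    def on_push_data(self, push_id, data):
        """Deliver data immediately if promised, otherwise buffer it."""
        if push_id in self.promised:
            return data
        self.pending[push_id] = self.pending.get(push_id, b"") + data
        return b""
```

A misbehaving server that never resolves a push ID would leave entries stuck in `pending`, which is the open question about streams that wait forever.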
I do not have super strong feelings about this proposal but at the same time I'd like us to try and recall what the motivations are for the current text and for making this special since it was a while back and personally I can't recall what they were.\nThis sounds like the case NAME is looking at. I'd argue this is a good reason for the current design, frame processing should only take place once the PUSH_PROMISE is received. But I'd like to understand how framing could help avoid errors.\nI was not trying to solve the same problem as here, but I was mentioning the fact that I believe framing would slightly help in a subset of cases. Take a server with a buggy push implementation, rather than a malicious one. Something that doesn't send the PUSHID at all. On the client you'll start seeing a flow of data and you can't do anything better than parsing the first few bytes of that as a PUSHID, and basically see random IDs. With Framing, if a server forgets to send the PUSHID, it is clear at the receiver that it is a protocol error. That being said there are many other classes of bugs that may trigger the same behavior even with frames, so this is minor. NAME now, that seems like an implementation dependent claim :) as says (and I like it) a client should buffer any data after receiving the PUSHID and before it has a PUSH PROMISE for it. and that is sufficient. no mention of frames there.\nRight now in draft 19 we have: Parse varint - stream type Parse varint - push ID Parse varint - frame type (HEADERS). The assertion then is, if step 2 gets skipped by the sender, the receiver will parse the HEADERS frame type as a varint expecting it to be the push ID (value 0x1, with no restriction on min or max encoding). It will then parse the HEADERS frame length as a varint expecting it to be the HEADERS frame type (length is unlikely to be 1). Doesn't the client just blow up at that point?\nIt was just an example. And as I said I don't feel strongly about the error handling nit. 
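The three-varint parse sequence described above can be sketched with a standard QUIC varint reader (RFC 9000, Section 16); the misparse that happens when a buggy sender omits the push ID falls out directly:

```python
def read_varint(buf, i=0):
    """QUIC variable-length integer (RFC 9000, Section 16)."""
    prefix = buf[i] >> 6
    length = 1 << prefix
    value = buf[i] & 0x3F
    for b in buf[i + 1:i + length]:
        value = (value << 8) | b
    return value, i + length

def parse_push_stream_preface(buf):
    """Draft-19 push stream preface: stream type, the unframed push ID,
    then the type of the first frame (e.g. HEADERS = 0x1)."""
    stream_type, i = read_varint(buf, 0)
    push_id, i = read_varint(buf, i)
    frame_type, i = read_varint(buf, i)
    return stream_type, push_id, frame_type
```

If the push ID is omitted, the HEADERS frame type is silently read as the push ID and the frame length as the frame type, which is the blow-up scenario discussed above.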
But if you must, yes if you assume headers is the first frame after the pushid then you would read 0x1 which is conveniently a server-initiated bidirectional stream which is not allowed anyway, so you could reject it immediately. But I find this just fortuitous. For example, one could make a counter argument: why are we framing SETTINGS at all given that it is the first message on the control stream? We also defined it quite clearly in this table what to do with it [Certain frames can only occur as the first frame of a particular stream type; these are indicated in Table 1 with a (1).] URL Do you think there is zero value in doing that for the PUSHID as well? I personally think there is some value. Anyway, I think we are diverting this down the wrong route. All I am looking for here is a good explanation for why it is defined like it is now, and I am failing to find it. As far as the implementation goes it makes little difference, and I guess it is more of a consistency problem for me. That naked varint on the wire triggers my OCD :) So instead of going back and forth like this, perhaps NAME can shed some light on the good reasons why it is this way.\n+1 to levelling up, I'll acknowledge my guilt here. But before I move on: If you're at the stage of parsing QUIC STREAM frame payload, you should already know exactly what the stream ID is. Some historic points I remember: In Paris MT presented some ideas for unidirectional streams - URL This relates to issue URL and PR URL Push and control streams were identified then. And we were happy with reserving specific stream IDs for certain things. All non-reserved streams were server push streams. Subsequent data on the stream only makes logical sense to handle if you know what push it relates to, so those bytes go first. Roll forward some, QPACK design requires additional streams so those get reserved too. Around Kista, the topic of unidirectional stream types comes up.
Without types, we make uncoordinated unidirectional stream extensions quite a difficult thing. So add more bytes at the start of the stream and remove reservations. The SETTINGS frame can contain a set of parameters, indeed it SHOULD contain at least one GREASE setting. A single SETTINGS frame that describes the length of all parameters is quite useful, you know when the peer has finished sending all of them. I don't see as much value for PUSH_PROMISE because you're just framing a single integer that describes its own length already.\noh gosh yeah, I confused PUSH_ID with a stream id. scratch that completely\nOne can imagine a scenario where we'd want to place something before the SETTINGS frame on the control stream. On the other hand, it's less likely we'd ever want anything other than the Push ID to begin the push stream.\nDiscussed in London; no.\nSeems like a good editorial improvement."} {"_id":"q-en-quicwg-base-drafts-5954584f9dd32dd8a638944c14d0d5d22913849f7b5f8ced211a92d9b2e125e5","text":"Saying it \"terminates a stream\" dangerously makes it sound like it has the same meaning as HTTP/2, which is false.\nThanks for all these little patches Martin, they are a real help."} {"_id":"q-en-quicwg-base-drafts-9a11f0e7fb48d8e4ccbee887bd8778bf4f81b3e1bfd7ff9951a7b5d3099c9e2d","text":"The transport draft states the following (Section 4.4): This verbiage, introduced in PR , is ambiguous. Byte offset may refer to the Offset field in the STREAM frame; in this case, the definition above is incorrect. What we want to say is that the final size is one higher than the offset of the byte with the largest offset sent on the stream.
If no bytes were sent, the final size is zero."} {"_id":"q-en-quicwg-base-drafts-7ee2eccadd9303ef1fb4e13406dd72e00d0f5c775368e9c4e70654e782e75f19","text":"No point in levying requirements on them.\nSection 8.2 says: However, this directly contradicts TLS 1.3 URL, which says: In order to say this, this document would need to Update 8446, and even then it doesn't make sense because the presumptive point of this rule is to detect attempts to negotiate QUIC with non-QUIC TLS endpoints, which don't need to conform to this RFC (and may predate it).\nI don't think QUIC can tell anyone what do with regard to any protocol that is not QUIC. And since QUIC v1 carries its own TLS record wrapping it makes even less sense.\nI agree with the problem statement, however I wonder if this opens a session ticket obtained using TLS13-over-TCP to be used in QUIC. Is that what we want?\nNAME why would this text have an impact on that? Couldn't I negotiate TLS 1.3 over TCP w/o extension and then send the resume over QUIC w/ extension?\nThe point here is to insist on the alert if the extension is understood. This wasn't intended to supersede 8446.\nNAME My bad. I was confused with the earlier versions of the QUIC-TLS draft that had the transport_parameters extension in the session tickets.\nNAME I think you could phrase that, but is it useful, given that we can't rely on it?\nProbably not. 
I'm OK with culling the text.\nThis might not be possible if a cloud load balancer only forwards QUIC with TLS and terminates TLS otherwise."} {"_id":"q-en-quicwg-base-drafts-a4716c9243462cd490be3c2a65319b387d12be0a5336c776ad7c290822d1acd8","text":"Makes two minor fixes in the GOAWAY text, both editorial: Correctly references the MAXSTREAMS frame instead of the MAXSTREAM_ID frame Notes that servers need not (and cannot) send a GOAWAY if the client has used up all the possible stream IDs; the client simply won't be making any more requests.\nThis is a whatever..."} {"_id":"q-en-quicwg-base-drafts-de44c104cbe81e135d1554fd67cad510859fff1ae6d424ddd14f0e0c5331696c","text":"Technically design, but barely.\nOriginally posted by NAME in URL There are three reasonable places to blow up: When you decide to stop reading a critical stream (i.e. when STOPSENDING is sent) When you discover the peer has stopped reading a critical stream (i.e. when STOPSENDING is received) When you discover the peer has closed a critical stream (when RESETSTREAM or FIN is received) My inclination would be to advise against FIN/SS/R_S on a critical stream and mandate connection termination on receipt of any of them.\nCompletely agree. This seems like the simplest and most obvious solution.\nSGTM mike\nNote that this PR does NOT address , just a comment that came up there.\nRe-reading this, I think the existing text is pretty good; it just needs a little reframing. The existing requirement is to error out \"If the control stream is closed at any point,\" without reference to whether it was closed because the peer send a FIN/RESET on their control stream or you obeyed a STOP_SENDING on yours. I think I'll add a requirement not to request closure of a control stream; do you think we need more elucidation here?\nnits, but LGTM"} {"_id":"q-en-quicwg-base-drafts-a20799360eff419bf2c6a086bebfaa9dca814c33e1a80274d4d9a528a7015836","text":"Skirts the editorial/design line. 
Previously, DUPLICATEPUSH referred to a Push ID in a previously-sent PUSHPROMISE frame; that frame would have caused a connection error on arrival. This says that the error should be sent regardless of which frame arrives first. May need reconciling with .\nTwo questions: Should DUPLICATEPUSH respect max push ID limit? PUSHPROMISE says the following: provided in a MAXPUSHID frame (Section 4.2.8) and MUST NOT use the same Push ID in multiple PUSHPROMISE frames What should a client do if it sees a DUPLICATEPUSH for an ID of a PUSHPROMISE that it never received? This could be caused by a race condition, so it seems sensible to allow this to clients to support this without error. We should probably document that like we did for other racy things. E.g. a Push ID that has not yet been mentioned by a PUSHPROMISE frame.\n(2) is covered in : For (1), yes it should -- the DUPLICATE might show up first, but the implication is that the ID has already been used in a PUSHPROMISE frame. If that would be invalid, we should be explicit that the DUPLICATEPUSH is also invalid.\nSGTM\nI think that we have a frame name problem, otherwise, this is fine.I think some wires got crossed in the suggested changes. Your comment points out that this might need reconciliation with but it seems to borrow the text directly from that PR. The outcome is that this PR is inconsistent with itself because at line the same error with PUSHPROMISE causes a MALFORMEDFRAME. I think the best path is to make this change be a MALFORMED_FRAME error and reconcile on if the group want that change."} {"_id":"q-en-quicwg-base-drafts-30d097dd5875ca4bacef4fe220125e09d6067910765b2e189967df16db602a85","text":"LGTM\nS 5.2. I don't believe this is possible on UNIX. What system call do I use to generate RST? Neither close() nor shutdown() does it.\nYou set SO_LINGER to {1, 0}, then close.\nI don't think there's a question here about the -http draft, is there?\nThe question is if it's possible on standard OSes. 
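For concreteness, the SO_LINGER trick mentioned above looks like this in Python (the helper name is mine; a sketch for Unix-like sockets, which is what the question asks about):

```python
import socket
import struct

def abort_with_rst(sock: socket.socket) -> None:
    """Close a TCP socket abortively: setting SO_LINGER with l_onoff=1
    and l_linger=0 makes close() send RST instead of the normal FIN."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                    struct.pack("ii", 1, 0))
    sock.close()
```

The peer's next read then fails with a connection-reset error rather than seeing a clean end-of-stream. Winsock exposes the same SO_LINGER option, though I have not verified the RST behavior on Windows.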
Sounds like it is on Unix. How about Windows? NAME\nThis is the same requirement as in HTTP/2. A \"MUST (but we know you probably won't)\" ?\nWell, that was a joke in 6919, so probably we shouldn't do that.\nThis is not that. We know that some implementations use TCP half-closed in HTTP. Some even rely on it to the point that things break if you don't do it. Not that things don't break for those people, but they have a narrow set of peers that they interact with. Of course, whether that set of endpoints is ALSO the same set of endpoints using CONNECT is hard to know.\nNote that this particular issue is not about half-closed. In half-closed, you generate a FIN and then keep receiving data. This is about sending an RST.\nPerhaps the text should be \"SHOULD\", and if the platform doesn't allow it, to \"close the connection\".\nThere's risk with that too, though. We encountered issues where the TCP connection was closed cleanly instead of aborted, and the client interpreted a truncated response as the (cacheable) response. It gets ugly. If you can't force a RST on the other side, you perhaps shouldn't be implementing CONNECT.\nDiscussed in Tokyo; MUST close the connection, SHOULD close with a RST."} {"_id":"q-en-quicwg-base-drafts-dd3e656a80f9d25dbdfff8c14dd3b6198ceba3ec9b44888c2ed5d6fab494f4e2","text":"Use 'on a control stream' instead of 'on either control stream'. This is consistent with the changes made by ."} {"_id":"q-en-quicwg-base-drafts-6eae9e6a0a7e19f80d4cf115dc28d125727eb7f19bcb31e34f04aec812cc1bd3","text":"in what I believe is the simplest way possible -- point to 8164 and require that be successfully retrieved prior to sending any scheme other than .\nIt would be good if it could be clarified what will happen if someone needs/tries to send an HTTP request with an http:// uri over H3. I primarily see this occurring when you have a case where you have an HTTP proxy that receives an HTTP request and then has an HTTP/3 connection with the authoritative server.
If the expectation is that all requests are upgraded by the HTTP/3 sending side, then that needs to be made explicit.\nI might propose this comes down to a discovery problem for the proxy. What method(s) do you envisage the proxy used to decide to make an HTTP/3 connection to the authorative server? This seems related to the discussion in HTTP WG - URL If Alt-Svc is being used, the flow might be that the proxy attempts to make a cleartext HTTP/1.1 connection to the authoritative server and does an upgrade, as described in RFC 7838, to HTTP/3. In which case I don't think any action is required. If the proxy wants to opportunistically upgrade via some happy eyeballs approach described on URL, we might want to provide guidance that either 1) it shouldn't be done for client requests with http:// or 2) that it should always be done.\nThis is another variant of -- HTTP/2 and HTTP/3 explicitly include the scheme and the authority with all requests, so the server knows what it's being asked for. The question is how the client decides that a QUIC endpoint is authoritative for this particular URL. RFC 7838 describes how an https://-schemed request might come to be sent to an HTTP/3 endpoint and the resolution of the issue was to ask the HTTP WG to define other ways. Adding RFC 8164 as well, an http://-schemed resource might also be requested over HTTP/3; it's up to the HTTPbis group whether they want to enable a direct upgrade there as well. I don't currently see any text required for this, but if you have suggestions, I'm happy to review or expand the discussion to the list.\nI'd rather duplicate this to and rely on for this. We were not very careful in that document to avoid mention of the specific HTTP version, but the technique is perfectly portable between versions. We only really intended to put HTTP/1.x off limits.\nPerhaps \"not very careful\" was careful enough -- it says \"such as HTTP/2,\" and mandates that the protocol needs a way to carry the scheme. 
HTTP/3 does, so we're covered. Do we think a reference to 8164 is warranted in the section about finding an HTTP/3 endpoint?\nBy \"not very careful\" I was mainly referring to the title :) As for the text, I don't know. It will depend on what else it says.\nDiscussed in London; reference 8164 and require the same check when sending a scheme other than \"https.\""} {"_id":"q-en-quicwg-base-drafts-be83751042f5dfe51c9a5285eecc3e19978dad0dd8dae9b4b624621b45a03e64","text":"I went with \"Default\" because that and \"Initial Value\" now have specific, related (but distinct) usages in the context of pre-SETTINGS settings. Swapping them seems like it would create unnecessary confusion.\nYeah that makes sense on a reread of section 7.2.5.2\nNot a new issue but: RFC 7540 has a column for . I think it makes sense that HTTP/3 does similar. Originally posted by NAME in URL"} {"_id":"q-en-quicwg-base-drafts-de312725532fa6cb756edc6d1ae47010d41094cb9e320f977be214ba60b139f5","text":"PR had a mistake in it, and some very confusing language. The rules for generating stateless resets were all mixed up in some complementary rules about generating packets that might trigger stateless resets. This attempts to split out the rules for generating stateless resets (minimum size is 5 + 16 = 21), and for generating packets that might trigger a stateless reset (min_cid + 22).\nDo we need discussion on the fixed QUIC header bit in the triggering packet and outgoing reset packet?\nNAME if you can lay out what you think the security concerns are in another issue, that would be helpful. I realize that there are plenty of compromises in this design, which make this less than ideal. But first I'd like to try to understand where you think this is falling short.\nDo we need discussion on the fixed QUIC header bit in the triggering packet and outgoing reset packet? Originally posted by NAME in URL\nMaybe I'm misunderstanding the question, but isn't the spec completely clear on that?
A stateless reset has the first bit unset and the second bit set.\nAgreed, I think this is already clear and I don't see any need to change it.\nThanks both. I just wanted to check and didn't feel like I had the brain cells to answer that. I don't like relying on the image only, but the text says \"two fixed bits\", and stateless reset - unlike Version Negotiation - is version-specific. So I'm closing with no action.\nI believe this PR addresses the stated issue, which is that the text is unclear. However, I am still concerned about the security properties of this design."} {"_id":"q-en-quicwg-base-drafts-7b1a5a17f71f21f4616ced5ffe78b33ff093be40ae0f7f4f4058c945e438ac1e","text":"This is implied in some spots, but never stated with normative language.\nUnder \"Handshakes and New Paths\", it says \"Data at Initial encryption MUST be retransmitted before Handshake data and data at Handshake encryption MUST be retransmitted before any ApplicationData\". I moved that down to this section, since I think it fits slightly better here.\nIn terms of packet size, this is similar to what the old Handshake text recommended and what PTO should have said all along, as it aligns well with TCP's TLP and RTO behavior.\nThe previous text read like it only described retransmission of lost data, and this now reads like it only describes retransmission in PTOs. Are both intended? Is there a restriction anywhere that retransmitted data must not be sent at a higher encryption level than it was originally sent, or similar?\nAll of this text is in the PTO section, so only applies to PTO. I think transport is usually the best place to talk about where data is retransmitted, but it might be worth an editorial note that data is never retransmitted at a different packet number space than the original one.\nThanks for opening other issues, NAME I'm going to merge this, since I think it's a net betterment over what we have. 
We can continue the other discussions on the other issues/PRs.\nAt the New York QUIC interim, we dealt with the problem of the server being blocked by the amplification factor by making the client arm the PTO until it knew the server had validated the path, even if it had nothing to send. From the recovery : When reviewing URL, I realized that if we allowed the server to send a PING-only packet when the PTO fired and the amplification factor had been reached, then that would elicit a padded Initial ACK frame in response, which also fixes the issue. NAME thought that was a simpler solution than the current approach and this is a bit of an edge case, so simple is probably good. The original issue was\nIn a strict sense, this means that a client can use N bytes to trigger the server to send 3N + epsilon bytes, but this fits in nicely with Probes generally not being subject to sending limits. It is somewhat gross but more elegant than a client timer with nothing in flight.\nI think this might be concerning, though I could well be wrong. Consider the case of an attacker acting as a client, using the IP address of a victim as the source address. The attacker can spoof an Initial packet that carries a Client Hello and another Initial packet that carries an ACK for the server's Initial packet that carried the Server Hello. An attacker can build the correct second packet assuming that the server uses the DCID of the client's Initial packet as its CID (or the CID is zero length), and that the server's first packet number can be guessed. If that ACK contains a delay that indicates to the server that the RTT is very small (e.g., 1ms), then the PTO could be as small as that value. Then, if the server's connection timeout is, say, 1000ms, the server might send 1000 packets to the victim before dropping the connection. Am I missing something?\nPTO always uses exponential backoff, so if the first PTO was 1ms, that'd be ~10 PTOs, not 1000.
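The backoff arithmetic behind that claim can be checked with a short sketch (function name is mine; values from the example above: 1ms initial PTO, 1000ms connection timeout):

```python
def ptos_within_timeout(initial_pto_ms: float, timeout_ms: float) -> int:
    """Count how many PTOs fire before the connection timeout,
    given that each PTO period doubles (exponential backoff)."""
    count = 0
    elapsed = 0.0
    pto = initial_pto_ms
    while elapsed + pto <= timeout_ms:
        elapsed += pto  # the PTO fires at this cumulative time
        count += 1
        pto *= 2
    return count
```

With those inputs this returns 9: the PTOs fire at 1, 3, 7, ..., 511ms and the next one would land past the timeout, so the victim sees on the order of ten packets, not a thousand.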
In terms of total bytes sent, 10 packets with a 1 byte payload are still smaller than a single full-sized packet.\nIsn't this concern why we designed the Retry mechanism in the first place? If we are concerned about address spoofing attacks, the simple solution might be to generalize retry. As in, \"if the server can predict that its first flight will contain more than 3 times the amount of data received with the client's Initial packets, then it SHOULD verify the source address using Retry.\"\nMy concern is that the current anti-deadlock mechanism is fairly awkward and rarely needed, which makes me concerned it will be incorrectly implemented. By sending a PING only packet, the server is essentially informing the client it has something to send, but can't send it due to the amplification limit. The deadlock happens when the client's Initial is acknowledged, but the client doesn't receive the server's Initial, so it can't decrypt any handshake packets. In this state, the client has nothing to send. It can happen even if the server's first flight fits into 3 packets. But it's expected that typically servers will bundle their ACK of the client's Initial with the server Initial, and this will be rare in practice.\nNAME Fair. Due to the risk of amplification attacks, the server is probably not willing to retransmit Initial packets on timer. In that case, the current spec is a bit too loose. It should say that \"If the sender is not willing to retransmit Initial packets due to protection against amplification attacks, it MUST NOT acknowledge the client's initial packet.\" So basically, either the server verifies the client's address with the Retry/Token mechanism, or it MUST NOT acknowledge client initial packets before continuity can be verified by obtaining an ACK from the client.\nThat's an interesting suggestion, but I think it requires some non-trivial changes to the server's logic in addition to not sending ACKs? Something similar to what's described in .
It may be worth opening up an issue for allowing the server to not arm a timer until the path has been validated, because I think that's a wider issue than just this PR?\nI agree that having the server repeat a PING on timer may also work, and that the ACK of the randomly chosen Server's CID would make a reasonable address ownership test. ACK of the PING would naturally unlock any transmission that is pending address ownership. Kazuho objects that this is hackable if the chosen Server's CID is guessable. We don't have a \"non-guessable\" requirement in the spec. In that case, the natural fix is the challenge/response mechanism.\nNAME Ah. Thank you for pointing that out. Though it seems that the amount of the data that the server can send could go like 4x the amount of data that the client has sent rather than the current 3x. To paraphrase, I think what we might be trading here is the initial amount of data the server can send vs. the capability of sending PINGs. My view is that the former is a scarce resource, and that I'd prefer having more space in the former even if it requires us to have some complexity.\nNAME An Initial ping is maybe 45 bytes. So 10 pings make the amplification ~3.4. I was under the impression that the 3 multiplier was semi-arbitrary, so I'm unconcerned about a fractional increase.\nI have a couple of thoughts. First, this change is not necessarily an overall simplification. It makes things slightly simpler at the client, but it adds about as much code to the server. Additional code has to be added at the server to the PTO response to only send a PING and not a retransmission when the path is not yet validated. Second, I think it does open up an interesting attack vector that is currently absent. NAME notes that there is amplification, but I don't think byte amplification is the interesting one here, since the total load is going to be < 1 MTU.
The interesting amplification here is packet amplification, which is a useful attack since UDP and packet-processing costs at endpoints / clients are relatively high. And you can get a fair number of packets back from the server for spoofing a couple of packets, in spite of the exponential backoff. I'll grant that the attack is a bit of a stretch, and does require that the attacker guess the server's CID and packet number. Though, as NAME pointed out to me offline, we allow the server to use a zero-length CID. I don't think this change is necessary. And on balance, I don't think this change is a good one.\nI do like the idea of sending a PING frame from the server as a signal to the client that it is blocked. I think we can suggest that as an optimization that a clever server might implement under the 3x limit, or when it reaches the 3x limit.\nNAME While endpoints can do that, I am not sure if it is a good idea to endorse such a behavior in the specification. The reason we have so far been fine with having the 3x limit being applied until the Handshake packets are exchanged is because we assume that the size of ServerHello would not be that large compared to the size of the ClientHello, and therefore that the chance of hitting the 3x limit before path validation is minimal. It's a hack. If we are to address the corner cases that the server hits after sending 3x amount of data, then I think we should fix the hack rather than put another hack (i.e. the PING that signals the 3x limit) above the existing hack. I mean, if we think that defining a way that allows the server to unblock the 3x limit is important, we should be considering allowing the use of PATHCHALLENGE / PATHRESPONSE frames in Initial packets.
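For reference, the 3x limit being debated in this thread amounts to a few counters per unvalidated path; a minimal sketch (class and method names are mine, not from the spec):

```python
class AmplificationLimit:
    """Sketch of the pre-validation anti-amplification limit: until the
    client's address is validated, the server may send no more than
    three times the number of bytes it has received on the path."""
    FACTOR = 3

    def __init__(self):
        self.bytes_received = 0
        self.bytes_sent = 0
        self.address_validated = False

    def on_datagram_received(self, size):
        self.bytes_received += size

    def on_datagram_sent(self, size):
        self.bytes_sent += size

    def send_allowance(self):
        """Bytes the server may still send right now."""
        if self.address_validated:
            return float("inf")
        return self.FACTOR * self.bytes_received - self.bytes_sent
```

`send_allowance() == 0` is exactly the blocked state under discussion: the server can send nothing more, not even a PING, until another client datagram arrives or the address is validated.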
I agree it's not necessary, but I find the simplification appealing, and when we first discussed this problem, no one suggested this solution, so I thought it was worth writing up. If we keep the existing text that the client has to keep sending, I'm not sure doing this on the server side is a valuable optimization, so I'd prefer to stick with one mechanism or the other. To me, the main negative is the server sending extra datagrams. And maybe that's enough to stick with what we have.\nWhen a PTO fires, the endpoint is supposed to try and retransmit something if there is data to retransmit. In this case, you'd need to special-case it so that the server doesn't in fact retransmit but sends a PING instead. That's what I meant by adding more code.\nIt also SHOULD send a PING only packet if there are bytes in flight but nothing to send.\nNot new data, any data. This was deliberate. See Section 5.3.1: \"When the PTO timer expires, and there is new or previously sent unacknowledged data, it MUST be sent.\" The point of this text was to send data if available, new or previously sent. In the case that we are discussing the server is supposed to send a PING, but it might have data to send.\nDiscussed in ZRH. NAME to decide if he wants to pursue this, potentially discuss with NAME and/or NAME\nI realized there is one major benefit of keeping the existing design, which is the client is guaranteed to have at least one RTT measurement in this circumstance, whereas the server may not. For this reason, I'm inclined to close this with no action unless others feel strongly otherwise.\nNAME came up with the same point simultaneously, so let's close this with no action.\nThere are multiple mentions of spurious retransmission in the recovery draft, but no text on what detecting this may require from an implementation. Maybe there should be? 
\"Spuriously declaring packets as lost leads to unnecessary retransmissions and may result in degraded performance due to the actions of the congestion controller upon detecting loss. Implementations that detect spurious retransmissions and increase the reordering threshold in packets or time MAY choose to start with smaller initial reordering thresholds to minimize recovery latency.\" From comments on\nSome discussion of implementation strategies here would absolutely be helpful; it doesn't obviously fall out of Quinn's current architecture.\nNAME and NAME indicated my effort at adding more text wasn't that useful, so I'm closing this.\nAs of URL, if the client's Initial is lost, there is no required behavior that leads to it being retransmitted. Because the client has received no ACKs, no is set, and no loss is detected, leading to the client only sending probe packets, which are not required to bear any particular data (\"A probe packet MAY carry retransmitted unacknowledged data\") so long as they are ACK-eliciting. The server, having no connection state, cannot do anything with these packets. Quinn probes are currently a single PING frame (plus padding, when appropriate) for simplicity, so implementing the new logic immediately broke our tests. If recovery logic now requires probe packets to bear specific data to function correctly in certain circumstances, this should be specified explicitly.\nI hit this issue too. After several hours of trial and error, I came to the conclusion that the implementation should mark the in-flight Initial packet lost in this case, and resend it. Draft-22 had an explicit statement that implementations should not mark in-flight crypto packets lost, but draft-23 does not have it anymore.
BTW, draft-23 does not allow PING in Initial or Handshake packets.\nWhy not retransmit the data without declaring the in-flight packet lost?\nI think it is rather an implementation detail to internally mark the in-flight packet lost but keep it in order to catch a late ACK.\nThere's a related problem where if the server's first flight goes missing, the only response the new logic has is to send a probe at unspecified, i.e. highest known, encryption level. The server will generally have 1-RTT keys at this point, so it sends a 1-RTT probe, which the client, not having received anything prior, cannot decrypt and drops. As above, this was previously handled by explicitly tracking whether any data was in-flight, but that got removed.\nThe packets should not be marked as lost, but data needs to be retransmitted before sending a PING. I sent out a PR , feel free to make further suggestions there.\nI'm marking this design because it adds some normative language, though I'll note that it doesn't change what I originally intended the text to say."} {"_id":"q-en-quicwg-base-drafts-6daca996b5a46bf5edf8aaaec24cb7e2eb553bd683c324648ac60e6fe297dd97","text":"Use the correct ABNF grammar to express repetition and fix the Reserved Frame Type link.\nThere is some baggage with this parameter - URL People were a bit grumpy with changing their parser 18 months ago; I think they might be even grumpier now that we have even more client implementations in the wild. What is the benefit of this change? I think we need an issue for due process and discussion.\nOh joy. This caused me to go look at our parser in Chrome, and realized we're still doing the old (pre 1097) quic=;quic= format. So Chrome has work to do no matter how we resolve this. That being said, I'd really rather not have two different ways to do this, so +1 to NAME 's comment on the motivation.\nI can remove the additional unquoted token change and keep this PR purely as an editorial fix.
Before the PR, the current text does not have correct ABNF grammar. I added the unquoted token variant since it is permitted (but not required) by the grammar at URL Since HTTP/3 over QUIC is not the only user of this Alt-Svc header, I can imagine that generic parsers are written such that they already understand the unquoted token value. Thus, adding support for this should not be an additional burden? I have no strong opinion here. The additional CPU cost of handling both unquoted and quoted forms should be neglible. Disallowing the unquoted form is a restriction of the RFC 7838 grammar. Should I bring this to the list and/or create a new issue?\nLet's create a new issue. You're correct that non-HTTP/3 implementations might need to handle this - hence the original issue which related back to structured headers (an attempt at defining some best practice in this regard -URL). Part of the problem is not CPU cost of DQUOTEs but expressing \"a list of versions\" and making sure that implementations know how to do that. Alt-Svc defines the parameter syntax but not how to handle repetition of parameters.\nI removed the single quote change, this should be a purely editorial fix now. I'll create a new issue for the unquoted parameter value. The \"list of versions\" situation does not change, if you only support one version you would write: If you support multiple, you would still use one parameter, but you have to quote it since a comma is not valid in a .\nAccording to URL, the Alt-Svc parameter value can be unquoted: This means that this can be written: instead of only: The current text in URL forbids the unquoted variant which is an additional constraint on the RFC 7838 grammar. Should unquoted parameter values also be permitted? Note that this does not change the situation when multiple versions are supported since an unquoted \"token\" type is not allowed to contain commas. 
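To illustrate the proposal, a lenient reader of the parameter value could accept both forms; this sketch (function name is mine) is one way to do it, not spec-mandated behavior:

```python
def parse_quic_versions(value):
    """Parse an Alt-Svc 'quic' parameter value, accepting both the
    quoted-string form (e.g. '"34,35"') and the bare token form
    (e.g. '35'). Per RFC 7838 a token cannot contain commas, so a
    multi-version list is only reachable through the quoted form."""
    if value.startswith('"') and value.endswith('"') and len(value) >= 2:
        value = value[1:-1]
    return [v.strip() for v in value.split(",") if v.strip()]
```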
See also the discussion in .\nRFC 7838 allows unquoted strings, but there's nothing preventing new specifications of Alt-Svc parameters from being more restrictive. I'm not sure the added flexibility of allowing unquoted versions buys us much here, and it comes at an implementation cost.\nIf alt-svc can be specified in configuration files, it is likely to result in interop issues if unquoted is not permitted.\nI support allowing unquoted token for alt-svc quic parameter. quic=1 and quic=\"1\" are semantically the same, and there is no reason to disallow unquoted token. Moreover, requiring quoted-string means that we cannot use generic alt-svc parser because such parser can parse quic=1 without an error. You need to write your own parser to detect that the DQUOTE is really there. If you think that detecting it is not necessary, then requiring quoted-string is not necessary in the first place.\nPeople and implementers will be surprised (and do wrong) if the rules and constraints of RFC7838 aren't kept.\nTo highlight my earlier point, on slack the following was posted The issue appears to not be the unquoted ma=3600, but it does suggest we will see unquoted strings in the wild, and URL is said to also use unquoted ma=3600.\nBut that's the ma= field, not the quic= field, right? (This issue is about the quic= field, I think)\nAs I said elsewhere, the current text in draft-23 stems from a Structured Headers issue related to wanting to advertise multiple versions of QUIC. URL Alt-Svc isn't a structured one but if it were (e.g. if someone wanted to write a generic Structured Header parser) then there was a danger that the rules preventing dictionary key repetition would have prevented an endpoint sending multiple parameters e.g. . 
To mitigate the risk, the quic parameter was changed to a DQUOTED comma-separated list as part of URL This issue addresses the correctness within the Alt-Svc spec but does it reopen the can related to Structured Headers?\nI think that the problem here is independent from whether structured headers should be applied or not. - Structured header issue: duplicate keys. - Invalid grammar according to RFC 7838 since \"token\" does not include commas. - valid grammar according to both RFC 7838 and the QUIC description. However, if only a single version has to be advertised, then the quoting is not necessary: - valid grammar according to both RFC 7838 and the QUIC description. - valid grammar according to both RFC 7838, but QUIC currently forbids it. In I proposed to fix the grammar (editorial issue). However, at the same time I could also make the grammar match RFC 7838 (the last bullet point above). NAME I would mind not making the semantics more restrictive (e.g. restricting the values of the QUIC parameter to hexadecimal only), but restricting the parser grammar would impose extra restrictions on those who have to generate the header. It is conceivable that implementations will continue accepting the RFC 7838 grammar, and end up accepting while others would reject it and accept only. By making sure that the grammars align, I hope that this becomes a non-issue. So far NAME NAME and NAME seems to support this proposal and NAME is not sure. Can this be considered consensus? :smiley:"} {"_id":"q-en-quicwg-base-drafts-c9515b49797866f664fc270fad18092057a0c65e256ae5eaf58ac19c99663818","text":"The current text was unclear to me: how does one enforce a MUST on TLS? 
The updated text hopefully clarifies that it's legal for a server to send a NewSessionTicket without the \"early_data\" extension - that indicates that the server supports resumption but not 0-RTT, as it does in TLS."} {"_id":"q-en-quicwg-base-drafts-569ecc7f498b326c50be2924c35ef7291f97ef4696a6b0ddb87933a91b002b5f","text":"NAME The text in the PR adopts 1 PTO as a suggestion, as I am not sure if there is a point in that value being the exact limit. If it needs to be, we can rephrase the text to \"MUST retire the corresponding connection IDs in one PTO and send corresponding RETIRECONNECTIONID frames.\"\nI'm a bit worried that a stateless reset from a retired CID could be used as an attack vector by simply delaying packets en-route. However, if the reset is only effective on CIDs that are no longer recognised that shouldn't be a problem.\nAt the moment, states, quote: Upon receipt, the peer SHOULD retire the corresponding connection IDs and send the corresponding RETIRECONNECTIONID frames in a timely manner. Failing to do so can cause packets to be delayed, lost, or cause the original endpoint to send a stateless reset in response to a connection ID it can no longer route correctly. As the text uses \"SHOULD\" rather than a \"MUST\", some view it as an optional requirement to switch to using a new connection ID in such a situation. Others think that it is a mandatory requirement for an endpoint to switch to a new connection ID. If we are to consider the latter behavior to be the case, I think the text should better be changed to \"MUST retire the corresponding connection IDs and send the corresponding RETIRECONNECTIONID frames in a timely manner\".\nMy personal take is that it would be nice if the processing of NEWCONNECTIONID frame and RETIRECONNECTIONID frame can be an optional feature. 
QUIC is already very complex, and it would be beneficial to have some features being optional, in the sense that endpoints can simply discard certain frames when not implementing certain features. IIUC, at the moment, these two frames are optional for an endpoint that does not support migration. OTOH, from the robustness viewpoint of the protocol, it does make sense to change the text to MUST. We might have already crossed the river by introducing the Retire Prior To field. The third option is to add a TP that indicates if an endpoint supports retirement of connection IDs...\nThere was a lot of discussion when the Retire Prior To feature was added. At the time, I felt it pretty clear that it could not be optional from our server's perspective. I agree the sentence with the SHOULD, if read in isolation, could be interpreted as making the feature optional. But the sentence immediately following it makes it pretty clear that it's not optional. I understand the desire to make things optional for simplicity's sake, but a primary point of this feature, when it was added, was that it was required. Making it optional now would, IMO, defeat the purpose of the feature.\nMy understanding is that this is optional in the sense GOAWAY is optional. At some point, the connection is likely to terminate abruptly if you don't follow the request.\nThat's one way to look at it. The difference here is that we're not trying to kill the connection with Retire Prior To.\nThere was definitely lots of discussion around the requirement level here when this was added (see ), NAME asked: There was good discussion following that ( and scroll down). My understanding of was that the endpoint being asked to retire a CID really SHOULD retire it or else the peer may stop responding to that CID. After the endpoint requesting retirement gives up on a CID and forgets about it, packets will either arrive with that CID or they will not. 
It has two choices: it can choose to process them if they arrive with the wrong CID, or it can drop them. The enforcement mechanism here is to stop processing the \"bad\" CIDs, not close the connection, especially as you take into account potentially delayed-but-valid packets. I think the current stance of \"really SHOULD stop using such a CID, but the other endpoint needs to be careful about enforcing that\" is the right one; if there's a wording update that would help with understanding there, then it would be great to update the wording as an editorial issue.\nWhatever the answer here, the SHOULD was chosen because \"timely manner\" isn't defined in a testable fashion. If someone wants to argue for MUST, then we need a firmer definition of the reaction.\nAnd to agree with NAME the squishiness here only exists up until the point that this is \"do this or else\". We should make it clearer that the cost of non-compliance is losing the connection. Not always, but an endpoint is entitled to cut and run if their peer doesn't comply.\nNAME Thank you for the explanation and the links. They help a lot in understanding how we ended up here. NAME Yeah we already state that \"an endpoint MAY discard a connection ID for which retirement has been requested once an interval of no less than 3 PTO has elapsed since an acknowledgement is received for the NEWCONNECTIONID frame requesting that retirement.\" So I think it's testable. Considering that we have a MAY on the side that initiates the retirement, I think we can have a MUST on the receiver side (assuming that we want to).\nThat almost works, except that PTO isn't a shared variable. It's subjective and so endpoints might disagree on where this line is. 
At 3PTO, that's probably not a disagreement worth having, but you are effectively recommending instant compliance if you set the bar that low, as packet loss on any packet sent that attempts to comply might cause it to take up to 3PTO to get through.\nIIRC, the base of the SHOULD was that we didn't have a specific timeout anyone felt comfortable enforcing in the absence of shared timers between peers. We've had a general principle that requirements you can't tell whether the peer violated are probably SHOULDs instead of MUSTs. As Nick says, the SHOULD is intended to refer to doing so promptly, not to doing so at all.\nWhile I agree that we have tried to make MUST requirements testable, I am not certain if it is a good idea to avoid use of MUST when a requirement cannot be tested (or if doing so is a good idea, per the definition of the keywords in RFC 2119). At least for this particular case, isn't it a requirement for an endpoint to drop the old connection IDs? Though I could well be wrong in how the keywords are used in practice.\nI'm fine saying MUST drop in a timely manner, and being explicit about what we mean by timely manner (3 PTO from the peer's perspective). Obviously, from the peer's perspective, that is enforceable. Locally though, it's obviously not as clear cut.\nWe need a PR. NAME do you think that you can do this? As discussed, should be that the receiving side will retire within one PTO. The sending side can enforce at 3PTO, by dropping the connection.\nDiscussed in Cupertino. NAME to quickly rev the PR to incorporate NAME suggestion.\nNAME the delayed packets are handled by this text: If you've followed the requirement to retire as indicated, then a delayed packet triggering stateless reset can't hurt you.\nLooks fine to me. I'm also fine with Martin's suggestions as well.\nThe endpoint here has always been going to do whatever it does, regardless of whether we said SHOULD or MUST, so this seems fine. 
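The retirement timing the thread converges on (the receiving side retires within one PTO; the requesting side may enforce by discarding state after 3 PTO) can be sketched as follows. This is an illustrative sketch only -- the function names and the way PTO values are passed in are hypothetical, not the spec's pseudocode.

```python
# Hypothetical sketch of the CID retirement timing discussed above.

def may_discard_retired_cid(now: float, ack_time: float, pto: float) -> bool:
    """The endpoint that requested retirement (via Retire Prior To) MAY
    drop state for the old CID once >= 3 PTO have elapsed since its
    NEW_CONNECTION_ID frame was acknowledged."""
    return now - ack_time >= 3 * pto

def retirement_deadline(request_received: float, pto: float) -> float:
    """The peer is expected to send RETIRE_CONNECTION_ID promptly --
    per the discussion, within roughly one PTO of the request."""
    return request_received + pto
```

A delayed packet using the retired CID after the 3-PTO window may then trigger a stateless reset, which is harmless if the peer has already retired the CID as required.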
I do like the direction for the other endpoint to wait 3PTO, that would be nice to include."} {"_id":"q-en-quicwg-base-drafts-705c4a2e4834545d72a4e5df364655b843e4206a39b489814650c71dfa7cc41a","text":"At the moment, we state that a QUIC packet that contains frames other than ACK and PADDING is ack-eliciting (), but I wonder if that is true for the CONNECTIONCLOSE frame. A CONNECTIONCLOSE frame never elicits an acknowledgment. IIUC, it is not meant to be governed by the congestion controller. Considering these aspects, I think we should add CONNECTION_CLOSE to the list of non-ACK-eliciting frames.\nThat also ensures the congestion controller never inadvertently blocks sending a CONNECTIONCLOSE, which is consistent with the intent. If we make this change, I believe APPLICATIONCLOSE needs to be on the list as well. Now that we discuss ack-eliciting in transport, I think we'd need to change both transport and recovery and mark this as design. So I think this would be a slight improvement, but I can also imagine people claiming that CONNECTION_CLOSE is already special, so there's no need for a change.\nAPPLICATION_CLOSE is no more. This seems like a fine change.\nThis seems sensible.\nDiscussed in Cupertino. Obvious correction.\nSeems right to me. Editorial tweaks suggested."} {"_id":"q-en-quicwg-base-drafts-c6f64b7c635501c9bf59804800baf9e650908e76980eb2f9bfed2ebb71b71b70","text":"Some definitional work for terms that were poorly used. Note that \"host\" in the remaining uses is strictly interpreted in the RFC 1122 sense, so I left those alone. I considered sorting the terms section differently, and might still alphabetize that, but this isn't problematic just yet.\nThe word \"host\" is undefined. If it is equal to \"endpoint\", it should be replaced. Otherwise, \"host\" should be defined in Sec 1.2. The word \"address\" sometimes means \"IP address\" and \"a pair of IP address and port\" in other cases. Sec 1.2 should define \"address\" as \"a pair of IP address and port\". 
And \"IP\" should be added in front of \"address\" which means \"IP address\".\nfixed this.\nThank you!"} {"_id":"q-en-quicwg-base-drafts-85eba4bf402148f43b0e26aebd0ce29110529e8f27faecf7f3e1c772dde33a87","text":"As concluded in Singapore.\nWhen receiving a packet with an unexpected key phase, an implementation should compute the new keys: This creates a timing side-channel, since an attacker can modify the KEYPHASE bit, thereby making a peer compute the next key generation. The packet will be discarded, since the header is protected by the AEAD, but the fact that the key is computed on receipt of that packet is a side-channel that might be used to recover the header protection key. The way to avoid this side-channel is to pre-compute the N+1 key when rolling over keys from key phase N-1 to N. An implementation would then use the N+1 keys when receiving a packet with an unexpected KEYPHASE, in exactly the same way it would do decryption on a packet sent at key phase N. This however contradicts the text that claims that an implementation may choose to only keep two read keys at any given moment: Directly after a key update, an implementation would have to store (at least) 3 key generations: N-1 for a duration of 3 PTO, in order to be able to decrypt delayed / reordered packets, N, as well as the precomputed N+1 to avoid the timing side-channel described above. I don't think that storing 3 keys is a problem (I think NAME disagrees, I'll let him explain his reasoning himself), but the spec shouldn't pretend that it's possible to create an implementation that both isn't vulnerable to the timing side-channel and at the same time gracefully handles moderate amounts of reordering around a key update.\nI think you are correct in pointing out that there is a timing side-channel, and that we need to fix the issue. OTOH, I disagree that requiring all endpoints to be capable of retaining 3 keys is not a good way of fixing the issue due to the following reasons. 
Firstly, I do not think we can justify why an endpoint needs to be capable of retaining that many keys, even though Key Update is expected to happen very rarely. The purpose of Key Update is to replace a key after extensive use; we would never be using a key that becomes weak just after a couple of PTOs. Considering that, using 2 keys should be sufficient; 3 keys is likely a sign of a needlessly complex design. Let's compare the proposal of the 3-key design with a 2-key design (that allows the receiver to delay the installation of the next key for 3 PTO after a Key Update, which is ). In the 3-key design, the next key is installed when a Key Update happens. The old key is disposed of 3 PTO after a Key Update, and the slot becomes NULL. Note also that an endpoint needs to retain a mapping between packet number and key phase to determine whether an incoming packet is to be decrypted using the oldest key or the newest key (those two keys share the same KEY_PHASE bit on the wire). In the 2-key design, installation of the next key and the disposal of the old key happen at the same moment (i.e. 3 PTO after a Key Update). An endpoint would always retain 2 keys once the handshake is confirmed. There is no need for retaining a mapping between packet numbers and key phases. The parity of the key phase bit directly maps to the decryption key. To me it seems that the latter is simpler. As stated above, there is no need for doing Key Update every 3 PTO, and therefore, assuming that we need to make a change due to the attack vector, I think we should simply allow endpoints to delay the installation of the next key rather than increasing the minimum state that an endpoint needs to be capable of maintaining.\nI agree, there's not really a need to do key updates that frequently. However, I like the fact that we're currently defining a clear rule: An endpoint is allowed to update to N+1 as soon as it has received an acknowledgement for a packet sent at key phase N. 
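The 2-key receive design described above can be sketched as follows. This is a hypothetical illustration: derive_next_key() stands in for the real HKDF-based key-update KDF, and the class and method names are invented for this sketch, not taken from the spec.

```python
import hashlib

def derive_next_key(key: bytes) -> bytes:
    # Placeholder for the real key-update KDF; only the structure matters here.
    return hashlib.sha256(key).digest()

class TwoKeyReceiver:
    """Sketch of the 2-key design: exactly two read keys exist at any time,
    indexed directly by the KEY_PHASE bit, so no packet-number-to-phase
    mapping is needed to pick the decryption key."""

    def __init__(self, phase0_key: bytes):
        # Slot 0 holds the current generation, slot 1 the (precomputable) next.
        self.keys = [phase0_key, derive_next_key(phase0_key)]
        self.current_phase = 0

    def key_for(self, key_phase_bit: int) -> bytes:
        # The parity of the phase bit selects the slot directly.
        return self.keys[key_phase_bit]

    def on_update_settled(self):
        # Called 3 PTO after a key update: install the next generation by
        # overwriting the old key's slot, and flip the current phase.
        new_phase = self.current_phase ^ 1
        self.keys[self.current_phase] = derive_next_key(self.keys[new_phase])
        self.current_phase = new_phase
```

Because the next-generation key can be computed at installation time rather than on receipt of a suspicious packet, this also bounds the timing side-channel from attacker-flipped KEY_PHASE bits.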
You should still keep track of the packet number at which the key update happened, in order to check that the peer is not reusing the old key to encrypt packets after it already switched to new keys. Strictly speaking, this is not required for a working implementation, but it's a really easy check to catch a misbehaving peer (or detect an active attack). As I've mentioned in the discussion in , the reason I'm skeptical about delaying key updates by 3 PTO (and effectively dropping all packets with a higher key phase until this period is over) is that we're replacing a very clear definition by an approximate rule: the PTO is calculated locally, and depending on the network conditions, the two endpoints might have different opinions about the exact value of the PTO. An endpoint that wants to ensure that no packets are dropped due to the unavailability of keys would have no choice but to wait >> 3 PTO. While this should be well within the safety limits of the cipher suites we're using, not having a clear criterion seems to be a sign of suboptimal protocol design.\nPerhaps we should state that an endpoint should precompute the key early but not update keys too frequently to prevent timing attacks and overhead. That is not necessarily a bad thing, but it is unfortunate that there is no clear definition. We could also say the idle timeout period. As long as we compute keys using SHA256 it might not be a concern, but computing keys too frequently could exhaust safety parameters on other potentially faster algorithms.\nTo clarify, the key phase bit is inside header protection, so the attacker in question is the peer?\nNAME No, this is an on-path attack. Header protection is not authenticated, so an attacker can just flip bits in the header. Decrypting the packet will fail then, since the header is part of the authenticated data used in the AEAD. 
We have a timing side-channel as soon as we treat packets differently based on values in the unprotected header, before authenticating the packet. This is analogous to the .\nThanks for the extra explanation, I'm not sure that's a very good side-channel, but I agree it is one.\nDoes anyone have any ideas what kind of actual attack could take advantage of this? The attacker can only cause you to install your keys early once per key phase change. Assuming you hardly every change key phase, what then? I agree it's a way the unauthenticated attacker can change your state, but I don't think it's a big enough problem we need to worry about fixing.\nThat's a good question. I agree that this attack vector doesn't look very promising at first sight, but then again I'm always surprised what cryptanalysts can extract from a single bit. That depends. If you create the new keys, try decrypt the packet, and then discard the keys (if decryption fails), the attacker has an unbounded number of tries. Obviously, this would also be a DoS vector, since computing new keys is more than order of magnitude more expensive than an AEAD operation (at least in my implementation). If you cache the new keys once they are computed, then the attacker indeed only has a single shot per key phase. However, then you might as precompute the new keys when installing the current key phase. The problem with that is that the spec currently suggests that you only ever need to keep state for 2 key generations at any given moment. This is incompatible with caching the keys for the next key phase, as I described above.\nNAME But the current spec. also recommends to wait for 3 PTO before initiating the next Key Update. That would not change, and as we agree, nothing in practice requires us to do Key Updates every 3 PTO. I agree that an endpoint might still want to track the packet number at which key update happened. 
But even in such a case, the simplicity of the 2-key design still exists, in the sense that you'd only use it for detecting a suspicious peer. It's much more straightforward and easier to test, because the rare case (of actually using 3 keys at the same time) never happens. To clarify, I am not saying that the 3-key design is incorrect. My argument is that the lack of a clear signal is a non-issue (because there is no need to do a Key Update every 3 PTO or more frequently), and that therefore that argument cannot be used for prohibiting people from choosing the 2-key design (which is simpler).\nIsn't the middle ground here to pre-compute the N+1 key when the N-1 key is dropped, with the drop happening at the first of: (a) 3xPTO after the N key was first successfully used, or (b) the first arrival of a packet which purports to be N+1. The attacker's ability at that point would be to cause any delayed N-1 packets to be dropped, but if we're presuming the ability to bit-flip, they can cause those packets to be invalid and dropped anyway.\nNAME That certainly reduces the information being potentially exposed to the attacker to one bit per every key update. But no less than that, for example when an attacker injects two packets slightly more frequently than every 3 PTO, with the only difference between the two packets being the value of the KEY_PHASE bit. In such an attack scenario, the attacker can tell which of the two packets caused the calculation of the updated key for every key update. Admittedly, the leak is tiny; but I am not sure if it is tiny enough that we can ignore it.\nTo recap my comments at the mic: it's not clear to me what the attacker is learning here. Can someone expand on this?\nDiscussed in Montreal: NAME to work with NAME and NAME to work out the details\nI'd like to second NAME comment at the mic. I don’t think this is an issue for which we need text. Without loss of generality, let’s assume that in the worst case an attacker learns the entire unprotected header bytes for every packet. 
(That is, that header protection does nothing overwhelmingly helpful.) Revisiting the header protection algorithm, it basically works as follows: H' = H XOR mask(sample), where H' and H are the protected and unprotected header bytes, respectively, and sample is known to the adversary. If we assume mask is the output of a PRP, and therefore both IND-CPA secure and KPA secure, an adversary learns nothing from known plaintext and ciphertext pairs. (The typical game-based IND-CPA security definition says that an adversary with access to an oracle for encrypting any message M_i of its choice has negligible probability in selecting b, upon giving the challenger M_0 and M_1 and receiving an encryption of M_b. Recovering the header protection key would break this assumption, as it would let the adversary encrypt M_0 and M_1 -- without the oracle -- and trivially match against M_b.) All that said, I think adding text describing that endpoints should precompute keys is fine. After all, it'd be nice if header protection worked as intended.\nNAME and NAME to resolve their issue before we'll take it forward\nDiscussed in Singapore; proposal is to change SHOULD to MAY regarding pregeneration of next phase of keys."} {"_id":"q-en-quicwg-base-drafts-1251ac9ee63f8a50f01b3ed969c0b31412ac2fb736eef89a74d7d6dea7f6d44a","text":"The Retry packet can be discarded.\nThe Retry packet currently allows sending an empty token. The Retry token is defined as an opaque blob where \"indicates that x is variable-length\", which also includes zero-length values. On the wire, an Initial packet with a zero-length token is identical to an Initial without a token. However, there's a difference on the client side, since a client will perform at most a single Retry.\nTagging this editorial. 
This was clearly intended to be considered non-zero length."} {"_id":"q-en-quicwg-base-drafts-fe01872b9688dbcbd5182856d820f8bc5229e68ef041b3db14b9d5c13342da50","text":"Addresses feedback from NAME and NAME to add a reference to the recovery doc on handling bursts resulting from flow control credit; that advice on when to send *_DATA frames (at least 2 RTTs before sender is blocked) is too specific. I've also done some editorializing to fix flow.\nIs this the right place to discuss pacing? In my mind, we're talking about different layers here. Bursty sending could happen for many reasons; only one of them is increased flow control credit.\nYeah I was wondering that too. We do talk about application limited in the recovery doc, so maybe that's the place to clarify this. I'll see (after the weekend) if it makes sense to add anything there.\nI think we should discuss what to do in one place, but given the sort of traffic bursts that can arise if you happen to get this wrong, I am still keen to see a couple of sentences here that explain the pathology when a sender has cwnd and no credit, and then suddenly gets lots of credit. We see the enormous burst, and it is REALLY not nice if you don't have a pacer to fix it (or figure out the receiver should have sent the credit earlier).\nNAME in the recovery draft covers the general case of an under-utilized cwnd, which applies here. The same bursting can happen with a bursty app as well, but we've said it all in one place -- in that section of the recovery draft. I think that section could use some editing work (which I'll take on separately), but we've already said there that it applies to flow control. I think that ought to cover the pacing concern."} {"_id":"q-en-quicwg-base-drafts-1d6476327ccca9b30d0e2da7ea868c569d10bbece10f89cbda4d48c9db59b3fe","text":"Fixes issue\nSect. 7 in transport refers to packet number spaces without any reference to what that might mean. These are only introduced later in sect. 12.3. 
Also the text talks about crypto sequence numbers and the text is not very clear about these being different from packet numbers if you haven't been through the TLS text.\nThis is still an issue. I think adding references to where packet number spaces are mentioned (List of Common Terms and Section 7) would be sufficient.\nSounds great, would you be willing to write a PR, NAME?\nFixed in ."} {"_id":"q-en-quicwg-base-drafts-55f9e61f1b658627a09f916d188ba3108139c612189eb0e554aaf390b0ebd2cc","text":"Fixes issue\nThanks for this. It's a lot clearer this way.\nrestricts frame type varint encoding: The relevant sentence is: This means that the restriction may not apply to frames defined by QUIC extensions (such as the frame in the Delayed Acks extension). Must each extension spell out its own frame type encoding rules? It would be easier to change the transport draft to extend the minimal encoding restriction to not-as-yet-specified frames as well.\nI read the first sentence as applying to all QUIC frame types, core or extensions. The examples confuse the matter by explicitly mentioning the core frames. Section 19.21 contains some guidance for extension frames; adding a pointer back to the encoding rules from would be a good editorial change.\nThe sentence I highlighted talks about \"frames in this document\" and then \"these frames\", effectively overriding what the first sentence may imply.\nyes I think we agree :)\nI think we all agree that it can be made more explicit. The first sentence says that the rule for all frame types is that they be minimally encoded. Applying that principle to all frame types defined in this document means that they get one-byte encodings. We should clarify that it's an application of the rule, not a limitation of it.\nWhy did we have this rule in the first place? It just seems to complicate things in unexpected ways.\nNAME See . 
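The minimal-encoding rule for frame-type varints discussed above can be checked mechanically. The sketch below follows the standard QUIC variable-length integer layout (2-bit length prefix selecting 1, 2, 4, or 8 bytes); the helper names are illustrative, not from the draft.

```python
# Sketch of decoding a QUIC varint and checking the minimal-encoding rule.

def decode_varint(buf: bytes) -> tuple[int, int]:
    """Decode a QUIC variable-length integer; return (value, length)."""
    prefix = buf[0] >> 6          # top two bits give the length class
    length = 1 << prefix          # 1, 2, 4, or 8 bytes
    value = buf[0] & 0x3F
    for b in buf[1:length]:
        value = (value << 8) | b
    return value, length

def is_minimal(value: int, length: int) -> bool:
    """True if `length` is the shortest varint encoding of `value`."""
    if value <= 0x3F:
        return length == 1
    if value <= 0x3FFF:
        return length == 2
    if value <= 0x3FFFFFFF:
        return length == 4
    return length == 8
```

For example, frame type 0x02 sent as the two bytes 0x40 0x02 decodes to the same value but fails is_minimal, so a strict parser can reject it; this is the dispatch simplification the comment above alludes to.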
IIRC the rationale for using minimal varint encoding is that stacks that support only the standardized frames can live with a simpler dispatch scheme that takes one byte as an input (regardless of the dispatch method being a table or a switch clause or whatever). While I do not have a strong opinion regarding that requirement, I'd prefer not revisiting the discussion.\nPR has been merged -- closing bug."} {"_id":"q-en-quicwg-base-drafts-5aaf8ef0e3c9650ebb5288b824c80fc676b2a66b50e1d703e0f8d8bff0151f54","text":"These are all MUST in recovery and the editors discussed this and agreed that these should be MUST too."} {"_id":"q-en-quicwg-base-drafts-132fd33bc0b50d111bbf738417a488c7ad4d0ed260620820b69d62752137d010","text":"Clarifies why a client might send a Handshake packet, as well as improving the surrounding anti-deadlock text. Changes 2 SHOULDs to MUSTs to line up with recovery.\nWe don't provide examples of the weird cases where you need to send Handshake packets to prevent amplification deadlock. The strangest one is where the client has all Initial data (and therefore has Handshake keys), but never receives a Handshake packet from the server. In this case, the client needs to send a Handshake packet that won't contain ACK or CRYPTO frames. We should explain this in a note. See .\nI think the logic in the recovery draft misses some of these cases entirely. 
The rule in draft 23 is: but in your example case there is not necessarily any unacknowledged data to transmit in a probe, and a is forbidden at Handshake level, leaving no reasonable options to compose an ACK-eliciting packet.\na few suggestions"} {"_id":"q-en-quicwg-base-drafts-3c0c266b6fc6c0226cbe82a04f321499f865b00d68ed0e939c16b5bfca5d59b8","text":"Currently, the \"Ignoring Loss of Undecryptable Packets\" section indicates that some loss can be ignored, but now that we have multiple packet number spaces, I think we can constrain the set of ignorable losses to those below the smallest acknowledged packet in a packet number space. Ignoring the loss of a subsequently sent packet was not the intent.\nI might have to put in a lint for HTAB..."} {"_id":"q-en-quicwg-base-drafts-5de3e347aa6edd3fd881772788efcd678919dbd918c711ff0fd84694c05e3d6d","text":"It might pay to reconcile: {{vars-of-interest}}) to be larger than the congestion window, unless the packet is sent on a PTO timer expiration; see {{pto}}. And now exceeds the recently reduced congestion window. Now, you might implement the latter using PTO machinery, but that text doesn't really suggest that in any way.\nWhere does this second sentence come from? I'm pretty sure I haven't implemented anything like this in quic-go. Should I?\nThat sentence is the definition of the . We haven't implemented that either, and I was wondering if it truly helps.
I thought we discussed adding an informative reference to PRR when we added this text, and I'm not sure why we didn't.\nseems to be the wrong issue you are pointed to...\nGood catch, it's\nI've suggested some symbolic references."} {"_id":"q-en-quicwg-base-drafts-cc3a1d4dc0306eddfb0e87da067fb7bec869d01bd3fce90dd5df92fe402b7413","text":"Having non-overlapping conditions makes sense; adjusted wording.\nThe recovery draft : This is consistent with the pseudocode, in particular . When the server receives the client's final handshake flight, it discards handshake keys and hence cannot acknowledge it. When the client receives the resulting , the handshake is confirmed, but no Handshake or 1-RTT acknowledgements have necessarily been received. This results in a PTO being set even though the client's address has certainly been verified and there are no ack-eliciting packets in flight. As written, the pseudocode would in fact set a PTO for the Initial space at a garbage time. I think this could be corrected by appending \"and has not discarded handshake keys\" or \"and has not confirmed the handshake\" to the language, and adding the analogous clause to the conditions checked in the final statement of .\nI thought that was implied, but implications are bad, so I agree it's best to explicitly state it. I'm happy to review a PR if you have time to write one, or I'll fix this fairly soon ideally.\nWhen a reverse proxy forwards a HTTP response to the client, sometimes it sees the upstream server terminate the response early. For example, a HTTP/1.1 connection being closed prior to transmitting number of bytes specified by the field. Or a chunked response being closed abruptly. The only way of delegating such a condition to the client in HTTP/2 was to send a RSTSTREAM frame with . However, use of the error code is awkward in this case, considering the fact that the error did not happen within the communication between the endpoints using the particular HTTP/2 connection. 
I think it makes sense to have an error code that better designates the case. One way to address the issue in H3 would be to change to , and let the server use the error code (as an argument of the RESET_STREAM frame) to indicate that the response has been \"cancelled\". Originally proposed in URL\nWhat if you do not want to admit to being a proxy? (who does?)\nHTTPMESSAGECANCELLED would simply mean that the transfer of the response is cancelled, regardless of the server being an origin or a reverse proxy.\nI can get behind this. As long as we're not looking for a byte-by-byte reproduction of the bytes the intermediary received (as was once requested and rejected, I think at the Seattle interim), then it makes sense to signal that the message was killed.\nThis is basically an expansion of the CANCELLED/REJECTED codes to allow a server to do either. WFM.\nNAME Yes. And thank you for the PR."} {"_id":"q-en-quicwg-base-drafts-4d9d2b9ee9f73e0596362f1b17266acedfb651d6d082e94275ff2757cdf71832","text":"Adds editorial text about why a receiver could decide to send ACKs less frequently than every two packets. This keeps coming up(ie: ), so I thought text was necessary.\nAs currently specified, the default QUIC ACK policy models TCP’s ACK policy, but consumes significantly more return path capacity. This can impact resource usage of the return channel, can constrain forward throughput, and can impact the performance of other sessions sharing an uplink (draft-fairhurst-quic-ack-scaling). This was not discussed in Zurich - where discussion moved instead to the separate topic of draft-iyengar-quic-delayed-ack. In many current network segments (e.g., cable networks, mobile cellular, WiFi and other radio tech), the effects of asymetry are presently mitigated by network devices RFC3449 that rely upon observing TCP headers. This results in ossification problems, but does practically have beneit. 
It reduces the TCP ACK rate over the constrained link, it reduces consumption of \"radio\" transmission opportunities, and reduces traffic volume from ACKs. These mitigations are not possible, nor desirable for QUIC. However, the default performance of QUIC is therefore much worse than for TCP. In QUIC, the default ACK Ratio has to set by the endpoints implementing the protocol because the sender and receiver have no information about the \"cost\" of sending an ACK using a link along an asymmetric path. These endpoints can also choose a non-default value to enable a new CC, scale to higher rates, etc (e.g. draft-iyengar-quic-delayed-ack). We have shown there can be significant benefit from an ACK Ratio of 1:10 and this is expected to alos have benefit for higher levels of asymmetry. The proposed change from 1:2 to 1:10 is predicated on QUIC's design decisions: starting a connection with an Initial Window of 10 is acceptable, QUIC uses PTO as the loss detection method (a predicate to issue ), and QUIC has some form of pacing at the sender, at least to mitigate IW bursts.\nEven though we've avoided straying this far from the TCP spec, I think allowing this would be good for the internet. It should probably be documented for TCP, too. How does ack stretching of this nature work for BBR bandwidth estimation? It's harder to use the ack pacing, no?\nI don't believe QUIC ACKs are significantly larger. In fact, I believe the ACK frame itself is typically smaller than TCP ACKs, but the savings there are used on QUIC's 16-byte auth hash and likely a non-zero-length CID. Do you have a size analysis written up somewhere? I think this is the core issue. It's not that QUIC ACKs are really consuming significantly more capacity in terms of bytes on the wire, it's that middleboxes can't decide to thin the ACKs, because they don't know which packets are ACK-only packets. 
Did you implement the optimization already described in the transport draft?: \"As an optimization, a receiver MAY process multiple packets before sending any ACK frames in response. In this case the receiver can determine whether an immediate or delayed acknowledgement should be generated after processing incoming packets.\" NAME did and I believe found it hugely reduced the number of ACKs, sometimes too much. As I've said before, I would really prefer not specify an approach in QUIC transport that is a heuristic like does. In particular, on lower bandwidth networks, 10 packets may be the entire congestion window and is too large a value. On higher bandwidth networks, every 20 packets has shown to be slightly better than 10, at least when using BBR. I wrote to add some editorial text about why every 2 packets is a SHOULD, not a MUST. Chrome did not default enable the ACK strategy recommends until BBR was launched, because there was a performance regression with Cubic. I'd rather not assume everyone is using BBR or a BBR-like congestion controller. NAME BBR does fine with stretch ACKs. Cubic and Reno suffer in some cases, however.\nQUIC ACKs are larger! The smallest IPv4 QUIC ACK packet is 51 bytes. The ACK frame is 4 bytes, but is sent using a QUIC packet. For IPv4 this is: 20 (IP) + 8 (UDP) +3 (1+1+1 QUIC header ... assuming a 1 B CID) + 4 (minimum ACK frame size)+ 16 (crypto). Other types of frames may also be sent in the same packet, increasing this size. That size difference matters. The point is that QUIC ACKs are consuming more capacity in terms of bytes on the wire, and often also important: in more transmission opportunities at the physical layer. That makes QUIC perform more poorly than it needs to across some paths. The text \"As an optimization, a receiver MAY process multiple packets before sending any ACK frames in response. 
In this case the receiver can determine whether an immediate or delayed acknowledgement should be generated after processing incoming packets." is permissive to do something different - alas, the effect of the ordering of the processing depends on rate, timing, etc. The present text is not a recommendation to do something appropriate, which I think it needs to be. If this were to describe that a receiver SHOULD reduce the ACK to data packet ratio to 1:10, then that would help. On low bandwidth links, I agree that waiting for 10 packets would not be suitable, but then wouldn't the 4 ACKs/RTT rule still cover this case? (I have no problem believing an ACK every 20 packets can be slightly better than 10 in some cases, but then we have chosen IW at 10 and that relates to burstiness and responsiveness of the basic spec, so I think 1:10 makes more sense as the recommendation. If the CC and path information suggest 1:20 - perhaps as the rate increases - then a sender can "tune" as per the transport parameter proposal. I STILL assert that this is a different issue from the expected default.)

Would it help to give a recommendation like: if the congestion window is large, the ACK ratio should be reduced? Not sure if we can easily give a threshold for the window, but to align with the 4 ACKs per RTT rule we could say 40, or to be more conservative maybe 100?
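The receiver-side decision being discussed (ACK every N ack-eliciting packets, but never wait longer than a delayed-ACK timer) can be sketched as follows; the class name, fields, and defaults are illustrative assumptions, not the draft's normative algorithm:

```python
class AckPolicy:
    """Sketch of a receiver-side delayed-ACK decision: acknowledge every
    `threshold` ack-eliciting packets, but never delay past `max_ack_delay`."""

    def __init__(self, threshold: int = 2, max_ack_delay: float = 0.025):
        self.threshold = threshold          # ACK ratio of 1:threshold
        self.max_ack_delay = max_ack_delay  # seconds
        self.unacked = 0
        self.first_unacked_time = None

    def on_ack_eliciting_packet(self, now: float) -> bool:
        """Return True if an ACK should be sent immediately."""
        if self.unacked == 0:
            self.first_unacked_time = now
        self.unacked += 1
        if self.unacked >= self.threshold:
            return self._send()
        return False

    def on_timer(self, now: float) -> bool:
        """Delayed-ACK timer: fire if the oldest unacked packet is too old."""
        if self.unacked and now - self.first_unacked_time >= self.max_ack_delay:
            return self._send()
        return False

    def _send(self) -> bool:
        self.unacked = 0
        self.first_unacked_time = None
        return True
```

A real implementation would also acknowledge immediately on out-of-order or ECN-marked packets; this sketch only shows how a count threshold and a max_ack_delay-style timer interact.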
By the time the second block is added, there's a good chance QUIC ACKs are smaller, and it's almost certainly true with 3 SACK blocks (which I think is a typical number). So I'll go back to my previous statement that the real issue is that the network can't thin QUIC ACKs effectively.

Maybe "yes", maybe "no" on a larger size after loss - since QUIC ACK ranges are often larger - but that depends on loss pattern, etc. In our case, we saw more bytes. But "yes" on me saying there is a need for QUIC to make good default decisions about the ACK rate, because QUIC becomes responsible for transport working across heterogeneous paths and we need to get good defaults rather than rely on "ACK-Thinning" and other network modification.

Does anybody have a gut feeling how very low latency congestion control fares with sparse ACKs? I have experienced that DCTCP is more picky with ACK aggregation than e.g. Reno. I suspect that ACK-thinning can cause the same issue as well. A too sparse ACK spacing can also give problems. Another aspect is how e.g. TCP Prague with paced chirps works with sparse ACKs? To me the RTT/4 rule makes sense, but I feel that the best alternative depends a lot on both the congestion control and the source material. Large file transfers are more trivial than sources that are occasionally idle.

I think this isn't the correct question. This is about the default policy. The default policy is about what a receiver does by "default" - the current text uses a value of ACK every other packet, 1:2. TCP specified this when transmission rates were low, and TCP depended on individual ACKs to clock data (no pacing, IW=2), detect loss (no PTO), increase cwnd (per ACK packet), etc. In response, TCP causes ACK congestion and strange interactions with radio access. Performance over WiFi, DOCSIS etc. suffers unless the network tries to address this. I don't see why QUIC, which doesn't rely on any of these, needs to set a DEFAULT of 1:2.
This is literally a few lines of code difference, and we could use 1:10. These CCs you mention may have different needs, and that needs to be handled by telling the remote that your CC has these specific needs - which I think is what Jana's extension is about.

I am kind of the donkey in the middle... ACK thinning in networks should be discouraged. Standardization of ACK thinning mechanisms was proposed in 3GPP RAN2; this request was turned down mainly with reference to the fact that it can harm transport protocol performance and does not work safely with QUIC. Nothing stops UE (terminal and modem) vendors from implementing proprietary ACK-thinning, but I certainly hope that 5G UE vendors don't attempt to thin QUIC ACKs, as this is a slippery slope. High ACK rates are a 5G problem mainly for high band NR (=5G at e.g. 28 GHz). It is mainly a coverage issue, as UEs have a limited output power due to practical and regulatory requirements. At the same time, one needs to address that end users don't only download movies today, and even VoD streaming implies uplink traffic in the form of chunk requests. And then we have an increasing amount of people doing uplink streaming or uploading an infinite amount of pictures of food plates and cats :-) Operators and 5G vendors have to deal with this and solve the problem with deployment of 5G access in various frequency bands to get the best downlink and uplink performance. Reduced ACK rates are of course welcome, but I don't currently see a problem that the initial ACK ratio is 1:2, to increase to 1:? as the congestion window grows, bounded by e.g. an RTT/4 rule. But here I need to admit that at least I have not studied this in detail. Ergo, I am not convinced that it is necessary to hardwire the QUIC spec to 1:10, but I also cannot say that it should not be done.

This is a small enough change.
However, there is still a bar for those, and the current level of discussion makes me believe we should close this with no action?

Just so that it's easy to find, there is data and a continued discussion at URL

Folks who care about a change here need to prioritize progressing this issue. We're close to being able to last-call the specs, and would like this resolved in time.

We’ve completed a round of updated analysis of the proposed change to the QUIC spec, using quicly (changing to a 1:10 policy), and chromium (which currently does 1:10 by default). Our hope is that this change to the spec will actually make standard end-to-end QUIC work better than TCP over asymmetric paths! This is a short slide deck about the performance of the proposed change: URL There have been a number of other changes made by the QUIC ID editors relating to ACK processing, so we wanted to be sure these results were up to date. (These results do not consider additional ACKs after re-ordering - reflecting the latest text removing that.) We also really hope this will motivate a change to the practice of forcing QUIC fallback to TCP (with PEP/Thinning) over highly asymmetric paths. People may also be interested in work for L4S to evaluate the impact of ACK Thinning for TCP-Prague. Please do provide feedback, or any questions on this - as Lars notes, we think the time is running out for considering new changes, even relatively simple ones like this. Gorry P.S. First email: URL

I believe that it is beneficial to use a 1:10 policy. I understand that this means that a 1:2 policy can still be used in the initial phase (flow start), or do the timer mechanisms kick in to avoid that ACKs are transmitted too sparsely in this case?

Yes, our proposal counted ACK-eliciting packets at the receiver (because the receiver does not in general know the congestion state, so you need a heuristic), and we finally chose the same method in current QUIC Chromium - i.e.
for the first 100 packets we do not use ACK 1:10, changing only after 100 packets.

OK, it sounds like a good idea to use 1:10 after 100 packets or so. Just one potentially stupid question: is there any special treatment necessary after an RTO?

[This seems to be a forwarded comment from NAME Hi, Gorry Thank you for sharing your experiments. I'm delighted to see results showing that use of ack10 gives us comparable throughput to ack2. Recently, I have been looking into how ack-frequency affects the computational efficiency of the server [1]. Using the (almost) same setup, I have today run a benchmark comparing three ACK strategies: ack2, ack10, ack 1/8 cwnd. ack2: 334 Mbps, ack10: 414 Mbps, ack 1/8 cwnd: 487 Mbps. As can be seen, ack10 is better than ack2, but lags behind ack 1/8 cwnd. That's because there are still frequent ACKs. They take a considerable amount of CPU cycles. If we want to see QUIC being as lightweight as TLS over TCP, it is critically important to reduce ACK frequency as much as possible. This is because the cost of processing ACK in QUIC is much more expensive than in TCP, due to QUIC being a userspace transport. Based on this, if we are to change what we have in the base drafts, I'd much prefer adopting what we have in the proposed delayed ack extension [2], rather than just changing the default ack policy from ack2 to ack10. By adopting what we have in the delayed ack extension, we can expect better performance, and the endpoints would have the freedom to change the behavior when the default ack policy is deemed insufficient (regardless of the default being ack2 or ack10). [1] URL [2] URL
These results/article on computational efficiency are really interesting. Please see below. This is useful; we see there's a clear benefit in using a larger ACK Ratio also in terms of server load. We can see already the benefit at 1:10, and GSO helps as expected. We think that this is also really true. QUIC ACKs are also larger than TCP ACKs (see URL). I don't think the two approaches conflict. The difference between the two approaches is that the small change to the base specification to use an ACK Ratio 1:10 ensures the default return path is at least as good as TCP for common traffic (even with common TCP ACK Thinning). Without this change, the end user could experience significant differences in performance between TCP and QUIC, and between different QUIC versions, over asymmetric paths. Whereas to me, the transport parameter extension looks like an exciting way to enable many more things, but there are also many ways this could usefully evolve based on CC, offload, pacing etc., and I think this will take much more to work out what is needed, safe and best ... than to reach consensus. We still also need to specify a default ACK Ratio in this case as well. Best wishes, Gorry

I'm greatly in favour of draft-iyengar-quic-delayed-ack-00 (BTW, where would you prefer me to give review comments about that?)
I am much less keen on any default delayed ACK policy, whether 1:2 as now in quic-transport, or the 2-10/100 numbers proposed in draft-fairhurst-quic-ack-scaling-02 My concerns would melt away if the default policy was explicitly /implementation-dependent/ and the draft instead gave /guidance/ to implementers. The draft could still give the same 2-10/100 policy, but as /guidance/ for the current public Internet. The draft should also state the conditions under which this guidance was determined (e.g. Internet paths with BDP between x and y, or whatever). What's the difference? Stating a default policy implies this is a matter of interoperability. It's not. If we change to giving guidance, I'm sure most implementers will just use the guidance. However, implementers will be free to evolve their default as they gain experience. This reduces the possibility that other behaviours will evolve around any default like 2-10/100. For instance, network monitoring using it as a signature. Or congestion controls optimizing around it. In turn, this will reduce the chance of ossification around the first numbers we thought of. Senders can always use the ACK frequency frame to ask a receiver to change from its default. Then receivers will gradually learn what senders prefer them to do, and change their default accordingly. IMO, a continually improving default is important for short flows, so senders can avoid requesting a different ACK frequency right from the start.\nNeed to admit that I have not had too much time to follow the QUIC work lately. I too believe that draft-iyengar-quic-delayed-ack-00 is a good long term solution that allows endpoint to set a desired ACK frequency that is not more than necessary. As experiments show there are good incentives to keep the ACK frequency as low as possible in terms of client/server load and asymmetric links limitations. 
An ACK-FREQUENCY frame will allow endpoints to find the proper balance between the requirements given by the congestion control, the traffic type and the limitations given above. I am not too concerned if it takes an extra development/standards cycle to have that in place in the standard and in running code, as long as the QUIC community shows other SDOs that the challenge with QUIC ACKs is being addressed.

NAME NAME could you please trim the quoted text when you reply by email?

(I trimmed the comments, but forwarding inline replies to email messages doesn't work well.)

Given that this is an area where there is still obviously experimentation and refinement, NAME comment seems on point; baking a specific algorithm as requirements into the specifications doesn't seem like a good idea (although putting constraints on endpoints might make sense). Can we focus on general guidance rather than documenting a specific algorithm? What would that PR look like?

+1 to NAME suggestion. If we want to make changes for v1, they need to happen now, and they must be in a form that can let us get consensus.

To be clear, when I said that having a default policy is not a matter of interoperability, that's only strictly true if QUIC supports something like draft-iyengar-quic-delayed-ack for the sender to ask the receiver to use a certain policy. Bob

I've proposed more context and text around our current guidance in the draft, see .

I believe that the text around asymmetric links looks fine; not really necessary to write up much more. What can be good to know is that for the 5G case with limited UL (uplink = modem->base station transmission), the pain that a flow with an unnecessarily high ACK rate causes is on the flow itself or on other flows sharing the same access.

There are some different points of pain here, the default and the need to adapt to the endpoint: An ACK Ratio (AR 1:1): is the finest granularity of feedback.
A QUIC AR of 1:1 would consume return path capacity proportional to about 6% of the forward traffic (assuming no loss, etc.). This is more than TCP (about 3%). An ACK Ratio (AR 1:2): For a return path bottleneck, QUIC with an AR of 1:2 will still have more impact on sharing flows than TCP would with the same AR 1:2 (because QUIC ACKs are larger, and because networks do not currently thin QUIC ACKs). So QUIC needs to choose the default - is it 1:2, 1:4, 1:8, 1:10? A QUIC AR 1:10 results in an ACK traffic volume of about 1% of the forward traffic. To me, that's a small proportion, which is why I argued that 1:10 is a good recommendation. The default is important when: the endpoints have no knowledge of the asymmetry or the impact they have on a link's layer 2 loading, or the endpoints need to reduce the collateral damage to other IP flows sharing the return path capacity. Negotiation: The AR can be driven by the application or endpoint - e.g., a higher rate connection may well have a benefit from an AR of 1:10, 1:20, 1:100, etc. - but such ARs are seldom chosen to reduce the impact on sharing a return bottleneck; more likely they are chosen to benefit the performance of an application or endpoint. That consideration is different, and likely depends on the application or endpoint - which to me is where the flexibility offered by a transport parameter is so important. draft-iyengar-quic-delayed-ack will still need guidance on the "default". I argued that the default policy impacts the congestion behaviour when there is appreciable asymmetry, because of the load this causes at layer 2 or the collateral damage (congestion impact on sharing flows). Sure, the choice of 1% is just a number I said was reasonable, but I've seldom encountered L2 technologies where less than 1% of forward traffic would be significantly better.

NAME : I understand your position. While I agree that ack thinning happens with TCP, it is not what I expect to find on a common network path.
And as far as I know, the endpoints still implement the RFC 5681 recommendation of acking every other packet. What you are arguing is that with ack thinners in the network, TCP's behavior on those network paths is different. However, I do not believe we understand the performance effects of these middleboxes, that is, how they might reduce connection throughput when the asymmetry is not a problem. Importantly, I don't think we should be specifying as default what some middleboxes might be doing without fully understanding the consequences of doing this. Please bear in mind that Chrome's ack policy of 1:10 works with BBR, not with Cubic or Reno. I do not believe there is a magic number at the moment, and as NAME noted, even 1:10 is not a small enough ratio under some conditions. Given this, and given that we know 1:2 is used by TCP endpoints, it makes the most sense to specify that as the suggested guidance. The new text proposed in describes the tradeoff involved, and the rationale for using 1:2. I am strongly against doing anything substantially different than what we have in the draft now without overwhelming information supporting a different position.

Agree with NAME 1:2 seems a reasonable default, and there seems to be considerable interest in draft-iyengar-quic-delayed-ack, which has a better proposal for stacks that care.

I think the new ACK text is improving the specification, but I'd like to be sure we have thought about this decision. Let me try to carefully respond on-list here, to see where we agree/disagree: I disagree; I think we shouldn't be perpetuating a myth that a sender receives a TCP ACK for every other packet. Even in the 90's, TCP stack implementations often generated stretch-ACKs for 3 segments. Since then, limited byte counting was introduced and significantly improved the sender's response to stretch ACKs, and that has been good for everyone.
Senders still rely on byte-counting in TCP to handle Stretch-ACKs, and this need is not decreasing: new network cards reduce per-packet receive processing using Large Receive Offload (LRO) or Generic Receive Offload (GRO). I am saying many TCP senders actually do today see stretch-ACKs. There are papers that have shown significant presence of stretch ACKs, e.g., (H. Ding and M. Rabinovich. TCP stretch acknowledgements and timestamps: findings and implications for passive RTT measurement. ACM SIGCOMM CCR, 45(3):20–27, 2015.). A TCP Stretch-ACK of 3 or 4 was common in their datasets (about 10% of cases). In some networks, the proportion of Stretch-ACKs will be much, much higher, since ACK Thinning is now widely deployed in WiFi drivers as well as cable, satellite and other access technologies - often reducing ACK size/rate by a factor of two. I am intrigued and would like to know more, in case I missed something? Typical algorithms I am aware of track a queue at the "bottleneck" and then make a decision based upon the contents of the queue. If there is no asymmetry, there isn't normally a queue and the algorithm won't hunt to discard an "ACK". But, maybe, there is some truth in being wary: Encrypted flows can also build return path queues (as does QUIC traffic). Without an ability to interpret the transport data (which nobody, including me, wants for QUIC), a bottleneck router is forced to either use another rule to drop packets from the queue (such as the smallest packets) or to allow a queue to grow, limiting forward direction throughput. I would say neither of these outcomes is great, because the router is trying to fix the transport's asymmetry. However, if QUIC causes this problem, I suggest some method will proliferate as QUIC traffic increases. I didn't find anything in the QUIC spec that was a major concern. It looked to me like QUIC had learned from TCP how to handle stretch-ACKs.
After that, we changed quicly to use 1:10 with Reno, and things worked well. Didn't NAME also use Reno? Sure, the design of TCP recognised that AR 1:1 generated too much traffic; AR 1:2 for TCP resulted in a Forward:Return traffic ratio of ~1.5%, and QUIC increases this to ~3%. However, as network link speeds increased, this proved too low for many network links, and so TCP ACK thinning came into being. This often reduces the TCP AR to around 1:4, and will do this without bias, whatever the outcome is.

NAME Just to clarify, in our experiment, we reduced the ACK frequency from the default (2) to after leaving slow start, then updated it every 4 PTO. I think that would work fine on a high-bandwidth, stable network. But I am not sure if it would work as well as ack2 on all network conditions. Anyways, I'd explain our experiment as an optimization of ack 2, as it was a reduction of ack rate after leaving slow start. I think that was a sensible choice, because IIUC having frequent acks during slow start is essential. Assuming that we'd prefer using ack 2 during slow start, the question is if we need to hard-code in the base draft how to optimize after leaving slow start. While I agree that continuing to use ack 2 after slow start is concerning for certain networks, I am reluctant to spend time on looking for such a fixed set of values across diverse networks (including much slower links or lossy ones). Rather, my preference goes to shipping the base draft as-is, then moving on to finalize the ack-frequency frame. I think we can assume most if not all of the popular clients will support the ack-frequency frame. Then, it becomes less of a question what the default in the base draft is.

Thanks kazuho, 1/8 or 1/4 cwnd seems reasonable, as does using Ack 2 for the first part of slow start ... that's close to what we proposed. And yes, there will anyway be cases where any default does not work.
If QUIC decides to retain Ack 2, can we at least capture that Ack 2 with QUIC doesn't have the same return path performance as Ack 2 with TCP - so that people know that less frequent ACKs are something to think about from the start? I'm doing a talk in PANRG this week, and have decided to talk through this topic (I was already going to talk about things related to return paths). That'll give people a view of some data - and a little thought about the implications this has for QUIC, and whether there is an incentive for network devices to thin QUIC to restore performance.

NAME : I'd be in support of saying that ack thinning happens for TCP and citing those studies, and that in the absence of that, there might be throughput consequences for QUIC. I think we've said exactly that in other words in , but if we want to say more there, that's fine with me. That would make this an editorial issue, but I'm entirely happy to do more words there. I am against changing the recommendation here for another arbitrary recommendation. I think the right approach is to allow endpoints to specify this during a connection, which is where the extension is helpful. As NAME notes, our hope is that the extension gets widely deployed. If you want to make the argument for the extension to become part of the base draft, that's a different conversation, but that's not the one we're having here.

Attempting to summarise the current status of this issue as a chair: there appears to have been a fair amount of discussion about making a design change, with representatives both in favour and against. It appears there is no omnipotent default policy; choosing an optimal one depends on many factors. The editors have landed an editorial change that attempts to better describe the rationale behind QUIC's current design and highlights some of the factors. There may be further editorial improvements, but they do not need to block the design issue.
The proposal therefore is to close this issue with no design change. We had prepared some slides for the Vancouver IETF to discuss QUIC's ACKs, and how to make QUIC work better than TCP in common asymmetric paths. This is a short slide deck that summarises what we have learned about ACKs in QUIC: URL It compares the ACK traffic of TCP with that of QUIC from the point of view of a path where QUIC's return traffic is somehow constrained or could impact other user traffic. We think that ACKs are a concern for various shared capacity return links (e.g., radio-based methods in WiFi or mobile cellular, where ACKs consume spectrum/radio resource or battery; shared capacity satellite broadband return links or the uplink of a cable system). For TCP, such systems commonly involve a PEP or an ACK Filtering mechanism (which clearly isn't wanted or even possible for QUIC). The baseline QUIC performance (as per current spec) suffers over these paths. So our first question is: Do other people think it is important that the default specified for QUIC works well with asymmetric paths? Gorry
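The count-based heuristic mentioned earlier in the thread (ACK every 2nd ack-eliciting packet for roughly the first 100 packets, then every 10th) is simple to express. This sketch just counts how many ACKs such a receiver would send; the function and parameter names are illustrative, not from any draft:

```python
def acks_sent(total_packets: int, warmup: int = 100) -> int:
    """Count ACKs sent for `total_packets` ack-eliciting packets under a
    policy of 1:2 during the first `warmup` packets, then 1:10 after."""
    sent, pending = 0, 0
    for i in range(total_packets):
        pending += 1
        threshold = 2 if i < warmup else 10
        if pending >= threshold:
            sent += 1   # emit an ACK covering the pending packets
            pending = 0
    return sent

print(acks_sent(1100))  # 150 ACKs, versus 550 under a flat 1:2 policy
```

Keeping 1:2 during the warmup preserves frequent feedback while the congestion window is small, which addresses the concern raised above about slow start.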
If the receiver has already sent an RCID frame for the sequence number and that RCID has been acked, it may have thrown away state for that sequence number and shouldn't have to resend a frame the peer has received. In particular, a buggy or malicious peer might send a very old sequence number, or the NCID receiver might have preemptively retired a sequence number it hadn't seen when it got the retire-prior-to field. Simply appending 'if it has not already done so' to the text above would do the trick, I think. I'm happy to file a PR unless most people think this is a terrible idea.

As a related point, it would be useful to say that an NCID receiver SHOULD NOT immediately retire old CIDs solely as the result of the NCID frame, except when instructed by Retire-Prior-to. This can lead to infinite loops if the peer tries to keep a specific number of CIDs available.

I think this would be a nice clarification. I think we already prohibit such behavior through the use of the active_connection_id_limit TP. Section 5.1.1 states: endpoints store received connection IDs for future use and advertise the number of connection IDs they are willing to store with the active_connection_id_limit transport parameter. To paraphrase, by using the TP, an endpoint willing to retain N active connection IDs prohibits the peer from sending more than N.

I was probably unclear. For a while, our implementation was doing something dumb: each time it got a new CID, it started using it and retired all previous ones regardless of Retire Prior To. This created infinite loops where the client was trying to make sure that we always had two CIDs. We should probably warn against that. Please see if the text in the PR is clear enough.

Taking a look at the PR, I'm finding the proposed text to be a bit unclear, too. Based on the discussion on the list, this seems to be a misreading of the text itself. From the list: I don't believe that this is actually what the text says.
The text says: This to me reads as \"If you said everything below CID 10 is done, and then you send me CID 8, I must immediately send a RCID frame for 8 since I can't use it\". I don't believe that there is any CID-specific state required at all for this. You do need to store the current highest value that you've ever seen for RPT, which is a single integer, I'll call it . Beyond that, when you get a NCID frame, you check the sequence number. If it's less than , you should immediately enqueue (which I think might be a reasonable wording change to make, since immediately seems to have caused concern when the intent was that you put it on the tracks leaving the station, not that you force the train out the door before it's allowed) a RCID frame for that sequence number. Note that a RCID frame contains a single value, the sequence number of the connection ID being retired, which incidentally is contained by the NCID frame you're responding to. We can argue as to whether or not you should bother to send RCID again when the same frame has been acked, but (a) storing that is actually more state and (b) we're working pretty hard to avoid tracking ACKs of packets. I might be missing something here -- can you help me understand the concern about storing state in order to meet the requirements of this text?\nYes, I concede that the original email was garbled. I read the original text the same way you do. The issue is that if I decide to retire immediately without actually having the sequence number, that should be sufficient. Making me send it again after the NCID is gratuitous, and in at least one implementation creates state solely to track this corner case.\nAh cool, that's excellent then! :) What's the state that gets created to track that case? If you retired them all upon getting RPT without checking to see if you had them yet, don't you still have to keep track of the RPT value or which ones you've already retired? 
I'm probably missing some other piece of state in this case...

I think what an endpoint is required to track are: the active connection IDs, and the sequence numbers of connection IDs that are being retired (but are yet to be acknowledged). When receiving a NCID frame with RPT, an endpoint checks if there are any active CIDs below the RPT, and retires them. I do not think an endpoint has to track RPT, though admittedly, I haven't implemented them (yet).

NAME what you suggest would certainly work. However, a malicious peer could send an infinite number of NCIDs as long as it kept advancing rpt and didn't ack RCIDs, spiraling the state upwards. To avoid this, we keep a window of unacked but retired sequence numbers and construct RCID from that. That's why having an NCID come in well below the window results in additional state for us.

NAME Exactly. We have an issue for that problem: . Agreeing on what to do to resolve the unbounded state problem might help resolve this issue too.

Taking this as editorial based on the proposal (and promised tweaks).

Looks fine to me, though I do think it's better with some kind of language to send it sooner rather than later.
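One way to keep the receiver-side bookkeeping bounded along the lines discussed here is to store only the largest Retire Prior To value seen plus the set of active CIDs, and to queue a RETIRE_CONNECTION_ID immediately for any NEW_CONNECTION_ID that arrives already below that point. This is a sketch under those assumptions, not text from the draft:

```python
class CidTracker:
    """Sketch of receiver-side NEW_CONNECTION_ID handling: keep only the
    largest Retire Prior To seen, retire active CIDs below it, and queue
    retirement for any late NCID that is already covered by it."""

    def __init__(self):
        self.max_rpt = 0        # largest Retire Prior To seen so far
        self.active = {}        # sequence number -> connection ID
        self.retire_queue = []  # seqs awaiting a RETIRE_CONNECTION_ID frame

    def on_new_connection_id(self, seq: int, cid: bytes, retire_prior_to: int):
        if retire_prior_to > self.max_rpt:
            self.max_rpt = retire_prior_to
            # Retire every active CID that falls below the new threshold.
            for s in [s for s in self.active if s < self.max_rpt]:
                del self.active[s]
                self.retire_queue.append(s)
        if seq < self.max_rpt:
            # Late/out-of-order CID already covered by Retire Prior To:
            # never activate it, just queue its retirement.
            self.retire_queue.append(seq)
        else:
            self.active[seq] = cid
        # (A real stack must still bound retire_queue against a peer that
        # never acks RCIDs, which is the unbounded-state problem above.)
```

This keeps per-CID state only for active CIDs and pending retirements; the single `max_rpt` integer replaces any per-sequence-number history.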
Similarly, if the primary connection is to an anycast address, if the client fails to move to the preferred address, subsequent migration will likely break. Should we blanket recommend that clients not actively migrate if they ignore / are unsuccessful in using the SPA? Or should there be a way for the server to indicate that?\nGood question. This brings up a related question for me about disableactivemigration. Given a server can decide not to give out any additional connection IDs, doesn't that give it a mechanism for disabling active migration? Ironically, given recent decisions on connection CIDs and SPA, if an SPA is supplied, then the client always has a second connection ID it can use on any path, not just the SPA. So my suggestion of not giving out extra CIDs fails for the original path, but not the SPA.\nWhat we’ve said for this so far is that SPA is limited to offering a minimum of a single additional CID that needs to work in both places, the original server and the preferred server. As Ian says, the original server should not issue the client any more CIDs (if you have some that are deployment specific) until after they’ve successfully migrated to the new endpoint, at which point the preferred server can handle that issuance. And as he notes, you still have to deal with the fact that there are two CIDs that the client could use, from any incoming address. It seems as though the full way to solve this is to define potentially disjoint sets of TPs that different servers could set to match their particular requirements (around more than just migration). In that world, if you had an initial server that couldn’t handle migration and a preferred server that did handle migration, you could set different TPs to indicate that. It goes well beyond that though; any other TP might have different values for the original server and then have those be different with the preferred server. 
But that does feel an awful lot like something that goes with full server migration. In the limited form that we have now, if you want to be able to push someone off to a different server, there are some limitations that you need to deal with, like being able to set up enough of a shared configuration for the two that the same TPs work with both. In practice, I think this means we end up with the following situation: The client either moves to the requested address or it doesn’t. If it does, then you’re in good shape, in theory you gave it TPs that matched what the server wanted and everybody is happy. If it doesn’t, then you’re unhappy, since in theory you had a reason you asked it to move in the first place. You might choose to close it down more aggressively than you would on the preferred server to save resources, or give it smaller limits for flow control, etc. When you’re in the “client didn’t move” state: If the client then tries to use the original 2nd CID that you gave, from the same IP address/port, then a non-CID aware load balancer should be fine. If the client then tries to use the original 2nd CID that you gave, from a different IP address/port, then a non-CID aware load balancer will not be happy. Of course, the problematic case is even narrower, since if the client is probing the new path they’ll see it as though there isn’t any connectivity, just like they will if there’s anything else on the new path that prevents their connection from working — this is always a risk when migrating, so they should be unsurprised and stick with their current path. If they’re forced to move for whatever reason, then it will be just as though they didn’t have connectivity over the new path (which they don’t) and whether or not you disable migration you’ll see the same outcome — the connection is closed. 
Interestingly, I’m not actually seeing a difference from the client’s perspective in terms of outcome whether you let them try with that CID and fail or if you told them to not try.\nIn very much shorter terms: isn’t this a situation that all clients already need to be able to handle for other reasons?\nNAME I do not like the first option, because IMO banning migration on the original address is going to be a premature optimization based on what we have now, and it has a negative effect on what we would have in the future. My assumption is that the majority of the server deployments would be CID-aware, especially in the long term, especially in terms of traffic volume. Note that the volume of traffic is more important than the number of server deployments, as migration is about (improving) end-user experience. IMO there are two ways forward: (a) State that disableactivemigration TP only applies to the original address. By definition, the server-preferred address is an address that is capable of receiving packets from any address. There is no need to prohibit clients from migrating when connected to SPA. (b) Do nothing. Clients are RECOMMENDED to implement SPA, and this issue is about a small fraction of deployments that rely on non-CID-aware load balancers. We can ignore the problem.\nIIRC, we had that conversation when deciding whether to add in the first place. The short version is that taking this approach prevents the client from rotating CIDs on the primary path; if we want the client to be able to rotate CIDs but not migrate, they need to be distinct signals. NAME in a way you're correct that clients will have to deal with the fallout regardless of what the server says. However, the actual use of is likely to trigger a little more than just \"don't do that\" -- a client might PING more actively to prevent NAT rebindings, for example. 
I think it's this hinting aspect that's actually more valuable, since as you note, clients need to be prepared for attempted migrations to fail even with servers that tacitly support them. I don't think this is necessarily true, though I agree that we're talking about smaller and smaller fractions of server space.\nNAME While I agree that it might not necessarily be true at the moment, I might argue that it is going to be true. The basis of the resolution for and (both on consensus call) is that servers using preferred address would not be using the server's address for distinguishing between connections. We also know that a server cannot use client's address (unless there is out-of-band knowledge), because clients are not required to use the same 2-tuple when migrating to a different address (and also because there are NATs). The outcome is that the server has to rely on CID when using SPA. And therefore I might insist that servers using SPA would always be capable of handling further migration.\nNAME So your suggestion is that disableactivemigration applies only to the original address and the SPA must always support active migration, because it has to use CID routing? I don't think that'd be a problem for our use cases, but I'm not sure about others.\nNAME Exactly. I think that that would be the correct thing to do, if we are to make changes. I agree that existing clients might be applying after migrating to SPA, and that they do not want change. But they can remain as they are, because the change in spec does not lead to interoperability issues. To paraphrase, I think making the aforementioned change is strictly better than doing nothing, because it would be an improvement to some, while not hurting anybody.\nA truth table might help here. [truth table garbled in transcription; its columns were \"no preferred address\" and \"preferred address\"] Right now, our definition of disableactivemigration is a simple \"don't migrate\". 
But the argument here says that the server that provides a preferred address is probably capable of handling migrations on the basis that it has to support one. We might insist on the source address not change, but the fact is that some NATs (those with address-dependent mappings) will change the source address. Changing to a definition where disableactivemigration is instead \"don't move to a new address unless you use a preferred address\" is a little more complicated, but doable. Is it worth the extra complexity this late in the process? Is that complexity justified when it would be trivially possible to define another extension that said \"I support migration on my preferred address\"?\nI'd like to minimize changes, but I believe the status quo is that disableactivemigration applies to both the original address and SPA. As such, all suggestions above seem like a change. There is a mechanism to disable migration on the SPA, which is don't give out any more CIDs once they migrate. I don't expect any hosts to support migration on the original path and not the SPA, so I don't think that's a use case worth supporting(or encouraging).\nMy proposal is that we defer the ability to signal \"I support migration, but only on my preferred address\" to an extension.\nNAME And leave the current behavior of disableactivemigration applies to both the original address and SPA? I'm a bit concerned that makes SPA somewhat useless. Is this issue a candidate for parking and we can come back to and see if we can live with the status quo?\nWhich, again, not only disables migration, it disables the ability to rotate CIDs. That's not something we want servers to be doing. NAME argument is convincing that it's improbable for something which supports SPA to not be able to uniquely identify connections (whether by CID or because the SPA endpoint is unique). 
Therefore, I'd support scoping disableactivemigration to the handshake address, and defer the ability to disable it on the SPA to an extension. Loosening this definition doesn't make any current clients break, so I suspect it's bearable despite the schedule.\nIf we're going to make a change, I support NAME and NAME direction of scoping disableactivemigration to the handshake address. And I agree that I don't expect this breaks many existing clients.\nI personally find the proposed change confusing. I'm not sure I find the use-case of \"my original address doesn't support migration but my preferred address does\" compelling. Since this can be easily solved by an extension with no downside over it being in the core docs, I don't see a reason to change the spec at this point in the process.\nAs this has sat here a while, I'll reiterate my proposal again: we defer the ability to signal \"I support migration, but only on my preferred address\" to an extension. Which means that disableactivemigration applies to both the original address and SPA, which answers the question (sorry Ian, I missed that). I realize that you could change the meaning of disableactivemigration to apply to the address the server uses in the handshake, but there are interesting corner cases and I don't think we need to concern ourselves with those. For instance, suppose you connect over V4 and get disableactivemigration and serverpreferredaddress. If that SPA includes a V6 address you can't use, along with the same V4 address, what then? Does the server really support migration? I know that the current situation is unappealing, and I think that disableactivemigration is not good for the protocol. But it exists for very good reasons. Trying to limit the scope is admirable, but as we haven't seen a concrete proposal, I think we need to move on.\nNAME If this example is concerning, I'd argue that is an issue that already exists, orthogonal to what we are discussing here. 
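The two readings of the transport parameter being debated here can be captured in a toy predicate; this is purely illustrative, with `scoped_to_handshake_address=True` modeling the proposal to scope the parameter to the handshake address and `False` modeling the status quo:

```python
def may_actively_migrate(disable_active_migration, on_preferred_address,
                         scoped_to_handshake_address):
    """Toy predicate contrasting the two readings of the TP.

    disable_active_migration: the server sent the TP.
    on_preferred_address: the client has already moved to the SPA.
    scoped_to_handshake_address: True models the proposed scoping;
    False models the status quo (the TP applies everywhere).
    """
    if not disable_active_migration:
        return True
    if scoped_to_handshake_address and on_preferred_address:
        # the argument above: an SPA has to be CID-routed anyway,
        # so a server using one can handle further migration
        return True
    return False
```

Under the status quo the TP forbids migration everywhere; under the proposed scoping, a client that has moved to the SPA may still migrate.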
I am still not convinced that limiting the scope of disableactivemigration to just the original address is going to cause any new issue. That said, if nobody needs this change, I'm more than happy to move on with the status quo.\nI've thought about this, and I don't believe we need this change, but I also think the change NAME suggested makes the current behavior better overall. A server that supports migration on the SPA (i.e. unicast) and not on the handshake address (i.e. anycast) will end up not disabling migration and hoping that most clients migrate to the SPA or don't support migration of any sort. That seems like a plausible hope, though I'm a bit leery of hope-based protocol design.\nThinking about this more, a statement like: \"Migration is never guaranteed to succeed. For servers that advertise a preferred address, it's expected migration is more likely to be successful when using the preferred address than the handshake address.\"\ntl;dr: I'm arguing for \"do nothing\". This conversation has made it clear to me that an address, and not a server, is the thing that supports client migration. Which makes sense -- migration support at the server is entirely a question of routability, and it makes sense that it would be tied to the address in question and not the endpoint. So if we were to do anything, I think I would propose: scoping disableactivemigration to the handshake address, and adding two disableactivemigration flags to the preferredaddress TP, for the two addresses in the TP. That said, this is a clean enough extension. A new extension can define a new TP that provides this granular interpretation of the three server addresses. A server that wishes to support this can send both the SPA and this new SPA TP, with the SPA TP (with the current scoping of disableactivemigration applying to all server addresses) providing cover for legacy clients. Also I expect that clients are likely to switch to using the SPA if provided. 
A server that has different routability properties for the two addresses can wait for some deployment and make decisions on whether to risk allowing migration based on data of what it sees as client behavior with SPA. I would argue for doing nothing now. Experience will teach us the right way forward. If necessary, this extension will be easy enough to do, and importantly, the current interpretation of disableactive_migration gives us the right kind of cover in the meantime.\nI would rather not make the change in . It assumes that migration is more likely to succeed when attempted to the preferred address as opposed to the handshake address. While this assumption might hold in some server deployments, it is not a fact. I'd rather we didn't bake this assumption into the core spec. I'd prefer to see this feature as an extension.\nNAME Would you mind elaborating why a server, accepting connections on a preferred address, might not be able to disambiguate connections using CIDs? As stated previously, when a client migrates to the preferred address, the client's address port tuple can change due to the existence of NATs. A server using preferred address need to take that into consideration. That means that a server has to disambiguate connections based on the CID, which in turn means that the server would be possible to support migration.\nHmmm thanks for that note NAME Looking at things under that light, I agree that the assumption I described above is already part of the protocol. Please disregard my comment above URL .\nWe don't prohibit the use of a zero-length connection ID in the preferredaddress transport parameter. So a server can use a connection ID during connection establishment, then remove privacy protections by moving to a unique address (see ). Of course, this is largely moot, as the privacy protection offered through the use of connection IDs toward a unique address is nil. 
In either case, we should clarify whether preferredaddress is exactly like NEWCONNECTIONID, or whether it has its own rules. Right now, it looks like it has its own rules and that is likely bad.\nZero-length connection IDs don't have a stateless reset token associated with them, but the preferred_address TP requires you to set a token. That's an awkward corner case.\nThat is just one of the strange things, yeah. We should clarify that.\nSo I see a number of ways forward here: 1) Do nothing. Leave this ambiguous and potentially special. 2) Make the fields in preferredaddress exactly equivalent to a NEWCONNECTIONID with a sequence number of 1, but allow the connection ID to be zero-length; a zero-length connection ID provides a new stateless reset token that is bound to the new address. (This is a weird option that would go against previous decisions to eliminate special handling for this connection ID, but I include it for the sake of completeness.) 3) As 2, except that the new stateless reset token has connection-wide scope such that either of the provided tokens can be used; this is more consistent with established principles. 4) Prohibit the use of zero-length connection IDs in preferredaddress. That is, you can't use preferredaddress without a connection ID; preferredaddress effectively is a new server address plus a NEWCONNECTIONID frame. 5) Something else. From my perspective, I rank these as 4 > 3 >> 1 > 2. But that is largely based on my bias against zero-length connection IDs (as much as it seems like that might be good for privacy, I think that privacy is better served by the use of connection IDs). If people intend to use preferred_address to migrate to a connection-specific address, then I'm OK with 3; we already have to cover the privacy hole that arises from that.\nMy preference goes to option 4 too. I do not think servers would use both 0-byte CID and preferredaddress TP at the same time. 
The only case a server using 0-byte CID can also use preferredaddress is: when the server is never accepting more than one connection at a time, or when the server knows that the client's address tuple does not change when the client migrates to a new address. IMO that's a niche. And when these cases can be met, I do not see why the client cannot know the preferred address of the server beforehand.\nI forgot to add: There is (maybe) a case for using a connection ID during setup and a zero-length connection ID after migrating to a preferred address. That isn't entirely crazy, but I personally would be comfortable losing that option. But I don't operate a server, so I want to give those who do a chance to let me know that this is preemptive.\nNAME My personal take would be that people would not do that. When a client migrates to the preferred address, there's a fair chance that the client's address tuple will change. That means that the server should be capable of receiving short header packets with 0-byte CID from any address, and determine that the packet belongs to the connection that is migrating to the preferred address. That's possible theoretically, but in practice ... I agree that a server might want to switch to zero-byte CID after migration to SPA completes. But to do that, we need a NEWCONNECTIONID that can carry a zero-byte CID, which is forbidden at the moment. I do not think we'd want to change the spec now to allow that.\nThat's an excellent point, and a good reason not to allow changing to zero-length connection IDs in combination with preferred_address. The nasty part of that is that in most cases that design would work because clients won't use the \"new\" connection ID until after the handshake is complete. 
Of course, it would fail spectacularly if a client did decide to switch earlier - and the client might attract the blame, where it is clearly the fault of the server.\nI'll note that prohibiting a 0 length CID doesn't actually ensure the server's IP isn't unique, though it does remove an incentive to make it unique. I'd like to leave the option to switch to a zero-length CID later in the connection, and I think I'd be fine with 4 if we allowed that. An easy example is a large upload. I mostly don't care about the CID, but if I'm uploading a 10GB file, maybe saving some bytes matters. Also, if you're constantly filling the pipe, NAT rebinds are extremely unlikely. Another example is an environment like a datacenter where there are no NATs and routes are very stable, unless a peer intentionally changes it, so the 5-tuple is sufficient. In that case, the ideal operating condition is to typically use a 0 byte CID and switch to a non-0 length any time a peer changes IP or port. Changing port has some clear benefits in some cases, so it's in no way a theoretical use case. NAME may have thoughts on this range of use case as well.\nIn our implementation, we plan to do all routing based purely on the CID. IMO, the couple of byte overhead in a 1500 MTU (at least; in datacenter) packet isn't going to make much difference, so it's just simpler to have a common CID routing scheme. That being said, I personally am not against allowing for others to do something different. I've never understood why we make any restrictions on changing between 0-length and non-0-length CIDs at all. IMO, the complexity is that it can change length at all, not that it's zero or not. NAME we already prohibit changing from non-zero-length to zero-length using NEWCONNECTIONID. I don't want to relitigate that. 
I want to concentrate on this specific issue.\nIf we're not going to re-litigate that, then I'd suggest we continue allowing a 0-length CID for the preferred address case.\nNAME While I would not argue strongly against continue allowing that, I would appreciate it if you could clarify why allowing a zero-byte CID would be useful. As pointed out in URL, when you provide a zero-byte CID in SPA, you need to be prepared for receiving a short header packet using zero-length CID from any IP address, and be capable of associating that to an existing connection. This is because the client (or the network) might change the client's address tuple when it sends packets to the preferred address. I would anticipate that you wouldn't want to do that in your file upload case (or in any deployment that might accept multiple connections concurrently).\nNAME If I know 5-tuples are stable, then using a zero-length CID is always fine, just like it's typically fine in the server to client direction. But in that case, hopefully I can avoid needing to use SPA, making the decision on this issue moot. After thinking about this more, we're probably better off with option 4 for the sake of consistency and if we want to allow 0-length CIDs in more cases, I can write an extension for how that would work, since I don't think it's only a matter of relaxing some constraints.\nThanks all. I think that we reached the same conclusion in the end, via different paths. The proposed fix here is firmly design chairs. A bugfix only, but still worth careful review.\nApologies for getting to this late, but I'm not sure I agree that we should disallow this. Here's a use-case that I think makes sense: the client wants to upload a very large file to the server over HTTP/3 and it has a small MTU, making it particularly sensitive to per-packet overheads. Let's also assume that we are in the future and the server supports IPv6 and has its own /64 (shocking!). 
In this scenario, a good way to reduce per-packet overhead is to operate the server on a given address, and then switch each client to their own unique server IP using preferred address, and then use zero-length CIDs. I've always found this use-case a good use of preferred address, and I would rather we didn't ban it without a good reason (and making the design cleaner doesn't necessarily count as a good reason at this stage in the process).\nThat is an example we discussed. As Kazuho noted, that doesn't work as we have already agreed that any connection ID (in this case, a zero-length one) can be used prior to handshake completion. That's why we ended up on 4. As Ian noted, this isn't ideal, but this is an area that is ripe for extensions that might allow this sort of thing.\nI don't understand what you meant by the following: Can you clarify? I was under the impression that the client switches to the preferred address after the handshake is done.\nYes, you can't migrate (act on the address information) before the handshake is confirmed. However, the connection ID is not special; it can be treated just like any other connection ID you learn, so you can act on the connection ID information before you confirm the handshake. So that means that a server that opts to include a zero-length connection ID in the preferred_address might receive a packet with the zero-length connection ID at the old address. That might not work out.\nWhat's the point of including a connection ID (and stateless reset token) in the transport parameter if that CID isn't tied to the preferred address in any way? At that point might as well remove that CID entirely and rely on NEWCONNECTIONID frames. My understanding was that the reason for there being a CID in the TP was specifically to make that CID special. 
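Rough, back-of-envelope arithmetic for the large-upload example discussed above. All concrete numbers here are assumptions chosen for illustration (IPv6, a 1280-byte MTU, an 8-byte CID when one is used, a 2-byte packet number), not figures from the thread:

```python
def packets_for_upload(total_bytes, mtu, cid_len):
    """Estimate packet count and cumulative CID overhead for an upload.

    Assumed per-packet framing: IPv6 (40) + UDP (8) + short header
    (1 flags byte + cid_len bytes of destination CID + ~2 bytes of
    packet number) + a 16-byte AEAD tag. Illustrative only.
    """
    overhead = 40 + 8 + 1 + cid_len + 2 + 16
    payload = mtu - overhead
    packets = -(-total_bytes // payload)  # ceiling division
    return packets, packets * cid_len

ten_gb = 10 * 10**9
with_cid = packets_for_upload(ten_gb, 1280, 8)  # ordinary 8-byte CID
no_cid = packets_for_upload(ten_gb, 1280, 0)    # zero-length CID
# with an 8-byte CID, every one of several million packets spends
# 8 extra header bytes, on the order of tens of megabytes in total
```

The saving is real but small relative to the transfer, which is consistent with both positions in the thread: it matters most at small MTUs, and it is a per-packet rather than per-connection cost.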
Would it make sense to just say that this CID MUST NOT be used when sending to the handshake server IP?\nthe reason for having a connection ID in the transport parameter is only so that the client has a connection ID to use for the new path\nNAME I agree with your intuition, but I think we discussed this in Zurich and ended up concluding the preferredaddress CIDs weren't special. This was in issue and also came up in (which I realize hasn't gone out for a consensus call yet, that PR seems stalled?). In the upload example, the server needs to know that the client is going to send a lot of data, which it might know via SNI, or it might not. I tend to think the more annoying restriction is that you can't change from a non-0 length CID to a 0-length one and vice versa. By supplying a 0-length CID later in the connection along with other CIDs, the client could choose whether it wanted to use a 0-length CID at any time. Thinking out loud, an easier solution may be a TP like 'acceptemptyconnectionid' that indicates that, at any time, the peer can switch to a 0-length CID if it wants. It could either think the 5-tuple is stable or it's happy re-establishing the path with a non 0-length CID after a NAT rebind. In this world, the Connection ID is more of a path re-establishment token.\nThanks for the details NAME and NAME I get a sense I might be trying to re-litigate past consensus without providing new information, so I'll back down. I do think that the ghost of hypothetical linkability has caused us to make some very odd design choices, but this isn't new to this issue/PR. 
In the end, this is the kind of feature that can be enabled by extensions so I can live with us further restricting the base spec for consistency."} {"_id":"q-en-quicwg-base-drafts-e3aa91bedf32a0efbf53907701cf3e2879f61c214823cfc6677f196d0bffbf55","text":"Now that we require a minimum packet size, this is no longer true.\nIs this no longer true for the algorithms defined in the spec, or no longer true for any packet protection algorithms for any AEAD? It might be worth moving the note to the end of 5.4.1 instead."} {"_id":"q-en-quicwg-base-drafts-b5950ce9feda7601e5a5323e7baccebcb1afc052a22394341cce11f1e0f694fd","text":"The endpoint sends frames, which carry -- not send -- identifier values.\nThe way it's intended to parse, \"An endpoint ... MUST NOT increase the value they send,\" is probably incorrect in number or gender. But yes, changing the verb to reflect that the subject is the frames instead of the endpoint is also an option.\nOK. I have no preference how this sentence is fixed -- as long as it's fixed.\nThis looks good. While we are at this paragraph, why say \"client?\" GOAWAY can be sent by either side now.\nIt can, but a server isn't going to retry push streams on a new connection; it's going to decide de novo whether to push there. Arguably, nothing breaks if the client increases the Push ID on the server after the fact.\nBTW, thanks for catching this. I appreciate how many of these ambiguities you're winkling out.\nNAME NAME OK, lgtm, then.\nMaybe, given the ambiguity, the right answer is to get rid of \"they\" in the first place?"} {"_id":"q-en-quicwg-base-drafts-e4904e5447e6f4061d24283fef3e8c2f396a361f0023a218b48fa2c6c1ff7a06","text":"This PR adds explicit context around the choice of ACK frequency in the draft. It also takes in Bob Briscoe's suggestion in , of calling it a guidance in light of the best information we have. 
Also does some editorial cleanup of text blocks that were not in the right sections.\nThanks NAME NAME -- comments incorporated.\nAs currently specified, the default QUIC ACK policy models TCP's ACK policy, but consumes significantly more return path capacity. This can impact resource usage of the return channel, can constrain forward throughput, and can impact the performance of other sessions sharing an uplink (draft-fairhurst-quic-ack-scaling). This was not discussed in Zurich - where discussion moved instead to the separate topic of draft-iyengar-quic-delayed-ack. In many current network segments (e.g., cable networks, mobile cellular, WiFi and other radio tech), the effects of asymmetry are presently mitigated by network devices (RFC 3449) that rely upon observing TCP headers. This results in ossification problems, but does have practical benefit. It reduces the TCP ACK rate over the constrained link, it reduces consumption of \"radio\" transmission opportunities, and reduces traffic volume from ACKs. These mitigations are not possible, nor desirable for QUIC. However, the default performance of QUIC is therefore much worse than for TCP. In QUIC, the default ACK Ratio has to be set by the endpoints implementing the protocol because the sender and receiver have no information about the \"cost\" of sending an ACK using a link along an asymmetric path. These endpoints can also choose a non-default value to enable a new CC, scale to higher rates, etc. (e.g. draft-iyengar-quic-delayed-ack). We have shown there can be significant benefit from an ACK Ratio of 1:10 and this is expected to also have benefit for higher levels of asymmetry. 
The proposed change from 1:2 to 1:10 is predicated on QUIC's design decisions: starting a connection with an Initial Window of 10 is acceptable, QUIC uses PTO as the loss detection method (a predicate to issue ), and QUIC has some form of pacing at the sender, at least to mitigate IW bursts.\nEven though we've avoided straying this far from the TCP spec, I think allowing this would be good for the internet. It should probably be documented for TCP, too. How does ack stretching of this nature work for BBR bandwidth estimation? It's harder to use the ack pacing, no?\nI don't believe QUIC ACKs are significantly larger. In fact, I believe the ACK frame itself is typically smaller than TCP ACKs, but the savings there are used on QUIC's 16-byte auth hash and likely a non-zero-length CID. Do you have a size analysis written up somewhere? I think this is the core issue. It's not that QUIC ACKs are really consuming significantly more capacity in terms of bytes on the wire, it's that middleboxes can't decide to thin the ACKs, because they don't know which packets are ACK-only packets. Did you implement the optimization already described in the transport draft?: \"As an optimization, a receiver MAY process multiple packets before sending any ACK frames in response. In this case the receiver can determine whether an immediate or delayed acknowledgement should be generated after processing incoming packets.\" NAME did and I believe found it hugely reduced the number of ACKs, sometimes too much. As I've said before, I would really prefer not to specify an approach in QUIC transport that is a heuristic like does. In particular, on lower bandwidth networks, 10 packets may be the entire congestion window and is too large a value. On higher bandwidth networks, every 20 packets has been shown to be slightly better than 10, at least when using BBR. I wrote to add some editorial text about why every 2 packets is a SHOULD, not a MUST. 
Chrome did not default enable the ACK strategy recommends until BBR was launched, because there was a performance regression with Cubic. I'd rather not assume everyone is using BBR or a BBR-like congestion controller. NAME BBR does fine with stretch ACKs. Cubic and Reno suffer in some cases, however.\nQUIC ACKs are larger! The smallest IPv4 QUIC ACK packet is 51 bytes. The ACK frame is 4 bytes, but is sent using a QUIC packet. For IPv4 this is: 20 (IP) + 8 (UDP) + 3 (1+1+1 QUIC header ... assuming a 1 B CID) + 4 (minimum ACK frame size) + 16 (crypto). Other types of frames may also be sent in the same packet, increasing this size. That size difference matters. The point is that QUIC ACKs are consuming more capacity in terms of bytes on the wire, and - often also important - in more transmission opportunities at the physical layer. That makes QUIC perform more poorly than it needs to across some paths. The text \"As an optimization, a receiver MAY process multiple packets before sending any ACK frames in response. In this case the receiver can determine whether an immediate or delayed acknowledgement should be generated after processing incoming packets.\" is permissive of doing something different - alas, the effect of the ordering of processing depends on rate, timing, etc. - but the present text is not a recommendation to do something appropriate, which I think it needs to be. If this were to describe that a receiver SHOULD reduce the ACK to data packet ratio to 1:10, then that would help. On low bandwidth links, I agree that waiting for 10 packets would not be suitable, but then wouldn't the rule 4 ACKs/RTT still cover this case? (I have no problems believing an ACK every 20 packets can be slightly better than 10 in some cases, but then we have chosen IW at 10 and that relates to burstiness and responsiveness of the basic spec, so I think 1:10 makes more sense as the recommendation. 
If the CC and path information suggest 1:20 - perhaps as the rate increases, then a sender can "tune" as per the transport parameter proposal. That said, I STILL assert that this is a different issue from the expected default.)\nWould it help to give a recommendation like, if the congestion window is large, the ACK ratio should be reduced? Not sure if we can easily give a threshold for the window, but to align with the 4 ACKs per RTT rule we could say 40, or to be more conservative maybe 100?\nIf the receiver knew the cwnd... we could add that. The current proposal we made is here (along with some of the arguments about why QUIC and TCP are slightly different): draft-fairhurst-quic-ack-scaling\nSo the min TCP ack is 40 bytes vs QUIC at 50 (0-byte CIDs are common, so I'll assume that), 25% larger in the minimum case. However, with a few SACK blocks, TCP ACKs are likely to be larger than QUIC ACKs, since SACK consumes 10 bytes for the first block and 8 for each additional block. By the time the second block is added, there's a good chance QUIC ACKs are smaller, and it's almost certainly true with 3 SACK blocks (which I think is a typical number). So I'll go back to my previous statement that the real issue is that the network can't thin QUIC ACKs effectively.\nMaybe "yes", maybe "no" on a larger size after loss - since QUIC ACK ranges are often larger - but that depends on loss pattern, etc. In our case, we saw more bytes. But "yes" on me saying there is a need for QUIC to make good default decisions about the ACK rate, because QUIC becomes responsible for transport working across heterogeneous paths, and we need to get good defaults rather than rely on "ACK-Thinning" and other network modification.\nDoes anybody have a gut feeling for how very low latency congestion control fares with sparse ACKs? I have experienced that DCTCP is more picky with ACK aggregation than e.g. Reno. I suspect that ACK-thinning can give the same issue as well.
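The packet-size comparison in this thread can be written out as a small calculation, using only the byte counts quoted above (a 40-byte minimum TCP ACK, 10 bytes for the first SACK block plus 8 per additional block, and a QUIC IPv4 short-header ACK with a 1-byte CID); the helper names are ours:

```python
def tcp_ack_size(sack_blocks=0):
    # 40-byte minimum (20 IP + 20 TCP); SACK adds 10 bytes for the first
    # block and 8 for each additional one (figures quoted in the thread).
    size = 40
    if sack_blocks:
        size += 10 + 8 * (sack_blocks - 1)
    return size

def quic_ack_size():
    # 20 (IP) + 8 (UDP) + 3 (short header, 1-byte CID)
    # + 4 (minimum ACK frame) + 16 (AEAD tag) = 51 bytes.
    return 20 + 8 + 3 + 4 + 16
```

So the minimum QUIC ACK (51 bytes) is larger than the minimum TCP ACK (40 bytes), but once loss produces two or more SACK blocks the TCP ACK (58+ bytes) overtakes it, supporting both sides of the exchange above.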
ACK spacing that is too sparse can also cause problems. Another aspect is how e.g. TCP Prague with paced chirps works with sparse ACKs? To me the RTT/4 rule makes sense, but I feel that the best alternative depends a lot on both the congestion control and the source material. Large file transfers are more trivial than sources that are occasionally idle.\nI think this isn't the correct question. This is about the default policy. The default policy is about what a receiver does by "default" - the current text uses a value of ACK every other packet, 1:2. TCP specified this when transmission rates were low, and TCP depended on individual ACKs to clock data (no pacing, IW=2), detect loss (no PTO), increase cwnd (per ACK packet), etc. As a result, TCP causes ACK congestion and strange interactions with radio access. Performance over WiFi, DOCSIS etc. suffers unless the network tries to address this. I don't see why QUIC, which doesn't rely on any of these, needs to set a DEFAULT of 1:2. This is literally a few lines of code difference, and we could use 1:10. These CCs you mention may have different needs, and that needs to be handled by telling the remote that your CC has these specific needs - which I think is what Jana's extension is about.\nI am kind of the donkey in the middle... ACK thinning in networks should be discouraged. Standardization of ACK thinning mechanisms was proposed in 3GPP RAN2; this request was turned down mainly on the grounds that it can harm transport protocol performance and does not work safely with QUIC. Nothing stops UE (terminal and modem) vendors from implementing proprietary ACK-thinning, but I certainly hope that 5G UE vendors don't attempt to thin QUIC-ACKs as this is a slippery slope. High ACK rates are a 5G problem mainly for high band NR (=5G at e.g. 28 GHz). It is mainly a coverage issue as UEs have a limited output power due to practical and regulatory requirements.
At the same time one needs to address that end users don't only download movies today, and even VoD streaming implies uplink traffic in the form of chunk requests. And then we have an increasing amount of people doing uplink streaming or uploading an infinite amount of pictures of food plates and cats :-) Operators and 5G vendors have to deal with this and solve the problem with deployment of 5G access in various frequency bands to get the best downlink and uplink performance. Reduced ACK rates are of course welcome, but I don't currently see a problem with the initial ACK ratio being 1:2, increasing to 1:? as the congestion window grows, bounded by e.g. an RTT/4 rule. But here I need to admit that at least I have not studied this in detail. Ergo.. I am not convinced that it is necessary to hardwire the QUIC spec to 1:10, but I also cannot say that it should not be done.\nThis is a small enough change. However, there is still a bar for those, and the current level of discussion makes me believe we should close this with no action?\nJust so that it's easy to find, there is data and a continued discussion at URL\nFolks who care about a change here need to prioritize progressing this issue. We're close to being able to last-call the specs, and would like this resolved in time.\nWe've completed a round of updated analysis of the proposed change to the QUIC spec, using quicly (changing to a 1:10 policy), and chromium (which currently does 1:10 by default). Our hope is that this change to the spec will actually make standard end-to-end QUIC work better than TCP over asymmetric paths! This is a short slide deck about the performance of the proposed change: URL There have been a number of other changes made by the QUIC ID editors relating to ACK processing, so we wanted to be sure these results were up to date. (These results do not consider additional ACKs after re-ordering - reflecting the latest text removing that.)
We also really hope this will motivate a change in the practice of forcing QUIC fallback to TCP (with PEP/Thinning) over highly asymmetric paths. People may also be interested in work for L4S to evaluate the impact of ACK Thinning for TCP-Prague. Please do provide feedback, or any questions on this - as Lars notes, we think the time is running out for considering new changes, even relatively simple ones like this. Gorry P.S. First email: URL\nI believe that it is beneficial to use a 1:10 policy. I understand that this means that a 1:2 policy would still be used in the initial phase (flow start) - or do the timer mechanisms kick in to avoid ACKs being transmitted too sparsely in this case?\nYes, our proposal counted ACK-eliciting packets at the receiver (because the receiver does not in general know the congestion state, so you need a heuristic), and we finally chose the same method in current QUIC Chromium - i.e. for the first 100 packets we do not use ACK 1:10, changing only after 100 packets.\nOK, it sounds like a good idea to use 1:10 after 100 packets or so. Just one potentially stupid question: is there any special treatment necessary after an RTO?\n[This seems to be a forwarded comment from NAME] Hi, Gorry Thank you for sharing your experiments. I'm delighted to see results showing that use of ack10 gives us comparable throughput to ack2. Recently, I have been looking into how ack-frequency affects the computational efficiency of the server [1]. Using the (almost) same setup, I have today run a benchmark comparing three ACK strategies: ack2, ack10, ack 1/8cwnd. ack2: 334Mbps ack10: 414Mbps ack 1/8cwnd: 487Mbps As can be seen, ack10 is better than ack2, but lags behind ack 1/8cwnd. That's because there are still frequent ACKs. They take a considerable amount of CPU cycles. If we want to see QUIC being as lightweight as TLS over TCP, it is critically important to reduce ACK frequency as much as possible.
This is because the cost of processing an ACK in QUIC is much more expensive than in TCP, due to QUIC being a userspace transport. Based on this, if we are to change what we have in the base drafts, I'd much prefer adopting what we have in the proposed delayed ack extension [2], rather than just changing the default ack policy from ack2 to ack10. By adopting what we have in the delayed ack extension, we can expect better performance, and the endpoints would have the freedom to change the behavior when the default ack policy is deemed insufficient (regardless of the default being ack2 or ack10). [1] URL [2] URL\nThese results/article on computational efficiency are really interesting. Please see below. This is useful; we see here there's a clear benefit in using a larger ACK Ratio also in terms of server load. We can already see the benefit at 1:10, and GSO helps as expected. We think that this is also really true. QUIC ACKs are also larger than TCP ACKs (see URL). I don't think the two approaches conflict.
The difference between the two approaches is that the small change to the base specification to use an ACK Ratio 1:10 ensures the default return path is at least as good as TCP for common traffic (even with common TCP ACK Thinning). Without this change, the end user could experience significant differences in performance between TCP and QUIC, and between different QUIC versions, over asymmetric paths. Whereas to me, the transport parameter extension looks like an exciting way to enable many more things, but there are also many ways this could usefully evolve based on CC, offload, pacing etc., and I think this will take much more work to figure out what is needed, safe and best ... and then to reach consensus. We still also need to specify a default ACK Ratio in this case as well. Best wishes, Gorry\nI'm greatly in favour of draft-iyengar-quic-delayed-ack-00 (BTW, where would you prefer me to give review comments about that?) I am much less keen on any default delayed ACK policy, whether 1:2 as now in quic-transport, or the 2-10/100 numbers proposed in draft-fairhurst-quic-ack-scaling-02. My concerns would melt away if the default policy was explicitly /implementation-dependent/ and the draft instead gave /guidance/ to implementers. The draft could still give the same 2-10/100 policy, but as /guidance/ for the current public Internet. The draft should also state the conditions under which this guidance was determined (e.g. Internet paths with BDP between x and y, or whatever). What's the difference? Stating a default policy implies this is a matter of interoperability. It's not. If we change to giving guidance, I'm sure most implementers will just use the guidance. However, implementers will be free to evolve their default as they gain experience. This reduces the possibility that other behaviours will evolve around any default like 2-10/100. For instance, network monitoring using it as a signature. Or congestion controls optimizing around it.
In turn, this will reduce the chance of ossification around the first numbers we thought of. Senders can always use the ACK frequency frame to ask a receiver to change from its default. Then receivers will gradually learn what senders prefer them to do, and change their default accordingly. IMO, a continually improving default is important for short flows, so senders can avoid requesting a different ACK frequency right from the start.\nNeed to admit that I have not had too much time to follow the QUIC work lately. I too believe that draft-iyengar-quic-delayed-ack-00 is a good long term solution that allows endpoints to set a desired ACK frequency that is not more than necessary. As experiments show, there are good incentives to keep the ACK frequency as low as possible in terms of client/server load and asymmetric link limitations. An ACK-FREQUENCY frame will allow endpoints to find the proper balance between the requirements given by the congestion control, the traffic type and the limitations given above. I am not too concerned if it takes an extra development/standards cycle to have that in place in the standard and in running code, as long as the QUIC community shows other SDOs that the challenge with QUIC ACKs is being addressed.\nNAME NAME could you please trim the quoted text when you reply by email?\n(I trimmed the comments, but forwarding inline replies to email messages doesn't work well.)\nGiven that this is an area where there is still obviously experimentation and refinement, NAME comment seems on point; baking a specific algorithm as requirements into the specifications doesn't seem like a good idea (although putting constraints on endpoints might make sense). Can we focus on general guidance rather than documenting a specific algorithm? What would that PR look like?\n+1 to NAME suggestion.
If we want to make changes for v1, they need to happen now, and they must be in a form that can let us get consensus.\nTo be clear, when I said that having a default policy is not a matter of interoperability, that's only strictly true if QUIC supports something like draft-iyengar-quic-delayed-ack for the sender to ask the receiver to use a certain policy. Bob\nI've proposed more context and text around our current guidance in the draft, see .\nI believe that the text around asymmetric links looks fine; it is not really necessary to write up much more. What can be good to know is that for the 5G case with limited UL (uplink = modem->base station transmission), the pain that a flow with an unnecessarily high ACK rate causes is on the flow itself or other flows sharing the same access.\nThere are some different points of pain here, the default and the need to adapt to the endpoint: An ACK Ratio (AR 1:1) is the finest granularity of feedback. A QUIC AR of 1:1 would consume return path capacity proportional to about 6% of the forward traffic (assuming no loss, etc). This is more than TCP (about 3%). An ACK Ratio (AR 1:2): For a return path bottleneck, QUIC with an AR of 1:2 will still have more impact on sharing flows than TCP would with the same AR 1:2 (because QUIC ACKs are larger, and because networks do not currently thin QUIC ACKs). So QUIC needs to choose the default - is it 1:2, 1:4, 1:8, 1:10? A QUIC AR 1:10 results in an ACK traffic volume of about 1% of the forward traffic. To me, that's a small proportion, which is why I argued that 1:10 is a good recommendation. The default is important when: the endpoints have no knowledge of the asymmetry or the impact they have on a link's layer 2 loading, or the endpoints need to reduce the collateral damage to other IP flows sharing the return path capacity.
Negotiation: The AR can be driven by the application or endpoint - e.g., a higher rate connection may well benefit from an AR of 1:10, 1:20, 1:100, etc. - but such ARs are seldom chosen to reduce the impact on sharing a return bottleneck; more likely they are chosen to benefit the performance of an application or endpoint. That consideration is different, and likely depends on the application or endpoint - which to me is where the flexibility offered by a transport parameter is so important. draft-iyengar-quic-delayed-ack will still need guidance on the "default". I argued that the default policy matters because of the congestion behaviour it causes when there is appreciable asymmetry at layer 2, and because of the collateral damage (congestion impact on sharing flows). Sure, the choice of 1% is just a number I said was reasonable, but I've seldom encountered L2 technologies where less than 1% of forward traffic would be significantly better.\nNAME : I understand your position. While I agree that ack thinning happens with TCP, it is not what I expect to find on a common network path. And as far as I know, the endpoints still implement the RFC 5681 recommendation of acking every other packet. What you are arguing is that with ack thinners in the network, TCP's behavior on those network paths is different. However, I do not believe we understand the performance effects of these middleboxes, that is, how they might reduce connection throughput when the asymmetry is not a problem. Importantly, I don't think we should be specifying as default what some middleboxes might be doing without fully understanding the consequences of doing this. Please bear in mind that Chrome's ack policy of 1:10 works with BBR, not with Cubic or Reno. I do not believe there is a magic number at the moment, and as NAME noted, even 1:10 is not a small enough ratio under some conditions.
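As a toy bytes-on-the-wire model of the return-path cost being traded here (it deliberately ignores ACK-range growth, L2 framing, and transmission opportunities, which this thread notes can matter more than raw bytes; the function name and defaults are ours):

```python
def ack_overhead(ack_ratio, ack_bytes=51, data_bytes=1500):
    """Fraction of forward traffic volume consumed by ACKs when one
    ack_bytes-sized ACK is sent per ack_ratio full-sized data packets."""
    return ack_bytes / (ack_ratio * data_bytes)
```

This raw-byte model shows the proportional scaling (moving from 1:2 to 1:10 cuts ACK volume fivefold); the ~6%/3%/1% figures quoted above are larger because they also account for per-packet and link-layer costs.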
Given this, and given that we know 1:2 is used by TCP endpoints, it makes the most sense to specify that as the suggested guidance. The new text proposed in describes the tradeoff involved, and the rationale for using 1:2. I am strongly against doing anything substantially different than what we have in the draft now without overwhelming information supporting a different position.\nAgree with NAME 1:2 seems a reasonable default, and there seems to be considerable interest in draft-iyengar-quic-delayed-ack, which has a better proposal for stacks that care.\nI think the new ACK text is improving the specification, but I'd like to be sure we have thought about this decision. Let me try to carefully respond on-list here, to see where we agree/disagree: I disagree, I think we shouldn't be perpetuating a myth that a sender receives a TCP ACK for every other packet. Even in the 90's, TCP implementations often generated stretch-ACKs for 3 segments. Since then, limited byte counting was introduced and significantly improved the sender's response to stretch ACKs, and that has been good for everyone. Senders still rely on byte-counting in TCP to handle Stretch-ACKs, and this need is not decreasing: new network cards reduce per-packet receive processing using Large Receive Offload (LRO) or Generic Receive Offload (GRO). I am saying many TCP senders actually do today see stretch-ACKs. There are papers that have shown significant presence of stretch ACKs, e.g., (H. Ding and M. Rabinovich. TCP stretch acknowledgements and timestamps: findings and implications for passive RTT measurement. ACM SIGCOMM CCR, 45(3):20–27, 2015.). A TCP Stretch-ACK of 3 or 4 was common in their datasets (about 10% of cases). In some networks, the proportion of Stretch-ACKs will be much much higher, since ACK Thinning is now widely deployed in WiFi drivers as well as cable, satellite and other access technologies - often reducing ACK size/rate by a factor of two.
I am intrigued and would like to know more, in case I missed something? Typical algorithms I am aware of track a queue at the "bottleneck" and then make a decision based upon the contents of the queue. If there is no asymmetry, there isn't normally a queue and the algorithm won't hunt to discard an "ACK". But, maybe, there is some truth in being wary: Encrypted flows can also build return path queues (as does QUIC traffic). Without an ability to interpret the transport data (which nobody, including me, wants for QUIC), a bottleneck router is forced to either use another rule to drop packets from the queue (such as the smallest packets) or to allow a queue to grow, limiting forward direction throughput. I would say neither of these outcomes is great, because the router is trying to fix the transport's asymmetry. However, if QUIC causes this problem, I suggest some method will proliferate as QUIC traffic increases. I didn't find anything in the QUIC spec that was a major concern. It looked to me like QUIC had learned from TCP how to handle stretch-ACKs. After that, we changed quicly to use 1:10 with Reno, and things worked well. Didn't NAME also use Reno? Sure, the design of TCP recognised that AR 1:1 generated too much traffic, AR 1:2 for TCP resulted in a Forward:Return traffic ratio of ~1.5%, and QUIC increases this to ~3%. However, as network link speeds increased, this proved too low for many network links, and so TCP ACK thinning came into being. This often reduces the TCP AR to around 1:4, and will do this without bias, whatever the outcome is.\nNAME Just to clarify, in our experiment, we reduced the ACK frequency from the default (2) to after leaving slow start, then updated it every 4 PTO. I think that would work fine on a high-bandwidth, stable network. But I am not sure if it would work as well as ack2 on all network conditions. Anyways, I'd explain our experiment as an optimization of ack 2, as it was a reduction of the ack rate after leaving slow start.
I think that that was a sensible choice, because IIUC having frequent acks during slow start is essential. Assuming that we'd prefer using ack 2 during slow start, the question is if we need to hard-code in the base draft how to optimize after leaving slow start. While I agree that continuing to use ack 2 after slow start is concerning for certain networks, I am reluctant to spend time looking for such a fixed set of values across diverse networks (including much slower links or lossy ones). Rather, my preference goes to shipping the base-draft as-is, then moving on to finalize the ack-frequency frame. I think we can assume that most if not all of the popular clients will support the ack-frequency frame. Then, it becomes less of a question what the default in the base-draft is.\nThanks kazuho, 1/8 or 1/4 cwnd seems reasonable, as does using Ack 2 for the first part of slow start ... that's close to what we proposed. And yes, there will anyway be cases where any default does not work. If QUIC decides to retain Ack 2, can we at least capture that Ack 2 with QUIC doesn't have the same return path performance as Ack 2 with TCP - so that people know that less frequent ACKs are something to think about from the start? I'm doing a talk in PANRG this week, and have decided to talk through this topic (I was already going to talk about things related to return paths). That'll give people a view of some data - and a little thought about the implications this has for QUIC, and whether there is an incentive for network devices to thin QUIC to restore performance.\nNAME : I'd be in support of saying that ack thinning happens for TCP and cite those studies, and that in the absence of that, there might be throughput consequences for QUIC. I think we've said exactly that in other words in , but if we want to say more there, that's fine with me. That would make this an editorial issue, but I'm entirely happy to do more words there.
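A sketch of the "ack every 1/8 cwnd" strategy from the benchmark above, as a sender might request it via the proposed ACK-frequency extension (the function and its parameters are illustrative, not taken from any draft):

```python
def ack_eliciting_threshold(cwnd_bytes, mss=1460, fraction=8, floor=2):
    """Ask the receiver to ACK once per cwnd/fraction of data:
    roughly (cwnd in packets) / fraction, never less often than `floor`.
    The floor keeps small-cwnd (e.g. slow-start) flows at frequent ACKs."""
    cwnd_packets = max(1, cwnd_bytes // mss)
    return max(floor, cwnd_packets // fraction)
```

A sender could recompute this as the congestion window grows and send an updated request every few PTOs, as the experiment described above did.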
I am against changing the recommendation here for another arbitrary recommendation. I think the right approach is to allow endpoints to specify this during a connection, which is where the extension is helpful. As NAME notes, our hope is that the extension gets widely deployed. If you want to make the argument for the extension to become part of the base draft, that's a different conversation, but that's not the one we're having here.\nAttempting to summarise the current status of this issue as a chair: there appears to have been a fair amount of discussion about making a design change, with representatives both in favour and against. It appears there is no universally optimal default policy; choosing an optimal one depends on many factors. The editors have landed an editorial change that attempts to better describe the rationale behind QUIC's current design and highlights some of the factors. There may be further editorial improvements, but they do not need to block the design issue. The proposal therefore is to close this issue with no design change. We had prepared some slides for the Vancouver IETF to discuss QUIC's ACKs, and how to make QUIC work better than TCP in common asymmetric paths. This is a short slide deck that summarises what we have learned about ACKs in QUIC: URL It compares the ACK traffic of TCP with that of QUIC from the point of view of a path where QUIC's return traffic is somehow constrained or could impact other user traffic. We think that ACKs are a concern for various shared-capacity return links (e.g., radio-based methods in WiFi or mobile cellular where ACKs consume spectrum/radio resource or battery; a shared-capacity satellite broadband return link or the uplink of a cable system). For TCP, such systems commonly involve a PEP or an ACK Filtering mechanism (which clearly isn't wanted or even possible for QUIC). The baseline QUIC performance (as per current spec) suffers over these paths.
So our first question is: Do other people think it is important that the default specified for QUIC works well with asymmetric paths? Gorry"} {"_id":"q-en-quicwg-base-drafts-c67897974a9ff5b36b3c476fa35d2614a4008dca706fcfaa08445fe829ca6bb2","text":"The text in 2.1 is more complete, so we don't need this.\nWhen I read 3.0 I was a bit confused, because when the discussion of streams takes place in section 2.1, the requirement on stream ID ordering is already mentioned, but the reminder of what the types are is fresh in the mind. Perhaps this should be revised."} {"_id":"q-en-quicwg-base-drafts-187639b153ea0de606ec710b58e172435b859ce1c94354bae259d4c56ae66928","text":"In early versions of path validation, there was a requirement that PATH_CHALLENGE be sent on the same path. This is, for various reasons, either unnecessary or harmful, so we no longer have that requirement. But there was some residual text that assumed that property. This removes a paragraph citing symmetry directly, and rewords text that assumes return reachability - a property that QUIC no longer provides mechanisms for.\nIn , we removed the requirement that PATH_RESPONSE be sent/received on the same path that PATH_CHALLENGE was sent on. However, there is still text in the document that relies on this assumption. In 8.5, it explains that receipt of the PATH_RESPONSE on another interface doesn't cause validation to fail, because "It is possible that a valid PATH_RESPONSE might be received in the future." But receipt of the PATH_RESPONSE over any interface is valid, and causes the validation to succeed. In 9.2, it suggests using path validation to demonstrate return reachability, but since PATH_RESPONSE need not be sent on the same path, path validation doesn't demonstrate that.\nNote that sending the response on the old path can lead to failures in "break before make" scenarios.
Shall we mention that?"} {"_id":"q-en-quicwg-base-drafts-01935d39a9f33491832e8e566bba4565c3f4dcda2adb89705ce0006da5fb67a3","text":"These two statements are - to the extent that I can best determine - redundant. One is better.\nThe first two sentences of QUIC-TLS, Section 4.1.4 clearly intend to say different things, but I can't figure out the distinction. Is there a way we could reword them to make them clearer?\nI think that this is not separable at all. Let's reduce that to a single statement."} {"_id":"q-en-quicwg-base-drafts-ad607b3db43cf70374f379ebfd498f560b1ec0408fcd925c82183b5bea5eb943","text":"We weren't very concrete in saying how endpoints generate packets with CONNECTION_CLOSE, particularly those that throw away keys. We have a rather vague requirement. In short, follow the same rules we established for the handshake. Wording this as an aggregate number allows for stochastic reactions and larger CONNECTION_CLOSE frames. This way, if you get a 25-byte packet and respond with a 200-byte packet, you can do that, but you have to respond to 3 in 8 or fewer in that way. Note that this limit only applies if the endpoint throws away decryption keys. Endpoints with keys aren't blind amplifiers.\nclosing period ({{draining}}) and send a packet containing a CONNECTION_CLOSE in response to any UDP datagram that is received. However, an endpoint without the packet protection keys cannot identify and discard invalid packets. To avoid creating an unwitting amplification attack, such endpoints MUST reduce the frequency with which it sends packets containing a CONNECTION_CLOSE frame. This seems like a pretty vague requirement for a 2119 MUST.
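A minimal sketch of the "respond to 3 in 8 or fewer" aggregate limit described above, for an endpoint in the closing period that has discarded its keys (illustrative only; the draft does not mandate this particular mechanism, and a stochastic variant would also satisfy an aggregate limit):

```python
class ClosingResponder:
    """While draining without keys, reply with a CONNECTION_CLOSE packet
    to at most `limit` out of every `window` incoming datagrams."""
    def __init__(self, limit=3, window=8):
        self.limit, self.window = limit, window
        self.seen = self.replied = 0

    def on_datagram(self):
        """Return True if a CONNECTION_CLOSE packet should be sent."""
        if self.seen == self.window:   # start counting a new window
            self.seen = self.replied = 0
        self.seen += 1
        if self.replied < self.limit:
            self.replied += 1
            return True
        return False
```

Because the limit is aggregate, a 200-byte CONNECTION_CLOSE in response to a 25-byte datagram is still allowed, as long as most datagrams go unanswered.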
Like, I could only send it 9999/10000 packets and that would be compliant.\nI think that it would be good to follow the 3x rule here also, but that's a non-editorial change.\nDo folks agree the PR resolves this?\nI think the PR resolves this and is a sensible choice.\nYup, the PR resolves this.\nOK, labelling as such."} {"_id":"q-en-quicwg-base-drafts-d4ac070ca418107f72a3d5a74096e23b8bf7ed967369f9e95d49e79f60541086","text":"This uses positive framing for the recommendation to use frames rather than packets. And it makes that recommendation directly, rather than implying it.\nHere are three quasi-nits about S 12.2. Handshake, 1-RTT; see Section 4.1.4 of {{QUIC-TLS}}) makes it more likely the receiver will be able to process all the packets in a single pass. A packet with a short header does not include a length, so it can only be the last packet included in a UDP datagram. An endpoint SHOULD NOT coalesce multiple packets at the same encryption level. Is the implication here that you should instead pack all the frames into one packet? UDP datagram. Senders MUST NOT coalesce QUIC packets for different connections into a single UDP datagram. Receivers SHOULD ignore any subsequent packets with a different Destination Connection ID than the first packet in the datagram. Is there a valid reason to have different SCIDs? ({{packet-version}}), and packets with a short header ({{short-header}}) do not contain a Length field and so cannot be followed by other packets in the same UDP datagram. Note also that there is no situation where a Retry or Version Negotiation packet is coalesced with another packet. Should this be a MUST NOT?\nYes. See . I don't see why. 
It's just a fact, not a conformance requirement.\nFor (1) I think it would be helpful to say that.\nLGTM, one suggestion."} {"_id":"q-en-quicwg-base-drafts-daf433835af87430c1717cc5383a0c548458b58cacf51f22373e144054c59cea","text":"This should resolve some parts of .\nNAME If you wish, I will split this PR into small ones.\nI have dropped the commit of "in flight".\nI also changed into .\nNAME The commit for "towards" has been dropped.\nThanks. BTW, that thing is another odd English quirk.\nUnfortunately, this PR bundles a bunch of unrelated changes. Some of them I like, some of them I don't. Thanks for the PR, NAME !"} {"_id":"q-en-quicwg-base-drafts-622e9efe21197ee469308b1e830278fdc5abe1bf8c4696f71bdb9983ab3197b9","text":"Added changes to recovery as well.\nBy design, TCP is capable of handling late-arriving acks. If a TCP endpoint receives an ack for a byte range after declaring loss of that byte range, retransmission of that byte range is cancelled. In contrast, QUIC uses monotonically increasing packet numbers. That is definitely an improvement, but does have a downside. If loss detection is too sensitive, and if endpoints drop information regarding what had been sent in a packet at the moment that packet is declared lost, then it would not recognize the late acks. I think that we should suggest (even if not recommend) that QUIC endpoints recognize late-arriving acks, so that our loss detection can be as aggressive and as effective as TCP, if not better. (Please let me know if such text already exists. I am opening the issue because I could not find one)\nI have to agree. We had to implement late acknowledgment by marking "lost" packets and retaining them. I believe that this was very helpful in reducing the amount of redundant information we send. Failing to retain packets that were marked as lost was also the cause of a bug.
As the lost packet was removed from our outstanding calculation (a different value to the \"in-flight\" counter we use for congestion control), we believed that there were no packets outstanding and we didn't try to send anything when the next PTO fired. As the loss detection timer is shorter than the PTO timer, we believed that there was nothing to send often (and we don't check all sources of frames, which would have told us otherwise).\nAgreed.\nYeah, we should have text on this if we don't already.\nThe resolution in introduces a new recommendation for senders, so this ought to be considered a design issue if we're happy with that resolution.\nURL we have this text:\nNAME Thank you for pointing that out. I missed that. I tend to think that we should adopt . As stated in my original comment, handling of late acks is the prerequisite of many of the normative requirements that exist in the recovery draft. Therefore, it is better to have a normative requirement for handling of late acks.\nI'm not opposing adding new normative text. If new text is added, I think that the existing text should be aligned to it. In the PR, it recommends 3PTO, but the recovery draft A.1 says 1RTT.\nFWIW, our implementation, as of a little while ago, tracks for the current PTO. If everything is working nicely, that limits our commitment. As we are sending probes, the time lengthens so that we send a probe before discarding state. If it takes two probes to get an ACK, we won't recognize that, but we will always catch an ACK for the last set of probes. And the state commitment is minimal unless we're forced to probe.\nI'll note that the PR is against transport, but this issue is tagged as recovery.\nThanks for pointing out this inconsistency, NAME I would argue that this should be a PTO, since that allows for including the RTT variance in the waiting period. 
Using a PTO includes the maximum ack delay as well -- which may not be necessary -- but it's convenient to use the PTO period.\nI wanted to clarify my comment on the PR about believing that 1 RTT is a preferable recommendation to 3*PTO. We definitely don't want to discard this information BEFORE the packet is lost or acknowledged. But after it's been declared lost, there's no need to wait longer than the reordering window to receive an acknowledgement, because an immediate ACK will be sent when the out of order packet arrives. I tend to think an RTT is a plenty long reordering window in practice. If nothing is inflight before and nothing sent after the packet is lost AND the late immediate ACK for the reordered packet is lost, then it could take longer than the reordering window to receive an ACK, but that seems like an edge case that's not very interesting.\nNAME : I've changed it to 1 PTO. We've commonly used PTO as a time period for everything else in the transport, and so it's convenient to use PTO here too. Ultimately, this is the amount of time you're holding on to packets marked as lost, so the extra time waiting for a PTO (over an RTT) really shouldn't make any measurable difference. If you still think we should say RTT, I'm fine with that.\nI'm fine with PTO, and I prefer PTO above RTT. There could be reasons where there are some packet reordering due to multiple paths being used (e.g., ECMP, clustered endpoint). In such case, the variance of latency becomes more like a constant independent of RTT. I think it is a good idea to recommend a value (i.e. PTO) that can cover such constant even if RTT is very small. PTO has that would cover that constant.\nI'm not convinced I understand the rationale completely, but I'm ok with 1 PTO. 
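The retention scheme discussed in this thread (keep a record of packets declared lost for roughly one PTO, so a late ACK can still be recognized as indicating a spurious loss) can be sketched as follows; the class and method names are hypothetical, not taken from the drafts:

```python
class SentPacketTracker:
    """Illustrative sketch: retain lost-packet records for one PTO so that
    a late-arriving ACK can be recognized instead of silently ignored."""

    def __init__(self):
        self.in_flight = {}      # packet_number -> send-time metadata
        self.declared_lost = {}  # packet_number -> time it was declared lost

    def on_packet_sent(self, pn, metadata=None):
        self.in_flight[pn] = metadata

    def on_packet_lost(self, pn, now):
        # Keep the record instead of forgetting the packet immediately.
        if pn in self.in_flight:
            del self.in_flight[pn]
            self.declared_lost[pn] = now

    def on_ack_received(self, pn):
        if pn in self.in_flight:
            del self.in_flight[pn]
            return "acked"
        if pn in self.declared_lost:
            # Late ACK for a packet already declared lost: the loss was
            # spurious, so pending retransmissions of its frames can be
            # cancelled by the caller.
            del self.declared_lost[pn]
            return "spurious_loss"
        return "unknown"

    def expire_lost_records(self, now, pto):
        # Discard lost-packet records older than one PTO, the waiting
        # period this thread converged on.
        self.declared_lost = {
            pn: t for pn, t in self.declared_lost.items() if now - t <= pto
        }
```

A record that expires before any late ACK arrives is simply reported as unknown, which matches discarding the state entirely.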
I believe we should update the recovery draft in the same PR.\nSo, design?\nWe have text about this in recovery, but there was no normative text and it's in the appendix, so I think this is design.\nLet's not thrash too much on this one."} {"_id":"q-en-quicwg-base-drafts-38063a9d307243d18b926239ca8118ea9931d127c458634cb58cd2efeebf7c94","text":"It's no longer normative and it's in a single section. I only moved text so far, and not as far as NAME suggested. I can go farther here or in a follow-up PR.\nNAME I would like to merge this. WDYT?\nCan we merge this soon? I'm worried about it developing conflicts.\nYep, NAME lost the chance to fix the PR. We are - of course - still open for editorial improvements in the form of new pull requests though.\ncalls them \"exemplary\" but 13.2.4 has MUST in it.\nFor reference, this refers to and in the editor's copy. I'm in favour of striking the exemplary note and saying that while the recovery draft is advisory, this document depends on implementations following something like what is recommended.\nThe specific MUSTs aren't really about the ACK algorithm per se, but about state endpoints will have to maintain for other reasons that are typically folded into your loss recovery code. If an implementation is unable to do duplicate detection for a packet number range, it MUST fail toward rejecting all packets, not toward accepting potential duplicates. An implementation must track the greatest packet number ever received, for use in recovering future packet numbers. In fact, the first requirement is discussed in 12.3 and could probably be dropped here.
The second isn't actually called out (MUST) in 17.1, but could easily be moved there.\nI think I'm inclined towards reorganizing the text and putting all the 'exemplary' parts into one section (i.e., \"Limiting ACK Ranges\") and moving the normative text either into \"Managing ACK Ranges\" or elsewhere in the draft.\nI thought I could clean this up fairly quickly, but I think a fair number of paragraphs need to move, including some of NAME suggestions. I'm happy to try to improve this in a few days if no one else has beaten me to it.\nAnyone want to help NAME out and take this on?\nSorry about dropping this one. I'm fine with it, and happy to make editorial ones later. Thanks Ian. When you lay it out like this, it seems so obvious. I checked that we don't have citations for {{ack-tracking}}, so this works nicely. If you care about the \"MAY\", then we can fix that in another PR."} {"_id":"q-en-quicwg-base-drafts-ff2903eb351f796c9b5ff07ce288479a95509744f20afe8bfb46bd00bea263f8","text":"This binds the spin-bit state to a network path, which aligns with intent better than the existing language. It's more words, but it is far less ambiguous.\nNAME sanity check please? I'll merge this soon either way.\nThis is likely a case of \"we know what we meant,\" but.... Section 17.3.1 says that the spin bit can be enabled/disabled on a per-connection or per-CID basis, and that: However, a literal reading of this suggests that a client which sets the spin bit on a per-connection basis will revert the spin bit on the main path to zero whenever it uses a new CID to probe on an alternate path. It also has implications for an implementation that uses many active CIDs at once. One resolution is that the current state of the spin bit is maintained on a per-CID basis, and that the spin bit begins at zero for each CID. However, for the client using many CIDs at once, that then begs the question of which of the peer's CIDs are being reflected back to it.
I think that what we actually mean here is two-fold: The current state of the spin bit is maintained on a per-path basis, and is initialized to zero for each path. The current state of the spin bit is reset to zero whenever the CID being used on a path changes, no matter how frequently that is.\nI think what you summarize in the bullets is indeed what was intended. NAME"} {"_id":"q-en-quicwg-base-drafts-aab428b55bd04e2307f1baf9693247f36cde6bb58b1b09dc20dff74ab3c3994e","text":"This text could be read to imply that an off-path attacker is more capable than an on-path attacker, which is rarely true. What it was meant to point out was that it is easier to move traffic onto a path that you are on. What it fails to acknowledge is that it is also easier to move traffic off a path that you are on. In other words, the treatment of this in 21.12 is more thorough and we don't need to talk about limitations. Mike suggested that there is some duplication between this attack and the more comprehensive analysis in 21.12. That is true, but these serve different purposes. This is to describe attacks and the normative requirements on endpoints necessary to avoid them. The other section is a thorough and holistic analysis. I couldn't see any truly straightforward changes. That doesn't mean that we won't find a way to clean this up, or that it would be undesirable to have fewer words, but I've not the time for that right now.\nI find the text in 9.3.3 confusing, because it says: Unlike the attack described in {{on-path-spoofing}}, the attacker can ensure that the new path is successfully validated. This is odd because on-path attackers are generally regarded as stronger than off-path attackers. Why can't an on-path attacker complete this attack?\nI think you're right that the text is confusing, if not quite incorrect.
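The two-fold spin-bit resolution summarized earlier in this thread can be sketched for the client side as follows. The class and names are illustrative, not from the draft; the spin-update rule shown (a client sets its spin value to the opposite of the peer's spin in the packet with the largest packet number seen) follows the spin-bit design:

```python
class ClientSpinBit:
    """Illustrative per-path spin-bit state for a client: initialized to
    zero, and reset to zero whenever the CID in use on the path changes."""

    def __init__(self):
        self.spin = 0
        self.current_cid = None
        self.largest_pn = -1  # packet numbers continue across CID changes

    def on_cid_change(self, new_cid):
        if new_cid != self.current_cid:
            self.current_cid = new_cid
            self.spin = 0  # reset on every CID change, however frequent

    def on_packet_received(self, pn, peer_spin):
        # Only the packet with the largest packet number seen so far on
        # this path updates the spin value.
        if pn > self.largest_pn:
            self.largest_pn = pn
            self.spin = 1 - peer_spin
```

A server-side sketch would differ only in copying the peer's spin value rather than inverting it.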
An on-path attacker can rewrite a source address, causing an apparent migration, but can't cause path validation to succeed with that source address unless it is actually on-path from both endpoints to the spoofed addresses as well. An off-path attacker which can race packets successfully can cause migration with successful path validation to a path which includes it, becoming a limited on-path attacker. This also feels slightly duplicative of 21.12.3; it might be worth condensing some text between these sections."} {"_id":"q-en-quicwg-base-drafts-79beb17080c83004bd44a36b09671be1d94161885eb4916a50187d82bfc0c93b","text":"I took a careful look at the text here and found a few things: There wasn't much redundancy, but there was a little. Mostly the problem was scattering of content between two locations in the document. The closing and draining states only apply to immediate close. So I moved things up. I trimmed the top-level immediate close text, and moved text specific to closing or draining to sections corresponding to each. There were a few bits that could be cut, and a few that needed to be better aligned with existing text, but nothing major. Most of this is unmodified text. There are a few bits that I had to tweak. I will add comments for those.\nNAME you can add this to your review load.\nThe section on Immediate Close contains some text that is redundant with the section on the Closing and Draining periods. Factor some of this out.\nI think this flows a lot better. Thanks for the time you put into this!Thanks, Martin!"} {"_id":"q-en-quicwg-base-drafts-74a8f3e075a3187727014c376ef36281c24df3dda51a321161c5426980f2e8b9","text":"instead of saying servers SHOULD NOT attempt to parse unknown versions, stick with the factual statement that they will likely be unable to do so. Flips the SHOULD to the second half of the sentence, the expected response. 
I believe this is editorial, since we're not modifying expected behavior, but it's touching normative language, so please speak up if you're not convinced.\nActually, it seems like there should be a normative MUST NOT attempt.\nI would have said \"cannot\". The contents are unknown, after all. Attempting to prohibit someone from doing something impossible seems like it might be unnecessary (or at least futile). Edit: of course, this change avoids having to deal with that question, which is better.\nWhy do you say it's impossible? You could simply hope that version n+1 was the same as version n in terms of structure and try to parse it as that, ignoring the version number.\nWhich is why the PR says \"unlikely,\" not \"impossible.\" But the impossible thing is for the server to know whether it's doing things correctly.\nRight, but my point is that we should prohibit trying, not just give people advice that it's dumb.\n(Broken out of .) This seems like tuning? Or is the intention to protect the endpoint from an attack, please explain the SHOULD.\nencodings for any version-specific field. In particular, different packet protection keys might be used for different versions. Servers that do not support a particular version are unlikely to be able to decrypt the payload of the packet. Servers SHOULD NOT attempt to decode or decrypt a packet from an unknown version, but instead send a Version Negotiation packet, provided that the packet is sufficiently long. I'm having trouble seeing what's needed here. The endpoint has received a packet professing to be in a version it does not understand. It can't know the semantics or encoding of any field there -- and the paragraph already explains that. Yes, I'm sure there are potential attacks in interpreting a packet of unknown provenance and unknown semantics against a different set of semantics and expectations, but I'm not convinced we need to explain them. 
The text recommends sending a Version Negotiation packet, which seems like the only sensible response.\nThis refers to .\nNAME\nI'm looking for two things: (i) I think this ought to clarify the intention for the SHOULD, I suspect it is clear to those working on this part, but not clear to me why this is a recommendation - i.e., specifically can you give me a use-case of why the Server would otherwise USE information from a packet from an unknown version, unless the intention is to say the server shouldn't waste resources to try to do this? (ii) I think this implies understanding of Section 14.1 later in the spec, and a ref would help. So... Is it equally true that.. \"Servers MUST NOT use the contents of a packet with an unknown version, and SHOULD NOT decrypt this packet, but instead send a Version Negotiation packet, provided that the packet is sufficiently long (see Section 14.1).\"? Or perhaps simply explain something like: \"Servers SHOULD NOT commit resources to attempt to decode or decrypt a packet from an unknown version, but instead send a Version Negotiation packet, provided that the packet is sufficiently long (see Section 14.1).\"?\nFundamentally, it's a caution against assumptions which could prove incorrect, such as \"this looks like an unencrypted TLS ClientHello, so I can go find the field that contains....\" The server will either process information in a way that could be used maliciously, or the server will generate a response that won't be understood by the client. Ideally, this would be a MUST, except that it's entirely unenforceable by the client what the server does internally.
Perhaps the right solution here is to remove the normative language with regard to that piece and stick to the statement of fact we already have:\nThat approach seems fine."} {"_id":"q-en-quicwg-base-drafts-459caf625c1a79a8023a4dcb0a300f5869298013aa5d19314217e5af06afa58e","text":"The original design for path validation attempted to validate return reachability by requiring that PATHRESPONSE followed the same path as PATHCHALLENGE. We removed that long ago, but the text retained echoes of that choice.\nCurrently, does not specify which path a PATHRESPONSE needs to be sent over. That was somewhat surprising to me, as I had expected it to be scoped to the path that PATHCHALLENGE was received on. If it isn't, could we add a sentence to the Path Validation Responses subsection to explicitly state that a PATH_RESPONSE is not scoped to a given path (and ideally explaining why it isn't)?\nSo I thought we were really crisp about this, but we were not. PATHRESPONSE does not need to follow the path on which PATHCHALLENGE was received. We're clear about this when it comes to preferred address handling, but there is some vestigial text that contradicts this. That should be fixed.\nFor the record, I personally believe that adequately resolves this issue while remaining purely editorial.\nThanks for being so prompt David. I will leave this for a bit to allow others a chance to spot errors, but I plan to merge this before we publish -30 (real soon now, I hope).\nYes, the requirement was removed but the inverse statement was never explicitly added. corrects that.\nThanks for writing this up, tiny wording suggestions, but looking great! 
Thanks for writing this up, I think this adequately resolves my concerns from ."} {"_id":"q-en-quicwg-base-drafts-2807c5d074fad8a5b6581eef7a8a5e324e836fcd562f106278e711d909cb8579","text":"I believe these two cases are both 3*PTO to allow the PTO to expire at least once before declaring path validation failure and idle timeout.\nThanks for your suggestions, I'm quite happy with this now.\nNAME says in : This is reasonable. I don't know which document needs the text though.\nThis looks like it should go into -recovery, since -transport mostly only refers to as a TP, except in Section 13.2.1 (\"sending-acknowledgements\"), but that text is very general.\nI would suggest that this should go where a value of X PTOs is suggested for each specific timer.\nRelabeling, since ended up being against -transport and not -recovery."} {"_id":"q-en-quicwg-base-drafts-c67c1a1d030594eee8c642251d1a3163f46abcf8a2acdcc6d8f78c134d024496","text":"Error codes do use varints, but packet numbers (in the header) don't."} {"_id":"q-en-quicwg-base-drafts-1e35efb7c5395db3bcd3aceb58445341e493e6968ea9e3226410bb90329b2a63","text":"This check really is triggered by TLS providing new keys, not QUIC delivering data on a higher encryption level. Otherwise, in the case where no session ticket is sent, this condition would go undetected for unconsumed data at the Handshake encryption level."} {"_id":"q-en-quicwg-base-drafts-85983188714c6d428dde0666897c6c1623d8c21ec917fa6bdaf2404222bbd10c","text":"Section 7.6.1: The RECOMMENDED value for kPersistentCongestionThreshold is 3, which is approximately equivalent to two TLPs before an RTO in TCP. Are there any values that need to be explicitly forbidden? For example does it need a sentence saying that it MUST be larger than X? I would think that 1 would be too low and allow slow start after just one RTT+variations?
I understand that this would likely give the application bad behavior, but it would likely also have a negative impact on other applications if the QUIC stack ends up doing slow start, crashing into the wall, and then slow starting again.\nI lean towards not being overly proscriptive here. We could elaborate on the pros and cons of smaller and larger values if you think that is necessary? Exponential backoff is clearly critical, but it's less clear to me that the CWND really needs to be collapsed in these situations.\nI am really just trying to assess if there are significant risks with any specific ranges of values. For example causing very bursty behavior that could be considered harmful, or becoming extremely aggressive after a persistent congestion event; in those cases some text may be necessary.\nBased on , labeling this as \"editorial\"\nI think this resolves my issue, but would prefer Ian's proposal."} {"_id":"q-en-quicwg-base-drafts-9eb4af6ef44407f4a60a131277549f3642e22415128dd5216cfadd56a5a933bd","text":"That term doesn't appear above, so don't use it in the PTO section.\nThanks NAME I took your suggestions with one tweak.\nSection 6.2.1: \"The probe timer MUST NOT be set if the time threshold (Section 6.1.2) loss detection timer is set. The time threshold loss detection timer is expected to both expire earlier than the PTO and be less likely to spuriously retransmit data.\" What does \"MUST NOT be set\" mean? The time threshold loss detection of Section 6.1.2 only occurs if there is an acknowledgement received by the sender sufficiently late after the transmission. Thus, the PTO timer needs to always be set, because even if the Time Threshold timer fires it might not declare a loss.\nIt means if the time threshold timer is armed, you are not supposed to arm the PTO timer. If the Time Threshold timer fires and doesn't declare a loss, then I would consider that a bug.\nSo the above text about the loss timer only applies when one is under the condition created by this Section 6.1.2 sentence?
\"If packets sent prior to the largest acknowledged packet cannot yet be declared lost, then a timer SHOULD be set for the remaining time.\" Does the formulation in 6.1.2 need to be improved, possibly naming the timer?\nI can see the confusion. This can either be a single timer or multiple, and sometimes we refer to it one way and sometimes as separately named timers, ie \"The time threshold loss detection timer\" We can either be more consistent about naming the timers or stop naming the timers entirely. Would you prefer the former?\nFrankly I don't know which direction that is best. Could I ask you to take an editorial stab on your most preferred. I would guess when you sit down to do either you realize if it will work or not when you harmonize it.\nNAME Does PR resolve this for you?\nNAME we'd need a response, please\nYes, that simple tweak do make it clearer. So resolved.\nThe second suggestion is a little risky, but avoids a slightly awkward construction.Editorial suggestion. I'm fine with merging this, but NAME should probably see if it addresses his issue."} {"_id":"q-en-quicwg-base-drafts-c39799dff19a80c06e7138b69f88fd6eea278639e13a9b271a1591c7ba6a02cc","text":"There are three places which refer to \"Section 9.3 of QUIC-TLS\". I think they should refer to \"Section 9.5 of QUIC-TLS\".\nThanks!"} {"_id":"q-en-quicwg-base-drafts-64c75411746ef94ecf99a39eeb8ac2c31248065d87565ef5889fb7cc0e564c88","text":"Fixes several nits from the gen-art review, which basically all come down to defining terms appropriately and/or referencing their definitions. modifies definition of \"stream\" and \"stream error\"; pulls in more terms from SEMANTICS references SEMANTICS for definition of fields rewords to avoid the term \"hop\" and describes intermediation more explicitly rewords to avoid saying \"not possible\" and copies language from SEMANTICS advising against retries updates reference to definition of CONNECT in SEMANTICS. 
NAME your review would be appreciated.\nIn the HTTP genart review, NAME writes:\nYes, CONNECT only supports TCP, and there's no short-term plan to change that. There is separate relevant work such as but I don't think we need to change anything in the draft here - it pretty clearly mentions TCP so I don't see any ambiguity.\nThis question should be addressed by the definition of CONNECT the spec is referring to. It's worth double-checking the references in that section, though; it points to HTTP11, when I think there should be a definition in SEMANTICS.\nIn the HTTP genart review, NAME writes:\nPresumably a reference into SEMANTICS; I'll look for the right section to point to.\nIn the HTTP genart review, NAME writes:\nIn the HTTP genart review, NAME writes:\nIn the HTTP genart review, NAME writes: \"HTTP/3 stream\". Perhaps it's worth defining \"HTTP/3 stream\" explicitly, in analogy to \"HTTP/3 connection\". Perhaps it's worth defining \"message\", too (this is only an HTTP concept, correct?). The document should make sure it clearly distinguishes terms and concept of QUIC vs. of HTTP/3 in all cases. In particular, what is the relationship between HTTP/3 frames and QUIC frames? It's important to make their difference and relationship explicit, as the document uses several other terms interchangeably between HTTP/3 and QUIC, like \"connection\" and perhaps \"stream\".\nA few minor suggestions, but this is a good change.This looks good to me, thanks!"} {"_id":"q-en-quicwg-base-drafts-f40396f7f94f784df94465c1b59425ab80c16f02b963df82cf7d1c38cfc6846d","text":"Minor nit while looking at ; the definition of generic \"Long-Header Packets\" won't actually match any real packet types, because it ends after the CIDs. This modifies the definition to add a final \"Type-Specific Payload\" section to the figure.\nWhile the transport draft talks a lot about \"1-RTT packets\", the term is not defined. 
We also have a sentence that seems to imply that a long-header variant of 1-RTT packets might exist (URL). We refer to the packet less frequently as \"short header packets\" or \"packet with a short header.\" As a resolution, I think we might want to change the title of section 17.3 from \"Short Header Packets\" to \"1-RTT Packets\". The rest of the text can remain the same, as the first sentence of that section says that it is the only type of short header packet that we define."} {"_id":"q-en-quicwg-base-drafts-d003cb3a76d40d0589874ffeaac5f316118c169f7d156ee7b8641567341d0618","text":"I think this helps?\nMagnus Nystrom : scenario of \"mutual distrustful parties\" is unclear to me. One stated scenario is where multiple client connections aggregate at an intermediary that then maintains a single connection to an origin server. Another stated scenario is where multiple origin servers connect to an intermediary which then serves a client. In these scenarios, is the concern that the intermediary may control either a client or an origin server and thus would be able to leverage the compression context at, say, a second client and then observe (as the intermediary) the result of a guess for a field tuple? It may be helpful to explain this in a little more detail. As to what strategy (option) implementations should choose to protect against this attack, it seems like the Never-Indexed-Literal is a good candidate which should be easy to implement.
I think the document recommends the tagging option unambiguously: \"An ideal solution segregates access to the dynamic table based on the entity that is constructing the message\", while never-indexed literals are the less complicated fallback option.\nNAME so are you proposing any changes to the text?"} {"_id":"q-en-quicwg-base-drafts-52cb57ba3d1394e138b77f1dc2bdaa26661ec85340d2d181e298b7744bc96232","text":"In the Recovery genart review, Vijay Gurbani wrote:\nBased on the proposed PR, I'm marking this as editorial.\nIn the Recovery genart review, Vijay Gurbani wrote:\nBased on the proposed PR, I'm marking this as editorial."} {"_id":"q-en-quicwg-base-drafts-915dee9b837ebdffba501a439c6046901eadca77d30ef6bf86ac2a0dce895b14","text":"IANA's review wanted to know whether the 148 quadrillion GREASE values should be listed on the registry page as Reserved, and what note should be present if not. They should not, lest it be challenging to actually download the registry page within the lifetime of the human race. This rewords the restriction and clarifies that the restriction should be the note.\nNAME Can you add the same text to 22.2 (Transport parameter registry) in the transport doc as well?\nNAME said:\nWe believe the note, updated based on IANA feedback during Last Call, is sufficient. Reserving 148 quadrillion values in the registry would cause the registry to be impossible to download in the lifetime of humanity; see . The purpose is described in , to exercise the requirement to ignore unknown frame types (a.k.a. GREASE). Proposal is to make no change; this was already covered during IANA review.\nIf IANA has agreed, then I am fine with it. BTW, I was not asking to make those zillions of reservations but leaving a note in the registry ;-)\nIt's a fair question but the WG and the responsible AD are satisfied with our IANA considerations at this stage, closing with no action.\nPlease see the for full context.
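For context on the GREASE thread above: the reservation formula in question is the one QUIC v1 uses for transport parameters, identifiers of the form 31 * N + 27 (the "148 quadrillion" figure is roughly 2**62 / 31, the number of such identifiers that fit in a varint). A generator-side sketch, with helper names that are mine:

```python
def reserved_transport_parameter(n):
    """N-th reserved ("GREASE") transport parameter ID, of the form
    31 * N + 27 as defined for QUIC v1 transport parameters."""
    return 31 * n + 27


def is_reserved_transport_parameter(ident):
    """True if an identifier falls into the reserved-for-greasing set.

    This membership test is intended for senders choosing values to
    grease with; receivers should simply ignore unknown parameters
    rather than special-case this formula.
    """
    return ident % 31 == 27
```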
This issue is tracking the following IANA question:\nCopying my e-mail reply here: unknown values; having an explicit list encourages an explicitly defined reaction. This also applies to and , where the latter has better wording.\nNAME replied to IANA . Closing this issue.\nActually, NAME said in response to the e-mail that this guidance should be in the document: Let's re-open to track. I don't think all instances of this question need separate issues, though.\nSo the related questions were: Should the GREASE values be listed in the registry as Reserved? No, because some rough estimation suggests it would take a few millennia to download the registry page if this were the case. Should there be a note indicating which values cannot be assigned? I'm of two minds, because while I really don't want people embedding the formula into their parsers, I do want people embedding the formula into their generators. says no Reserved listing, yes note.\nI think this is proposal ready. Forwarding Mike's response to the list, so there is an archive to link to. Lars >
I'm going to suggest that we could live with this, but I might leave this issue open and try to find a fix once other more pressing issues are dealt with.\nNAME said:\nThis is really two comments, but I think we can dispense with both at the same time as the first is really trivial: The use of \"TLS encryption level\" is an unnecessary redundancy and inconsistent with other usage, so we can easily remove \"TLS\" there. The start of that paragraph is cruft. I'm going to suggest we only keep the last sentence, which was unquoted in this issue. But then I see that we are missing important context, so I'm going to suggest a little bit of text to replace it.\none suggestion, but lgtm"} {"_id":"q-en-quicwg-base-drafts-23b9327700c0cf4a1f99b846ab536e7b174e0140357aac8da4e5fb2b70ef3212","text":"Ben observed that this might be at odds with the definition of the interface. This is true, but the illustrative value of the current structure is valuable. More text on that might help.\nNAME said:\nI think that the multiple invocations is enough to call out. However, the retrieval of bytes to send after processing some data is not necessary. For reference, the is: This does not necessarily imply that you pull after every byte provided. As the bytes here are provided in a coalesced packet, it is reasonable to defer the \"Get Handshake\" call until all of the bytes arrive. I know that our stack does, though this is not uniformly true. to Handshake Complete? YES! Thanks. That's going to be awkward, as it involves QUIC-layer messages, but I think we can do that. (In a separate PR though.)"} {"_id":"q-en-quicwg-base-drafts-ac84611302b620a4ad3f8e03b8ac06323d6630efb77e86a34e72ee6f86218452","text":"NAME I'll leave that in your hands.\nNAME said:\nSeems reasonable. I took the opportunity to reword a little.\nPerhaps this PR should also fix the text in 8.1 to simply state and point to 14.1 in -transport. 
That text is redundant."} {"_id":"q-en-quicwg-base-drafts-9538c938070fd8e80c46f3898e52450ab5474c4129b5b5550899a8cec7c3f691","text":"As agreed via mail.\nNAME said:\nResponse provided via email.\nThis is probably something we need a consensus call on, even though it's a change only in theory.\nReturning to an open state as the chairs directed in their plan.\nClosing this now that the IESG have approved the document(s).\nThis gets the job done, with or without the suggested rewording."} {"_id":"q-en-quicwg-base-drafts-cef7d79a8ac8606d5a3f8290687cbfe3fcb76708081313402bf3d56b26573eac","text":"This makes it clearer that the notation is copied from v1, not defined there and normatively referenced. (Note, figures aren't normative anyway.)\nÉric Vyncke said:\nIn my response to , I explain this. This ensures that the transport reference can remain informative.\nPerhaps the intro needs to be reworded to indicate that it's using some of the same conventions as the transport document, which are reproduced below?\nÉric Vyncke said:\nBarry's point (captured in ) was that one or other might need to be normative. This is a different point, suggesting that this document concretely depends on the transport draft for terminology that has normative effect. Haven't checked, I don't think that is the case and the reference can remain informative. The reason for that being that all of the terminology used in this draft is also defined in this draft.\nAny other bids?"} {"_id":"q-en-quicwg-base-drafts-e2e96f824d329c4be243d563ac5c682d4f8c536b5822fc185e614b0299fa27ff","text":"One possible resolution to ; all literals are zero-padded to full bytes.\nIANA made the following observation while creating the H3 registries: While the IANA copy of the registry uses two digits throughout, the QUIC draft is similarly inconsistent about whether literal hex values have any leading zeros. Is it worth updating our hex literals to use full bytes (, , etc.) or synchronizing on some other format?
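The convention being proposed in the hex-literal thread above, zero-padding to full bytes, is simple to mechanize; this helper is purely illustrative and not part of any draft:

```python
def format_code_point(value):
    """Render a registry code point as hex, zero-padded to full bytes,
    e.g. 0xd becomes "0x0d" and 0x1f9 becomes "0x01f9"."""
    nbytes = max(1, (value.bit_length() + 7) // 8)  # at least one byte
    return "0x{:0{}x}".format(value, 2 * nbytes)
```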
We probably shouldn't synchronize on trimming, since IANA has already skewed the other way.\nNAME did fix this or do we need to inform IANA also?\nWe should probably shoot IANA an e-mail to update the registries with leading zeroes where they hadn't already assumed them.\nI'll let you do that. The only registries that are affected are in HTTP/3 and QPACK.\nFixed both in the registry and the documents.\nYeah, this is easy to understand. The tables must have been annoying."} {"_id":"q-en-quicwg-base-drafts-72b771bdb760344b3ffef76cc70915b04ffc21529ec5d9c201ce68023952af84","text":"This one slipped the net.\nEither way is fine by me. Just ship it already :PThanks, Martin!"} {"_id":"q-en-quicwg-base-drafts-493291ddc3a6e6d13762636797fed8dc8f50f3f0fbd0dc4e9a994d115c14bddd","text":"Removed one HTTP/2 reference which seems extraneous, and corrected the polarity of another. This by doing almost nothing; speak now if you think more is warranted.\nRFC Editor says: But fair enough -- are we overdoing it?\nHaving looked at the references, they all seem reasonable with the possible exception of PUSH_PROMISE noting it behaves \"as in HTTP/2.\" Most references to HTTP/2 occur either in the first two sections of the document or in the Appendix which specifically discusses the relationship between HTTP/2 and HTTP/3. removes that one extraneous reference. Other than than, I plan to make no further changes to address this feedback unless someone wants to provide a more concrete list of places where the comparison is unnecessary.\nMarginal: Section 4.1.1 and 4.1.1.1 seems to have some egregious use of phrasing similar to \"Like HTTP/2, ...\" or \"as in HTTP/2\". You could possibly condense these to a sentence such as \"Like HTTP/2, HTTP/3 has additional considerations related to use of characters in field names, the Connect header field, and pseudo-header fields\". 
That could be inserted into the first para of 4.1.1, or the second para (thus breaking the para so the next one starts with \"Characters in field names\").\nFixed in ."} {"_id":"q-en-quicwg-base-drafts-a0105fa4bbf2cd55de58bbb174a47804fb1ce22b49e23c2f343a0299e9460aec","text":"These track corrections I've asked the RFC Editor to make directly in their copy prior to publication. It appears that some http-core sections moved since the last time we did a full sweep of our references."} {"_id":"q-en-quicwg-datagram-78f2a183353e421c73bb7270efcf9bd2322d4fba1852c417e098e54687845de2","text":"Clarify meaning of 0 in the datagram transport parameter, based on the discussion at IETF 110. This follows one of the suggestions to have 0 mean no datagrams, and > 0 mean support for datagrams. This should be a non-breaking change with existing implementations.\nThis change is very surprising. Why not take the approach that was discussed in the issue, wherein the limit describes the payload size, and presence of the transport parameter indicates extension support? It's not useful to have a bunch of ill-formed possibilities where the limit describes the frame size but is smaller than needed to encode the frame. See also URL\nNAME this is based on the WG discussion, minuted here: URL\nSo it comes down to preserving the weird semantics nobody wants because renumbering the TP is aesthetically unappealing? That seems like a shame. Unless saying \"I support it, but you can't send it\" is actually useful somehow?\nRight now, the TP specifies a maximum frame size, including frame type, length and payload. This makes certain values invalid (0, 1?). Also, since this value practically is a kind of flow control, indicating how much data I'm willing to receive at a time, it's the payload length that's important here, not the framing. For these reasons, I'm arguing to change this to specifying a maximum payload length. Then, the question of what a value of zero means.
Should a value of 0 be the same thing as not present or should it mean that only 0 length datagrams are allowed? I think it is simpler to say that a value of zero is the same as not present (i.e. disabled). (Issue copied from , by NAME on 2019-11-18)\nfrom NAME on 2019-11-18: Wasn't the legality of zero-length frames agreed upon in URL? I didn't even realize that the parameter wasn't payload size. Strongly agree that it should be.\nfrom NAME on 2019-11-19: I agree that it is simpler to define zero to mean not allowed, but it is highly confusing to be forced to require a length of one if you specifically only use 0 length datagrams for heartbeats or similar. It would be better to have a separate indicator to reject datagrams altogether.\nIsn't this indicated by omitting the transport parameter entirely?\nNAME If omitting the parameter indicates that data frames are not permitted, then this achieves the purpose, yes. So there is no need to have 0 mean disabling. Hence it is better to have 0 mean 0 length, but still valid. This is the most natural representation, and there is an actual meaningful use of these, namely heartbeats. Further, by making it explicit that 0 is the only valid length, you can have a fast or lightweight implementation.\nI wonder if we need this limit at all. It introduces implementation complexity and it's unclear why any implementation would like to limit the size of DATAGRAM frames (or payloads) they can receive. We already have for implementations that have a limit on their stack size. Does anyone have a use-case where they'd like to limit the size of DATAGRAM frames (or payloads) specifically?\nI'm all for removing the limit. As I have it coded up in MsQuic, I always send a max value. It's not even configurable by the app. I don't think \"I have a buffering limit\" is a good reason to have a limit. It's limited by a single packet already.
That should be enough IMO.\nI don't think it's particularly useful to set this smaller than a packet, given that a reasonable buffer is bigger than that, and I'm not sure what a peer could usefully do with the information that the limit is larger than a packet. :+1:\nThe more I think about it, limiting the size of the QUIC DATAGRAMS (frames or payloads) is weird. In part because there is no wire signal to describe a max number of DATAGRAMS per packet, and we seem to have discounted the need for an internal API to configure how frames are composed in a packet. I don't hear anyone with use cases; removing the limit removes a whole load of edge cases and/or implementation-defined behaviour. I think the simplification would be a win.\nNAME One reason you might want to limit Datagrams is if you have a slotted buffering system for datagrams, for example 1K, or 256 bytes per datagram, or a paged system of multiple datagrams. In this case you cannot always fit a whole packet into the buffering. For streams you have an entirely different mechanism, and other frames are generally not large.\nNAME that's an interesting thought - is this something you're planning on implementing and have a use for, or is it more of a thought experiment?\nNAME Unfortunately I've had to push back on my QUIC dev, but I did work with an emulation of netmap, on top of select / poll, that uses slots to receive packets from the OS without context switching. Sometimes a packet needs to span multiple slots, but often it just drops into a 2K slot or so. I could imagine many use cases where datagrams are redistributed to other threads or processes in a similar manner without having an explicit use case. I believe there are now several kernel interfaces that use a similar approach.\nAs far as I know, those kernel interfaces all use slots that are larger than the interface MTU, so I don't think they'd require reducing the payload size.
Am I missing something?\nWell, I can only speak for netmap, but it works by having the user specify one large memory block divided into a number of equal-sized slots of a user-specified size. Each slot has a control record with some flags. A record can indicate that the slot is partial and reference the next slot in the packet. So you can choose a slot of 64K and waste a lot of memory, or choose a slot of 512 bytes and frequently have to deal with fragmenting. Or choose 2K for a use case where you know that you will not have larger payload and can afford the space overhead. EDIT: to clarify, it is the network driver that chooses to link data and update the control record when receiving, and the user process when writing. I'm not sure exactly how the other kernel interfaces behave but from memory it is much the same. For QUIC datagrams it is a bit different in that you likely want to avoid fragmentation at all cost due to the added complexity and the non-stream nature of the data. If you could afford fragmentation it would make less sense to have a limit on the size.\nI support switching to TP specifying max payload size.\nIIRC, the original reason this limit was introduced is to allow QUIC proxies to communicate the max datagram size it gets from the MTU between itself and the backend.\nIn my experience, a MASQUE proxy doesn't know that information at the time it opens a listening QUIC connection, and there is nothing to say that all backends would share the same value.\nThis is not for MASQUE, but for a direct QUIC-to-QUIC proxy.\nOK, do you expect the max DATAGRAM frame size a backend expects to receive is smaller than the QUIC packet size it is willing to receive?\nIf we do have a max frame/payload size still defined, perhaps it makes sense to let this be updated by a new frame (MAXDATAGRAMSIZE).
In QUIC, all the MAX_FOOBAR frames can only increase limits, but here I suspect for this to be useful we'd want to be able to lower the limit which greatly increases complexity.\nI expect this to be fairly uncommon, but whenever this does happen, it would make the entire connection unreliable for datagrams, meaning the applications would have to do their own MTU discovery on top of the one QUIC already does.\nThe default ought to be \"whatever fits in the PMTU\".\nYes, you have to do PMTUD. (What is pernicious here is that if you get too good at that, the value that you resolve to might not be good forever because your varints will use more bytes over time and so reduce available space.)\nNAME which varints will use more bytes here?\nI was thinking about flow IDs. If you are not reusing flow IDs, there is a good chance that those will get bigger over time. But I guess that assumes use of flow IDs. Mostly, I guess that DATAGRAM can be each sent in their own packet/datagram, so the size of other frames won't affect the space that is available.\nYeah the flow ID is part of the payload as far as this document is concerned. Our implementation will try to coalesce DATAGRAM frames with other frames, but that doesn't impact the max possible payload size because if the DATAGRAM frame doesn't fit with other frames we just send it in its own packet."} {"_id":"q-en-quicwg-datagram-580ede1a7e5ce6a3793823a4ce3c3235ea1dc3009e74b991cd4b1285bc9ac405","text":"Some applications might want to send messages, and not care about the order of delivery, but still get reliability. QUIC streams give them reliability, but within a stream it enforces order of delivery. This datagram proposal gives unordered delivery of messages, but without reliability. Why not provide datagrams with optional (opt-in) reliability? It shouldn't be a lot of extra work for QUIC implementors to provide reliable datagrams as well as unreliable, given that retransmission is already there for QUIC streams. 
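To make the frame-size-versus-payload-size distinction above concrete, here is a small sketch (Python; the helper names are mine, not from any draft or implementation) of what a sender has to compute when the transport parameter limits the whole frame. Because the DATAGRAM length field is a QUIC varint, the frame overhead itself depends on the payload size, which is part of why commenters prefer a payload-length semantic:

```python
def varint_len(n: int) -> int:
    """Size in bytes of a QUIC variable-length integer (RFC 9000, Section 16)."""
    if n < 2**6:
        return 1
    if n < 2**14:
        return 2
    if n < 2**30:
        return 4
    return 8

def max_datagram_payload(peer_max_frame_size: int, packet_space: int) -> int:
    """Largest DATAGRAM payload sendable under frame-size semantics, where
    the transport parameter caps the whole frame (type + length + payload)
    and a value of 0 (or an absent parameter) disables datagrams entirely.
    Returns -1 if no DATAGRAM frame can be sent at all."""
    if peer_max_frame_size == 0:
        return -1
    best = -1
    # Frame sent alone in a packet: 1-byte type (0x31) + varint length + payload.
    for payload in range(packet_space + 1):
        frame = 1 + varint_len(payload) + payload
        if frame <= peer_max_frame_size and frame <= packet_space:
            best = payload
    return best

# A frame limit of 100 bytes admits only a 97-byte payload (1 type + 2 length bytes):
assert max_datagram_payload(100, 1200) == 97
```

Under the payload-length semantic argued for above, this collapses to min(peer limit, packet space minus overhead), with no ill-formed small values to special-case.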
(I guess an application could create a separate QUIC stream for each unordered reliable message. That might work, but seems ugly and may have some overhead.)\nWhat you describe can already be accomplished using streams: each “reliable datagram” is its own stream. That provides reliable delivery without ordering between independent datagrams. I don’t think we need to add optional reliability to the datagram spec.\nStreams are very lightweight, and well suited to this task. Beware applying intuition from TCP too heavily.\nSo, here's a suggestion then: since I'm sure I'm not the only person who is going to think of this, maybe add some verbiage to the spec explaining why supporting reliable datagrams is unnecessary?\nSounds good. We can keep this issue open as requesting an editorial change."} {"_id":"q-en-quicwg-datagram-43ab4f9cbafef0943723a8127b3ec8cd50cdb3e9744f400a55b175105aff4367","text":"The IANA section was added before the QUIC registries had a notion of permanent vs provisional, and we then forgot to update it. This PR fixes that oversight.\nLatest change looks good to me\nNoticed during Shepherd review. All of the core types registered in RFC 9000 simply link to the relevant document (RFC 9000) and section in the document where the protocol element is defined. In contrast, in datagram's IANA considerations section, the transport parameter and frame type registrations include the required field with some text that is already included in the document. This will manifest as a weird juxtaposition in the IANA table. I suggest you simply reference the section 3 and section 4 respectively. 
If you really want some prose (I don't recommend that) then it can be added as a note to the table.\nI noticed that when writing and I agree, we should just list \"This specification\" in there as most RFCs do.\nNoticed during the shepherd writeup of IANA considerations: The document is attempting to register: maxdatagramframe_size Transport Parameter with value 0x0020 DATAGRAM frame type with values 0x30 and 0x31 According to IANA (URL) registrations can be of different types. Here the requested values are in the bracket. So I suggest that the document state clearly that this is a request for permanent registration of these types.\nAgreed. Wrote up to fix this.\nLGTM, thanks!"} {"_id":"q-en-ratelimit-headers-f2bbcc85de84c270a3a3eba117e5e0d42f1f8f23fe1e21e2292bbf7a2f3edead","text":"See .\niirc there was a discussion on stuff. In general I'm ok with just NAME do you see any issue with that?\nNo issue, I was aware of the BCP and I think this case fits the first provision for segregating the name spaces in the Appendix B, so any one of the proposed solutions there, including yours, is fine with me."} {"_id":"q-en-resource-directory-2e77b9c04ee89eadc0d7b03c6c34158ecb0333c6c383fddd6ef7c2ca9fd92c4f","text":"This decides the open question of how to match the in a query in favor of client simplicity. Closes: URL NAME NAME would that change be OK with you and solve the issue?\nI know what an absolute form is, where do I find out what a path absolute form is\npath-absolute is an ABNF in RFC3986, described as \"begins with a '/' but not '//'\".\nI agree with the addition. This means that one can match against an absolute registration target as well.\nJim in :\nIt's a trade-off between not being a surprise to the RD implementation (with b, it needs to compare values that it has not seen in that form) versus not being a surprise to the lookup client (with a, it queries for something it does not see in that form in the query result). I think that the latter is preferable. 
Here are some more questions on the document. The registration text in section 5.3 says that the interface is idempotent, however it is not clear to me just how idempotent this is supposed to be. It is obvious that the goal is to make sure that a different resource is not created, however I am not clear what is supposed to happen with some of the parameters such as the 'lt' parameter. One way to implement this is to just make it a POST to the previously existing resource which would inherit old attributes on the endpoint. The other way is to say we are going to create a new resource; it just has the same address. It is not clear which, if any, of these two methods is to be used. Never mind, I found the answer to this one. Need to think if I want clearer language. When doing an endpoint lookup, I assume that the 'lt' parameter would be the same as the one registered at creation. Is there/should there be a parameter which is a TTL value? How are href comparisons done? a) compare against what was registered b) Resolve what was registered and the href in the current context and compare c) something I did not think of. Compare on group for remote endpoint. Should this do the remote lookup or can it just be ignored for attribute comparison? Value of lt = should it be the set parameter or the time left - how would time left be represented otherwise - should it be represented? Inferred context based on source IP when an update is done. If no con is provided, is that supposed to be checked again to see if it is \"right\"? Only if we did the infer to begin with? I don't understand the presence of the content-format for the empty message posted for simple registration. This seems to be odd if the content is empty. 8 Is there supposed to be a way for simple registration to return an error message? As written this is not necessarily done.
Specifically, the response to the post can be executed before the query to /.well-known/core is done and the registration is attempted."} {"_id":"q-en-resource-directory-5170a10cc5e69740df297d7b7f53d31e6eaa2b9def5a8bff05398b2eee82a74a","text":"These should with exception as described there.\nMerging for lack of objections and as it makes further development easier.\nThe editorial points of : (plus from \"mostly mandatory\" line: find better term for \"optional\" and \"mostly mandatory\") I'm in the process of working them in, they'll come in a single PR.\nNote to self: The \"web interfaces\" got lost somewhere: Dear authors, below are some of the comments to the draft, they are mostly editorials with a few that may warrant further discussion. p1 - \"Web interfaces\" is often used interchangeably with \"REST interfaces\"; it'd be better to use one only, maybe the latter. p3 - \"with limited RAM and ROM\" Adding a reference to the expected RAM and ROM would be good. URL provides an estimation (table omitted here). p22 - Endpoint Name \"mostly mandatory\" is ambiguous. If the ep name is not needed to register to an RD then it is optional. If, on the other hand, the RD needs an ep name and assigns one then it is mandatory ( btw if the RD assigns the ep name, shouldn't it return that information to the ep, together with the location within the rd, once the registration is successful? ) Github Issue 190 (below) raises the need to define a bit better what the ep name is supposed to contain. To me, we could keep it mandatory and add some guidelines for its creation, in OMA for example they are defined as an URN URL p22 - URL \"All we know from the draft is that it is ≤ 63 bytes. But we need to say that it is a UTF-8 string. Not entirely sure we want as an endpoint name either.
Same for sector.\" p23 - Sorry for asking, but why is base URI only mandatory when filled by the Commissioning Tool? Wouldn't it be better to make it always mandatory to avoid the differentiation? p26 - Section 5.3.1 seems like it could use a bit of reordering, I can explain this in another email or call with the authors. p30 - lt, base and extra-attrs were already explained in 5.3; you might want to keep it as it is or change it by adding a reference to 5.3 and comment on the changes that the Registration Update has over the Registration. p33 - As per the document structure, there are two interfaces: registration and lookup. Shouldn't they be at the same level in the document? Registration is 5.3, shouldn't lookup be 5.5 instead of 6? p41 p42 - Just to clarify, is then the tuple [endpoint,sector] the way RD identifies endpoints? Or is it derived from the device's credentials? \"A given endpoint that registers itself, needs to prove its possession of its unique (endpoint name, sector) value pair.\" - \"An Endpoint (name, sector) pair is unique within the set of endpoints registered by the RD. An Endpoint MUST NOT be identified by its protocol, port or IP address as these may change over the lifetime of an Endpoint.\" p44 - typo address/Address"} {"_id":"q-en-resource-directory-4d10181c7312464a76da39ce42fd5a4a54b614373e00667da86cf2c2f831ee1c","text":"Closes: URL\nIn the lookup filtering description, we talk about how href can be queried for (that it only applies in the URI form), but say nothing about anchor (to which the same applies in principle, with slightly different statements about application to the endpoints). Thanks Christer for pointing that out.
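The resolve-then-compare option raised earlier for href comparisons could be sketched like this (Python; the link representation and helper are invented for illustration, not from any RD implementation). The registered value, possibly path-absolute, is resolved against the registration base, so a lookup query in absolute form matches as well as the form the endpoint originally registered:

```python
from urllib.parse import urljoin, uses_netloc, uses_relative

# urljoin only resolves relative references for schemes it knows about,
# so teach it the coap scheme first (a common stdlib workaround).
uses_relative.append("coap")
uses_netloc.append("coap")

def link_matches(base: str, link: dict, param: str, query_value: str) -> bool:
    """Match an ?href= or ?anchor= lookup filter against a registered link.

    The registered value (e.g. '/sensors/temp') is also compared in its
    form resolved against the registration base, so the absolute form
    'coap://[2001:db8::1]/sensors/temp' matches too.
    """
    registered = link.get(param)
    if registered is None:
        return False
    return query_value in (registered, urljoin(base, registered))

link = {"href": "/sensors/temp", "anchor": "/sensors"}
base = "coap://[2001:db8::1]"
assert link_matches(base, link, "href", "/sensors/temp")
assert link_matches(base, link, "href", "coap://[2001:db8::1]/sensors/temp")
```

This is the lookup-client-friendly choice: the client can query with whatever form it holds, and the RD does the resolution work.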
As RFC6690 contains a lookup example for ?anchor=, we may want to add one here as well."} {"_id":"q-en-resource-directory-cc20d9211c4d5bc442bc75087d662fe98f1b352b428bf343d70cf1fe28cf5a0c","text":"Something around \"whether many registrations are expected over time\" is supposed to be said according to BCP26, but it seems this belongs in the shepherd write-up, where it is now being added. To be merged right away."} {"_id":"q-en-resource-directory-d903532232fda5d3faef0e4c689340c93bf59f4a2157b722d9df8772c79ae4e7","text":"Part of the responses. Will merge (and upload as part of a -26) if there is an ACK from authors or chairs.\nBefore the 2020-08-13 telechat, some reviews have come in: : Not Ready Minor: Clarification on \"subject\" -- my intention was to include anything \"subject\", but I don't really know what else to put in there. Minor: Why is the first attempt special? -- Can clarify, PR to come. [edit: ] Nits addressed in URL More nits (IDnits) do not require editing, but some explanation -- where would that go? Shepherd writeup? (that would be → NAME : Ready Notes that the DDOS mitigation section is a bit history heavy. I had similar comment in URL; should we use that comment to shorten here? [edit: If so, would want an ACK.] \"It is my impression, that Security Considerations were mostly written having in mind that (D)TLS is always used, however it is only \"SHOULD\" in this draft (or even \"MAY\" if we look at RFC6690 which Security Considerations this draft refers to). I think that adding a few words describing which consequences for security not using (D)TLS would have and in which cases it is allowed will make the Security Considerations more consistent. 
\" -- [edit: covered in ] I'll issue PRs for what I think we can do something about quickly, but won't merge them or create a -26 without an ACK on them (given we're late in the whole publication stage and my impression is that ill-coordinated uploads do more harm than good) and will follow up on this comment when they're ready for review.\nWith the additional reviews that have come in today, I see no point in addressing some of what has come in in a pre-telechat update (as we can't resolve all the discuss points) -- taking the urgency off this. I'll still file PRs for changes resulting from the reviewer comments. Procedurally (so this goes to NAME or NAME Is the general expectation with IESG comments that the text be further improved to answer all the points (especially when they are comments), or can a plain \"we did it that way because\" suffice? In the latter case, would it make sense to collect point-to-point responses here so they get author or WG review before \"we\" answer them?\nWith URL and its follow-ups sent out, this is closed; any further comments will be opened as new issues. Hello Reviewers, CoRE and IESG members, thanks for your input to the Resource Directory document, and your patience while it was being processed. Before the individual point to point responses can go out, this mail is to address the general status, and answer a few points that were brought up by different reviewers (it will be referred back to in the point to point responses). In summary, the points raised in the reviews led to the changes of the recently uploaded [-26], and where no changes were made, the reasoning behind the choices in the document is outlined below or in the following mails. (Either way, short of aggregated nits paragraphs, all points are responded to in the following mails). A few larger issues evolved from the review comments being discussed in the interim meetings, so please expect a -27 at a later point (after IETF109) with the changes mentioned below. 
Kind regards Christian [-26]: URL Preface References into the issue tracker are primarily intended for bookkeeping and to give you access to the altered text if you're curious -- but for general review purposes, the responses are self-contained. Anticipated changes We have opted to submit a new version that should clear out all the other comments, so that the remaining work can be focused and swift: Replay and Freshness (OPEN-REPLAY-FRESHNESS) The comment on DTLS replay made us reevaluate the security properties of operations on the registration resource. While these are successively harder going from DTLS without replay protection to DTLS with replay protection and hardest with OSCORE, the text as it is is still vulnerable to an attacker stopping a reoccurring registration, or undoing alterations in registration attributes. Mitigation from a concurrent document (the Echo mechanism of I-D.ietf-core-echo-request-tag in event freshness mode) is planned, but the text is not complete yet. Necessity of the value in all responses The requirements around Limited Link Format and the necessity to express all looked up links in fully resolved form have stemmed from different ways RFC6690 has been read, both in the IETF community and by implementers. A comment from Esko Dijk has shown a clear error in one of the readings. That reading is single-handedly responsible for most of the overhead in the wire format (savings can go up to 50%), which is of high importance to the CoRE field. Due to that, it is the WG's and authors' intention to remove the reading from the set of interpretations that led to the rules set forth. The change will not render any existing RD server implementation incompatible (older RDs would merely be needlessly verbose), and the known implementations of Link-Format parsers can easily be updated. 
The updates to the document could not be completed in time with the rest of the changes, because a) it requires revisiting the known implementations for confirmation and b) will affect almost every example that contains lookup results. Server authorization (OPEN-SERVER) The two areas of \"how is the RD discovery secured\" and \"what do applications based on the RD need to consider\" jointly opened up new questions about server authorization and the linkage between a client's intention and the server's promises that may easily not conclude in a timeframe reasonable for RD. While steps have been taken to address the issue both on the RD discovery and the applications side (URL), the ongoing discussions in the working group may turn up a less cumbersome phrasing for these parts. There are a few small open issues Esko brought up on the issue tracker that could not be addressed in time; most of them are about enhancing the examples, which best happens after the anchor cleanup anyway. Repeated topics A few common topics came up in multiple reviews, and are addressed in the following section; the individual responses can be found in follow-up mails, and refer to the common topics as necessary. GENERIC-FFxxDB \"Where do the ff35:30:2001:... addresses come from?\" response: There's a coverage gap between RFC3849 (defining example unicast addresses) and RFC3306 (defining unicast-prefix-based multicast addresses). For lack of an agreed-on example multicast address, the used ones were created by applying the unicast-prefix derivation process to the example address. The results (ff35:30:2001:db8::1) should be intuitively recognizable both as a multicast address and as an example address. The generated address did, however, fail to follow the guidance of RFC 3307 and set the first bit of the Group ID.
Consequently, the addresses were changed from ff35:30:2001:db8::x to ff35:30:2001:db8::8000:x, which should finally be a legal choice of a group address for anyone operating the 2001:db8:: site. (The changes can be viewed at URL). GENERIC-6MAN \"Was this discussed with 6MAN?\" response: Not yet; a request for feedback was sent out recently. (Mail at URL). GENERIC-WHOPICKSMODEL \"How does the client know which security policies the RD supports?\" response: The client can only expect any level of trustworthiness if there is a claim to that in the RD's credentials. Typically, though, that claim will not be encoded there, but implied. For example, when configuring an application that relies on authenticated endpoint names, then telling the application to use the RD authenticated via PKI as URL should only be done if the configurator is sure that URL will do the required checks on endpoint names. Some clarification has been added in URL GENERIC-SUBJECT \"Section 7.1 says: '... can be transported in the subject.' -- where precisely?\" response: That text was only meant to illustrate to the reader what such a policy could contain, not to set one (which would require more precision). It has been made more explicit that this is merely setting the task for the policies. As an example, some more precision is given by referring to SubjectAltName dNSName entries. Changes in URL (If you have read up to this point, please continue reading the follow-up mails addressing the individual reviewer's points.)
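The unicast-prefix derivation mentioned in the GENERIC-FFxxDB response can be reproduced mechanically. This sketch (Python, standard library only; the function name is mine) builds an RFC 3306 unicast-prefix-based multicast address from a prefix, a scope, and an RFC 3307-conformant group ID with the high bit set, yielding the corrected example addresses:

```python
import ipaddress

def unicast_prefix_based_mcast(prefix: str, scope: int, group_id: int) -> ipaddress.IPv6Address:
    """Derive an RFC 3306 unicast-prefix-based IPv6 multicast address.

    Layout (128 bits): ff | flgs=3 (P=1, T=1) | scop | 8 reserved bits |
    8-bit plen | 64-bit network prefix | 32-bit group ID.
    """
    net = ipaddress.IPv6Network(prefix)
    assert net.prefixlen <= 64 and 0 <= group_id < 2**32
    addr = (0xFF3 << 116) | (scope << 112) | (net.prefixlen << 96) \
         | ((int(net.network_address) >> 64) << 32) | group_id
    return ipaddress.IPv6Address(addr)

# Scope 5 (site-local), a /48 of the documentation prefix, and a group ID
# with the RFC 3307 high bit set give the address form used in the examples:
assert str(unicast_prefix_based_mcast("2001:db8::/48", 5, 0x8000_0001)) \
    == "ff35:30:2001:db8::8000:1"
```

The second 16-bit group, 0x0030, encodes the reserved byte and plen=48; the earlier ff35:30:2001:db8::x form differed only in leaving the Group ID's first bit clear.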
(I do have some suggestions for improvements to the specific wording in the section-by-section comments, but on the whole I like it.) It does lead in to a more general comment, though (which is not exactly new to the changes in the -26 though it is reflected in several of them): some aspects of the concept of \"authorization\" to appear in a directory, and the verification of authorization for the entire flow from client locating the RD server to final request at the discovered resource, are somewhat subtle. (The new text discusses, e.g., \"repeat the URI discovery step at the /.well-known/core resource of the indicated host to obtain the resource type information from an authorized source\" and \"a straightforward way to verify [discovered information] is to request it again from an authorized server, typically the one that hosts the target resource\".) The aspects I'm concerned about are not matters of a resource server being authorized to publish to a directory or a client deeming that a server is authorized to act as an RD (though both of those are also important), but seem rather just to relate to the RD faithfully disseminating the information the RD retrieved from /.well-known/core discovery on the given server(s) (or otherwise learned via some other mechanism). In this sense the RD is just acting as a cache or efficiency aid, but the authoritative source of the information remains (in general) the server hosting the resource. To be clear, I think the procedures that this document specifies are good and correct procedures for a client to perform, I'm just not sure whether \"authorized\" is the right term in these cases; \"authoritative\" might be more appropriate, in that the operation in question is to verify the information retrieved from the RD (which acts in some sense as an intermediary) at the more authoritative source.
(On a tangent from that note, it seems that there is not necessarily any indication to a client that a server claiming to provide a given resource (whether discovered via RD or /.well-known/core) is actually authorized to provide such a resource and be trusted by the client. But that's not really a property of RD, and rather the underlying CoRE mechanism and any provisioned trust database, so my thoughts about it seem out of scope for this document.) (It's not entirely clear to me what the reader is expected to do with the information about the previous formulation.) (It seems that something also needs to cause the RD to check the authorization of lookup clients to receive the information in question, which might be worth reiterating in this section.) I'm pretty sure the intent here is that only the subject names that are certified by the recognized authority(ies) are stored, in which case I'd suggest to s/all those/all such/. (I can understand why the \"store the whole list\" logic is used, though there may be some subtleties about name constraints on the (X.509) CAs used, which is why we typically don't recomment treating the entire list of names in the certificate as validated from a single operation, instead preferring \"is this certificate authenticated for this specific name?\" when the specific target name is available.) Should we talk about whether the lifetime of registration entries is related to the various lifetimes of these credentials/identifiers? DTLS without resumption is perhaps an interesting case, in that keeping the DTLS association \"live\" just to be able to update the registration information seems impractical, so perhaps some fixed cap would be applied on the lifetime in that case. If resumption is used, the resumption information (ticket or session state) has some finite lifetime that could be used to bound the lifetime of the registration. 
IIRC an OSCORE security context can also have an expiration time; certificates and JWT/CWT have expiration time as well (though the procedures in the first bullet point would effectively allow a \"renewal\" operation), but the validity period (if any) of a raw public key would have to be provisioned along with that key (or just have RPK be handled akin to DTLS without resumption, I guess). (Similar to my earlier parenthetical comment, the requirement for exact match of all subject name information is unusual. In effect this is a way of pinning the set of names that the CA has chosen to include in a single certificate, which is arguably a stronger requirement than needed but should not lead to any unauthorized access.)"} {"_id":"q-en-resource-directory-f1695f2b396e6b11bcc14aaaacd12e0e521e816f58598ad0c1abd2ee0afe2b7d","text":"... as promised for yesterday (oops)\nad \"what is sub\": it's DNS-SD's syntax for subtypes (RFC6763 Section 7.1). Might also be URL (\"Subtype strings are not required to begin with an underscore, though they often do\").\nLooks good to me.\nThere are several ways specified for finding a host serving an RD: static (unicast or anycast, addresses or DNS? could contain full RD path (as in LWM2M)?) various neighbor discovery derived RDAO the border router (ABRO) \"other ND options that happen to point to servers (such as RDNSS)\" might-be-defined DHCPv6 options multicast (all-coap-nodes + .well-known/core; serves the path by design) There is an open issue to describe how this can be used with DNS-SD (), and it came up 2017-04-07 that discovery via mDNS by those mechanisms might be possible but is not advisable. Can we give a set of discovery options that should be supported by all general-purpose clients and servers to avoid fragmentation where one vendor's clients expect an RDAO while the other vendor's server relies on clients doing a multicast? 
Can we say anything about which options are recommended to use, and which are rather to be viewed as shortcuts in extremely constrained cases? Using RDNSS would be a candidate for such a statement. Maybe those cases are better described as \"explicitly configured to use whichever RDNSS is announced\", so this is something that's better than static configuration but still not to be expected from a generic client.\nThe new section 4 introduced in URL takes care of this."} {"_id":"q-en-resource-directory-7a1b30a439791aea01f74f10e06ba3cf94bc22d6cf656fb19f25f59519496094","text":"As planned in the last authors' meeting: et should become just one of the registrable endpoint attributes (registry is already being prepared as RD parameter registry, but needs some work) lookup should allow using any of those registered attributes lookup should define how endpoint attributes are used in resource lookups (after all, should be possible even though none of the resource links actually carry that attribute) This PR is not final as it is, but will track my branch as I'm assembling (and possibly rewriting/rebasing) commits for this change.\net is not used by Fairhair, but that should not influence the text, I think\nThe bulk of the announced changes is now in the pull request; I'll go through it one more time myself before I merge it into master. (I might still add a big reshuffling change to the lookup interface, because the text has grown in the past by always inserting paragraphs, and they're not necessarily in a logical overall sequence.)\nNAME I only just now merged the current state of the branch into master, the rebuild just completed.\nSee-Also: Some of what could be the review of this PR has come up in URL\nI don't see any reason to keep this section as pointed out during the last telco. When it is removed some examples i Registration update must be changed. 
The request GET /rd/4521 should change to GET /rd-lookup/ep?href=rd/4521
Apologies, that will not return the links of the endpoint The request GET /rd/4521 should change to GET /rd-lookup/res?ep=/rd/4521 or GET /rd-lookup/res?ep=node1 I am afraid we need to stay with the ep name; it is difficult to use /rd/4521 as the ep identifier.\nThe equivalent requests would be GET /rd-lookup/res?ep=node1 and GET /rd-lookup/res?href=/rd/4521 (where the link has neither the ep=node1 nor the href=/rd/4521 link parameter, but its registration does).\nI think that we don't need the GET as long as there are no PATCHes (and I recently saw we don't even define PUT for the registration resource). The discussion has shown that this is one of the points where our understanding of resolved and unresolved addresses is tested: Can we have a resource whose content is \"unresolved link-format\" to operate on? Or do we need to resolve the links even in there, and any PATCH format would need to work semantically from a shared understanding of both the RD and the client? NAME you wanted to meditate on that; have you found deeper truth, or a path there?\nThis is weird indeed: In my understanding this means return all link resource paths with ep name node1, and GET /rd-lookup/res?href=skins/larry/rd/4521 means return all link resource paths whose href is skins/larry/rd/4521, and will be empty. chrysn wrote on 2017-10-30 13:03:\nreturns resources that either have as their own link href, or are contained in an endpoint or group with that href. This was introduced in because we do not have (due to the extensibility of the link attributes and the lack of a definitive registry) a partition on attribute names that clearly states that one attribute can only be on a group, another only on an endpoint and a third only on a resource. Thus, we need to expect matches from either level -- and once we already have to do this, we can just as well do it with href because it's treated like any other attribute. 
IMO the alternatives to this would be either having registries set up to partition the names so every query parameter is clearly an endpoint, group or resource attribute (probably not going to happen with resource attributes, as requested and denied in RFC5988bis), or we'd need a more complex lookup language than RFC6690-style search -- something like so we could be explicit there. (See also URL )\nOk, you go faster than my shadow. As long as the text covers the example, and the intention is clear. I will need to read the finished text again. chrysn wrote on 2017-10-30 16:29:\n(Moving some comments here from EMail to keep track:) [Peter] I thought we agreed to remove section 5.4.3, [Christian] I had that taken down as \"we should think about it some more\"; I do expect that interface to only become useful when we have a way to patch it.\nGiven the current direction of lookup with construction of absolute URIs, we can leave section 5.4.3 in.\nThe \"RD Parameters\" subregistry is set up in the IANA considerations, but only referenced at the registration interface. The lookup interface allows lookup by parameters of the same names (and page and count are exclusively used for lookup), but doesn't make a reference to the IANA registry. Next steps (→CA): Add a reference to the lookup interface description (and, reviewing that text, decide on whether that does create linkage to any other existing registry)."} {"_id":"q-en-resource-directory-70e9fc49ac54fbcb5835f08e434fec88033a902d998daac4e6be8fb85b998f3c","text":"This allows registration without endpoint names, which is important for clients that only keep their absolutely essential cryptographic state (see URL, which this closes). 
The wording in simple registration was changed to not repeat the individual parameters of registration but to only refer there.\nCurrently, we prescribe that the registering endpoint needs to identify itself with at least an endpoint name, while we allow the RD to be configured to assign a domain based on its configuration. We could allow the endpoint identifier to be absent as well, at least in principle (eg. make it a \"SHOULD be set\" but no syntactical requirement on its presence), and the RD's configuration then would need to set an endpoint name (like it can now do with a domain), or the RD would refuse the registration like it can now do for unauthenticated hosts. (The RD interface right now doesn't allow a 4.01 Unauthorized response, even though the security considerations indicate that that can happen.) This would allow endpoints that have just enough persistent state to have their cryptographic identity in form of a PSK to register at the RD. Note that for those cases (and PSK seems not to be too uncommon in the CoRE security scenarios), the RD already needs to have received (or have a way to fetch) configuration about that key, which easily can contain a default (or the only permissible) endpoint name for that key. I'd write a pull request to that respect unless controversial discussion about this starts here. (This was fleshed out from in a discussion with Pekka Nikander during ACM ICN about very constrained devices).\nFirst, domain is optional. The RD should not invent a domain for registrations that did not include a domain parameter. 
If the RD sets the endpoint name, then a client should be able to learn it from the registration resource, which will then depend on the implementation of read from the rd resource returning the registration parameters in the link for the registration resource, and/or the inclusion of a \"self\" link in the registration resource that includes \"ep\" as a link attribute.\nCurrently, we say that \"When this parameter is elided, the RD MAY associate the endpoint with a configured default domain.\" (And it doesn't say that that default would be the absent domain). It'd need to learn it from the registration resource's metadata (because the registration resource contains right what the host posted, and a self-link would be semantically ambiguous there), but it can be looked up like that: POST /rd: ;rt=\"temp\" Created, Location: /registration/42 GET /rd-lookup/ep?href=/registration/42 Content: ;con=\"coaps://your-address\";ep=\"what-i-decided\" I doubt that those nodes would be really interested in doing that (I reckon they'd even tend to do simple registrations which won't even tell them their registration resource), but they can. Best regards Christian\nTo add to my example, the client could just as well have asked to get the same result, although I'd recommend that a client in that situation should use the query instead. NAME would that lookup interface address your concerns? Unless not, I'd go ahead and write a patch that allows such behavior (but makes it clear that this can only be used where explicitly configured or allowed in a given setting).\nThe changes in implement the proposed. With many things now using registration resources rather than names (cf. ), I frankly don't see a reason to mandate names anywhere (Why would a group need a name? 
The operations don't require it, and we sure can store one if the user desires; same for the end point…), but I won't stir that any further (we should finish rather than starting such discussions; I'd be game for it, but milestones says hurry), as just allowing the name to be configured is sufficient for the ACM ICN use case."} {"_id":"q-en-rfc-censorship-tech-8f035ac013554c63c3b5f7be8c363071661ff3bf7277ce8d867fa3154339cd3f","text":"From Stephane:\nShouldn't the CA / cert hierarchy be a point of control? For example, KZ is requiring users to install their CA, to allow them to MITM all traffic: URL Another example of CA abuse is ROA whacking, RPKI manipulation, etc.: URL\nAlso got this feedback from Will Scott:"} {"_id":"q-en-rfc7807bis-62cfd15a20e1c404ee38a358939f9c21c786b3a27c0b0278f6e6e8c2423c6715","text":"NAME done; see change. P.S. GitHub lets you make suggestions using the little +/- icon in the inline comment UX :)\nPulling this conversation out of : It's clear that there's demand for error codes scoped to the API, without creating documentation pages for each code and linking to them in the field. There are many ways that implementers could do this, from linking to non-existent URLs to URNs to tags URIs, etc. It would be useful if the specification included examples of this, and made a clear recommendation for this case.\nWe (well, mostly NAME have made some good progress on the other issues raised out of but we haven't looked at this one yet. As strawman to get this moving, how about: we register a URN namespace id, and we recommend that non-dereferenceable URIs should be URNs using that, namespaced with an appropriate domain and raw type id, for example: . Would that work? It would give us a clear solution to recommend, which is explicit but easy to read, with minimal extra complexity or overhead for implementers (imo). 
I don't have examples, but I suspect there are other standards elsewhere that would appreciate having a problem id namespace for URNs as well. I don't love using a domain for namespacing a non-locator, but I think some namespacing is preferable, and I'm not sure what other namespaces would be suitable. Alternatives very welcome :smile:\nWe penciled in the URI prefix for registered types as \"URL\" in , but that was not finalised. I think registering one or more problems as exemplars is a good path forward for this and other issues (e.g., as discussed in ). We need to be very careful to set a good example in them, which means we need good candidates; I'm thinking 'JSON Validation Error' and 'XML Validation Error' might be appropriate (perhaps along with corresponding wellformedness errors). If we choose a urn, I suspect there will be considerable pressure to use or something similar, as per .\nmy understanding of the \"urn:ietf:params\" namespace is that it is meant to be used for parameters managed/defined by IETF specs. i think the original suggestion's intent was to have a \"safe prefix\" for a namespace which then would be \"unmanaged\" with some conventions around that, such as using domain names. i think the idea of giving examples (and even a suggested namespace along with conventions for using it) is a good one. that way we can clearly support both scenarios: if you're planning on making documentation available, use a resolvable URI but keep in mind that this should best be managed in a way so that it works for a good number of years. 
if you're planning on just identifying problem types, use a non-resolvable URI, and here's a recipe for doing it.\n:+1: exactly I think is relevant in that it would be helpful to everybody if we can follow our own recommendations in our own problem id registry, but it's otherwise independent.\nthe difficulty may be that IETF has an institutional approach for managing parameters (urn:ietf:params:http-problems mentioned by NAME above), whereas most orgs minting problem types probably won't have one and won't create one. which may lead us to saying three things about what kind of URI to use: check out common problem types registered under urn:ietf:params:http-problems and consider reusing one if it fits. use URIs when creating dereferenceable problem type identifiers. when using non-dereferenceable problem type identifiers, be clear about that by using non-dereferenceable URIs, and here's a URN-based recipe for how to do that. 
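The URN-based recipe discussed in the thread above could be sketched roughly as follows. Note that the `urn:example:problem-type` prefix is a made-up placeholder (no such namespace is registered), and the domain-based namespacing convention is only the suggestion from this discussion, not anything the spec currently mandates.

```python
# Illustrative sketch of the "non-dereferenceable problem type" recipe
# discussed above.  "urn:example:problem-type" is a hypothetical prefix,
# NOT a registered URN namespace.

def problem_type_urn(domain: str, type_id: str) -> str:
    """Mint a non-resolvable problem-type identifier, namespaced by a
    domain that the minting organization controls."""
    if not domain or not type_id:
        raise ValueError("a namespacing domain and a type id are required")
    # Lower-case both parts so identifier comparison is byte-for-byte stable.
    return f"urn:example:problem-type:{domain.lower()}:{type_id.lower()}"

def is_dereferenceable(uri: str) -> bool:
    """A cheap scheme check lets clients decide whether a problem type
    URI is meant to be fetched for human-readable documentation."""
    return uri.startswith(("http://", "https://"))

print(problem_type_urn("example.com", "out-of-credit"))
# urn:example:problem-type:example.com:out-of-credit
```

Clients would treat such URNs purely as opaque comparison keys, falling back to the problem object's `title` for display, which matches the "identify, don't resolve" intent above.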
the main point seems to be: the spec should mention both \"common types\" which may even get a registry, and \"private types\" which probably are using a different namespace, and we could suggest one pattern for doing that."} {"_id":"q-en-rfc8447bis-c97cd9dec30a8c9b9459429f9a7c034b0bfa0414dc4367247a4add0af0576887","text":"Tweaks to add the D values, explanations about D, etc.\nWe might want to check that the subsequent registries all have an appropriate value in the column."} {"_id":"q-en-rtcweb-overview-bba83896596f671c20373486e41abb34e8dcaeb58a57e71a5fd190817e44aa70","text":"Tweaks to future proof document.\nSpencer had the following comment: I found the reference to MMUSIC WG in When a new codec is specified, and the SDP for the new codec is specified in the MMUSIC WG, no other standardization should be required for it to be possible to use that in the web browsers. to be odd. MMUSIC may be around forever, but this work might be refactored at some point in the future. Is the point that When SDP for a new codec is specified, no other standardization should be required for it to be used in the web browsers. Or is there another way to say this? I'm also wondering if the statement is true for any WebRTC endpoint, not just browsers."} {"_id":"q-en-rtcweb-transport-6cf9162643eb78082e4c92fdb7bf1dd226812eb3f5253ff68fae0cc56c53d450","text":"From Ben Campbell's review:\nSecond point is fixed by . I'm not sure the third point is an improvement.\nSSL is used exactly once, as part of the acronym \"TURN/SSL\". This is also the first mention of TURN, but the second was much more convenient to attach the expansion to. 
Given that it was so awkward to attach them, I included a new section listing the \"used protocols\" instead."} {"_id":"q-en-rtcweb-transport-00b89b3a8bd67d3932ffc86c9dc2af6a3a13f2107bb8204c6f6f012e8bc089e0","text":"As described in URL\nIn the Dallas presentation on “transports” you stated that the reference to proxy authentication would be restored (See URL) but it does not seem to have happened. This is an oversight. (Noted by Andy Hutton)"} {"_id":"q-en-security-arch-7c83fcc1ae3544f0966b499e402686d89bbb0f1e1a3dd0e231a8fb82073c5926","text":"I've found a few typos and fixed some sentences that didn't quite make sense. I don't think the meaning has been changed in any significant way."} {"_id":"q-en-security-arch-3c48a6716e405a7123d90a0b36030e19824788fe3d0468037594358c0b0a6a1a","text":"Pretty sure we'll get a GEN-ART reviewing say to not refer to the WG by name because it will get closed at some point. I basically stole the intro from overview and modified it slightly."} {"_id":"q-en-senml-spec-9535c44ac31f473216191e92083d4557c8d1d592eded0620efd750357ab429d6","text":"As per Adam's comment, clarify that only the element() scheme of the XPointer Framework is supported (as that is the one that is required by RFC 7303).\nLGTM"} {"_id":"q-en-senml-spec-5f5a4efe2d5a9e8930728300d6bcc89e4d97671cf9bcfec772ca75854e454f15","text":"One possible way to address the problem with positive relative times\nNAME Fixed\nLooks like two of you are still working details of phrasing but I like where it is going and all looks about right to me."} {"_id":"q-en-senml-spec-41245364f3c3ec8102ef7f053704318112b537b76e0be0c2216d2ebaf12d7a8f","text":"The link was missing from schema. After this is merged, all the example need to be regenerated.\nGiven we want the l to have a defined integer value in the CBOR table, I think a good approach is define the l attribute as a string here then say what goes in it in the other draft that defines link. 
This is also good for EXI."} {"_id":"q-en-senml-spec-25ee63c1640e092426f1b45daa2d2a2fc0832cc1bcb1b24bb86a644977673756","text":"Fixes URL\nThis makes some of the examples very long. I preferred the presentation where all attributes of a Record are on a single line, when feasible. Especially for the \"multiple data points\" examples. Anyway, Michael's comment was about indentation, which I agree should be (and is now) fixed.\nSee email from Michael Koster \"Formatting of JSON examples in SenML\""} {"_id":"q-en-senml-spec-19f841a7cc9cb70cefa2f0c0b3b6cb4bdb5e68b3dc6d9829c377a09016a00581","text":"Set of small editorial tweaks and aligned fragment text with new way of positioning base values\nLGTM ( other than we need to decide if we use field or attribute but we can do that as separate PR)"} {"_id":"q-en-senml-spec-a475ff1079bb82b4b7b76115bf5e02c821585d73df47121ba8fe5465e2744623","text":"100 bytes? (Here again I'd remove the text about mesh networks.) mesh networks result in about 80 bytes of usable payload per packet without fragmentation. How about we say keeping it small is good and quote examples of 802.15.4 has about 80 byte payload and that LoRa has 51 byte of applications payload in long range mode and just remove any reference to \"mesh\"\nI'm fine with suggestion above. 
I'm also fine with just leaving the draft as is.\nAddressed by PR"} {"_id":"q-en-senml-spec-12aec5463c17f41b292054d208e5b0b0123dede44719d32be50063806d43fb5c","text":"Dave's comments: The issue is that neither of the above have an asterisk by them, so both are recommended and per Carsten’s definition of “units” during the meeting (i.e., where the reference point is separate from the units), then Kelvin and Celsius have the same units as they only vary by what the reference point is."} {"_id":"q-en-sframe-78cb82e27b77edbfed0f03da4df946efe9b0c138232e6ea4510c1cb08140ee8","text":"LGTM\nI have a doubt about why an SRC for the CTR is needed at all, and whether a global counter for all streams would be equally secure in terms of forming the IV, or whether it was due to implementation details, to be able to have a different context for each stream?\nBecause the key is used to encrypt all outgoing streams, we need a global counter for the IV to avoid counter reuse. If we use the frame counter only, then each stream will have its own counter and IVs will collide\nIf the frame counter is global for the participant instead of per stream, i.e. increased whenever a new frame is encoded regardless of the media stream, there would be no IV collision.\nCorrect. But I imagine this will be more complex to implement, having a single global counter across all media sources including audio and video\nPrepared a PR to change it to a global counter; however, I want to make sure I understand how the counter is maintained globally. If frame N is encrypted in 3 different layers, they will have different counter values, right? One potential benefit of having a global counter is that it will free up some bits in the header which we can use as a keyId (up to 16 keys), which is nice\nYes, each call to the frame encryptor would atomically increment the counter. In the case of simulcast, the same input image will produce N different encoded frames, each of which would have its own unique counter. 
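The per-sender global counter described in the exchange above can be sketched as follows. The 12-byte IV size and the salt-XOR construction are illustrative assumptions, not the draft's exact layout; the point is only that one atomically incremented counter shared by all outgoing streams prevents IV reuse under a single key.

```python
import itertools

class FrameEncryptorContext:
    """Toy model of the scheme discussed above: one key per sender and one
    counter shared by every outgoing stream, so no two frames (audio,
    video, or simulcast layers) can reuse an IV under the same key.
    IV length and salt XOR are assumptions for illustration only."""

    IV_LEN = 12

    def __init__(self, salt: bytes):
        if len(salt) != self.IV_LEN:
            raise ValueError("salt must match the IV length")
        self.salt = salt
        self._counter = itertools.count()  # global across all media sources

    def next_iv(self) -> bytes:
        # Each encrypted frame consumes one counter value, whichever
        # stream or simulcast layer it belongs to.
        ctr = next(self._counter).to_bytes(self.IV_LEN, "big")
        return bytes(s ^ c for s, c in zip(self.salt, ctr))

ctx = FrameEncryptorContext(salt=bytes(12))
ivs = [ctx.next_iv() for _ in range(4)]  # e.g. audio + 3 simulcast layers
assert len(set(ivs)) == 4  # every frame gets a distinct IV
```

With a per-stream counter, two streams at the same frame index would derive the same IV from the same key; making the counter per-sender removes that collision at the cost of the atomic increment noted above.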
Regarding the free bits, I was thinking of something like this: If x=0 then K is the key id (0-7); if x=1 then K is the length of the key that is inserted before the This would allow avoiding the extra byte for conferences with fewer than 9 participants. Another option would be to always use the KLEN/KEYID and reserve that bit for extensions in the future.\nI like it :-)"} {"_id":"q-en-sframe-33e39c12b7307afd7c74aa976fc10c7c6152caeca45c3306bcab39000d67adec","text":"This: Changes the name of the draft (I made something up, but you are free to suggest a different name) Adds venue information to the draft (working group, area, stream, github, etc...) Updates some of the boilerplate stuff (README, CONTRIBUTING, .gitignore, Makefile) Adds workflows for CI with GitHub Actions Cleans up trailing whitespace (with ) so that the build runs cleanly Updates the Gemfile (I don't use this, but I tried to make it functional for those that do) From here, you should be able to create a first submission of a draft-ietf-sframe-enc-00 by pushing a tag of draft-ietf-sframe-enc-00 (or making a new release with that name).\nSpec changes look ok since they are editorial/ NAME is this ready to go?\nYou should be able to merge this and use it straight away, if that is what you want to do (the only thing that might need attention is the proposed filename, which I just made up)."} {"_id":"q-en-suit-firmware-encryption-cb59e863845ba0fb42bfbeecde419a9490597874533709df5cd14b801e097e56","text":"moving into .\nKen, here is the reasoning for moving encryption functionality outside the SUIT manifest: When the firmware author needs to encrypt a payload for a specific recipient it needs to know its public key (for HPKE) or its symmetric key (for AES-KW). Since the firmware author does not typically know the recipients ahead of time, it cannot perform this encryption operation. 
What it can do, however, is digitally sign the manifest (and thereby the firmware image), since that only requires its private key. Later the distribution system, before delivering the firmware to the devices, will encrypt the firmware images. It cannot modify the manifest anymore since otherwise the signature previously generated would be invalidated. Hence, any information regarding firmware encryption has to go into the envelope. I hope this makes sense. For this reason, I believe the current text in the document is correct.\nI've misunderstood your intention. Are these correct? Author has the raw firmware image and creates by signing the with its signing key. Distribution System receives the raw firmware image and its from the Author, then encrypts the firmware. It appends to (outside the ). Device verifies the and decrypts the encrypted firmware image using . My concern is that the would be untrusted because it is located outside the signature. How about this one? The order of the suit-protection-wrappers array corresponds to the suit-components array in suit-common it can be if no decryption is required The Device can verify whether the decrypted firmware image is correct using the digest hash of the raw firmware image the image digest and the image size are trusted because they are inside the though the s are untrusted because they are outside the The Author MUST be aware that the firmware will be encrypted by the Distribution System to create an appropriate suit-install section 
Note that the recipient will get to learn that the encryption has been applied by the distribution system since the encryption info is also signed. The recipient (device) can still check the signature of the manifest to verify whether the plaintext image has been unchanged. Hence, I don't see a problem with the approach\nBtw, I agree with you that the manifest needs to carry a hash of the plaintext firmware and it also needs to contain instructions for matching the hash against a specific value. The reason why I believe that no commands for processing the encrypted payload are needed is simply because the encryption must happen before any of the other processing steps need to take place. Otherwise nothing can proceed. Is this the right approach? I don't know.\nI wonder whether we should bring this up on the SUIT mailing list since it is an important design decision.\nThe term \"untrusted\" I used was ambiguous, but I meant the part was outside the signature. Before discussing in the mailing list let me confirm this: Distribution System is trusted case A-1) as a firmware server and a device management infrastructure case A-2) as a firmware server and a device management infrastructure, and also a manifest signer The firmware is encrypted by case B-1) the Author case B-2) the Distribution System With case A-2, the Distribution System can just update the manifest and then update the signature because Devices trusts its public key. With case B-1, the Author can create encrypted firmware and its manifest, so that there is no need Distribution System to modify the manifest. They work well with current SUIT Manifest, but our case does not especially with non-integrated payload. Am I correct?\nI think you are saying that because a non-integrated payload requires a URI to be included in the manifest, which will then be integrity protected. 
Hence, the distribution system will not be able to change that URL and it will typically not have permissions to replace the content of the referenced file. Is this correct? FYI: I think it would be good to have examples of all these cases in the draft.\nHere is what Øyvind proposes in URL\nIn the firmware encryption draft we have defined an extension to the envelope that indicates what encryption parameters are applied to which component. Here is the CDDL: Currently the text provides no explanation or an example of how this linkage works.\nThis aspect is related to URL and to the mailing list discussion URL\nThis issue is also related to URL\nWith in PR , this issue is solved.\nSUITEncryptionInfo was defined in the past SUIT Manifest core document describing how to decrypt the firmware image, but in this document it may not be associated with any component. I will create a Pull Request to move SUITEncryptionInfo under the suit-manifest.\nSee also URL A new idea to leverage SUIT Dependency for Firmware Encryption has come up to my mind. Please review and make comments, especially for SUIT Manifest usage and security considerations. 
The Author creates firmware and its manifest The Distribution System encrypts the firmware and distributes it to Devices The Device fetches and decrypts the encrypted firmware, then checks and installs it It introduces a new section suit-security-wrapper holding SUITEncryptionInfo, which is appended by the Distribution System The Device decrypts the encrypted firmware with SUITEncryptionInfo and then checks it with condition-image-match Distribution System cannot modify the URI in the manifest where the encrypted firmware is stored SUITEncryptionInfo is used without verification because it is located outside the Author's signature The Author and the Distribution System create manifests respectively Distribution System's manifest fetches the encrypted firmware and decrypts it, and depends on Author's manifest Author's manifest checks the decrypted firmware with condition-image-match The Device MUST trust both Author and Distribution System keys to verify manifests The Distribution System can freely set the URI to distribute encrypted firmware in Distribution System's manifest SUITEncryptionInfo is inside the signature created by the Distribution System (Only important parts are shown in this example) (Only important parts are shown in this example) Best, Ken 
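The decrypt-then-verify flow sketched in the proposal above could look roughly like this. The XOR "cipher" is a stand-in for the real AES-KW/AEAD machinery and the dict stands in for the signed manifest; this only illustrates the ordering of operations (sign plaintext digest first, encrypt after, decrypt then run condition-image-match), not the actual SUIT encoding.

```python
import hashlib

def toy_cipher(data: bytes, key: bytes) -> bytes:
    # XOR keystream as a placeholder for the real content encryption;
    # only the order of operations matters here, not the cryptography.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Author: signs a manifest carrying the digest of the *plaintext* image.
plaintext = b"firmware image v1.2"
signed_manifest = {
    "image-digest": hashlib.sha256(plaintext).hexdigest(),
    "image-size": len(plaintext),
}

# Distribution System: encrypts after signing, so it cannot touch the
# manifest without invalidating the Author's signature.
cek = b"content-encryption-key"
delivered = toy_cipher(plaintext, cek)

# Device: decrypts using the (unsigned) encryption info, then runs the
# manifest's condition-image-match against the recovered plaintext.
recovered = toy_cipher(delivered, cek)
assert hashlib.sha256(recovered).hexdigest() == signed_manifest["image-digest"]
assert len(recovered) == signed_manifest["image-size"]
```

Because the digest check runs over the decrypted image, a wrong or tampered-with encryption info simply fails condition-image-match, which is why the proposal can tolerate the encryption info sitting outside the Author's signature.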
Encryption of the plaintext is accomplished with conventional symmetric key cryptography. The IETF datatracker status page for this Internet-Draft is: URL There is also an HTMLized version available at: URL A diff from the previous version is available at: https://author-URL Internet-Drafts are also available by rsync at: URL"} {"_id":"q-en-t2trg-rest-iot-1178825bbb1a9c9cb3c97250b3ae65772341cbbc8165a3fa70a7841584a88249","text":"Updating HTTP references to RFC9110 and RFC9112. Moving CoAP reference to normative to be consistent with other references.\n; httpbis-messaging/-semantics/-cache are in the RFC editor queue and will be published before this document. URL\nThey actually have been published.\nThanks!"} {"_id":"q-en-tls-exported-authenticator-d900e4b58ba6fc88ddad370a99ea86e29d27bc6ae00ebf22b6e18a41ad0fef9d","text":"TBH I don't really know if this is actually needed.\nWhoops that was supposed to be about it's good to merge. We need this section to tell IANA to register the exporter.\nNAME do you mind amending this to add:\nAdded section about exporter labels as well."} {"_id":"q-en-tls-exported-authenticator-5b61bc0c458a8b068518de7b56cce5e7d2d926a4dfde01f1eaec7760ae783f09","text":"In the TLS 1.3 handshake the client and server do not agree on the client's certificate bytes until after the first server application message. If this is significant to an application, then the server should send an application message before authenticating the client with an EA.\nDone.\nThis is useful information to convey to implementers and users. +1\nNAME the message that started this all is in this . It's the second issue.\nNAME To follow up from the meeting, this is only relevant in the case where the server has requested a client certificate, which is why you can't predict the client . For the Server to send an NST it must therefore have received the entire handshake. 
Changing this to require the Server to wait for the end of the handshake in non-client-certificate situations seems a touch heavy handed.\nNAME the text (as shown in the session) says that the server sends something though. But the server can do that without seeing the client certificate. I think that this needs to be something different, perhaps "the server has received and validated the client's Finished".\nI'm still trying to work out what threat we are considering here. Is it the server pretending to read the Finished and not actually doing so, or is it the server making an error and not reading the Finished? In the former case, it's not sufficient to send the NST because the client doesn't know what RMS the server has computed. In the latter case, why not just tell the server not to send a request until it's read Finished if there is a client cert?\nAt the end of a mutually authenticated handshake the client doesn't know that the server has agreed on the ClientCertificate's bytes. The threat is that the client's last message might have been corrupted in flight (intentionally or otherwise). I was worried that if we put the text "MUST verify the Finished before sending an EA." implementations might ignore it, so the goal was to make the check something externally visible. I think you're right that the current text doesn't achieve the security goal. I can put "MUST verify the Finished" if that works better.\nI think that's the only thing that works.\nNAME Just checking to see when this might be updated?\nChanges pushed.\nNAME Does the final change address your issue?\n[Clarifying] The traffic keys do not depend on the client's final flight at all. So, I don't see what the "first application message" does here. The client receiving the server's messages doesn't tell the client anything, and receipt of the client's first application data isn't really relevant here.
It's the Finished that matters.\nMaybe something about how we rely on the server to not send application data if the client Finished fails to validate (and thus for application data to not be exchanged in case of tampering), rather than talking about \"agree on the client's final flight\"? That lines up nicely with the new MUST-level requirement to not send EA if the client Finished doesn't validate. So something like \"In TLS 1.3 the agreement on the client's final flight is enforced solely by the server. The client Finished hash incorporates the rest of the transcript, which the server uses as an integrity check over the entire transcript, including the client's final flight.\" and then tweak the last sentence to include a clause about \"in order to ensure that the client's final flight has been validated, servers MUST ...\"\nNAME NAME can you please coordinate and come to a resolution that addresses the remaining comments in this thread? In particular, I think we're seeking clarity on the conditions for agreement and how that relates to the client's finished message (and not necessarily application data, though that may imply agreement, if I'm understanding the intended text here).\nWhat about 0.5RTT?\nNAME NAME can we please get an update here? Would additional editors here help the process?\nYes, the server can send 0.5RTT before the handshake completes. That's why my focus was on \"we rely on the server\", as the client doesn't always have a great way of telling for sure. The application protocol's semantics might be able to clue that a given response was 1-RTT and not 0.5-RTT, but it might not, too. I probably should have said \"close the connection\" rather than \"not send application data\", or maybe \"not continue to send application data\".\nLGTM.\nWhat do people think about putting \"before sending an EA or processing a received EA.\" instead of \"before sending an EA\"? 
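The gating proposed in the question above (no sending and no processing of exported authenticators until the client Finished has been validated) can be sketched as follows. This is an illustrative state machine, not a real TLS library API; all names are placeholders.

```python
# Sketch of the ordering requirement discussed above: the server gates all
# exported-authenticator (EA) activity on having validated the client's
# Finished message. Names here are illustrative, not a real TLS API.

class ServerEAGate:
    def __init__(self):
        self.client_finished_valid = False

    def on_client_finished(self, ok: bool):
        if not ok:
            raise ConnectionError("client Finished failed to validate")
        self.client_finished_valid = True

    def send_authenticator(self, make_ea):
        # MUST NOT send an EA before validating the client Finished.
        if not self.client_finished_valid:
            raise RuntimeError("EA sent before client Finished validated")
        return make_ea()

    def process_authenticator(self, ea, validate_ea):
        # Likewise for a received EA, even one reordered ahead of the
        # client Finished (the EA may not travel over the TLS transport).
        if not self.client_finished_valid:
            raise RuntimeError("EA received before client Finished validated")
        return validate_ea(ea)
```

The point of the symmetry is that both directions of EA activity become externally visible evidence that the server actually validated the client's final flight.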
I realised it's possible for the server to receive an EA before the client Finished as follows: Server sends its Finished message and then immediately an AR. Client receives the Finished, sends its own Finished, and then an EA in response to the AR. The client Finished and EA are reordered by the network (the EA might not even be going over the TLS transport). The server might then accept the EA and do some action, but then reject the Finished message. I think this is already incorrect by the definition of EAs, but I thought it wouldn't hurt to make it super explicit. NAME\nRequesting changes to resolve remaining comments. Hello TLSWG and IESG reviewers, This is a compendium of responses to the various reviews of the document. There are a few remaining open questions to address that I hope we can resolve in this thread. I’ve compiled the changes to the document in response to the comments in Github: URL Responses welcome! Open questions for debate Should we define an optional API to generate the certificaterequestcontext? [ns] Proposed solution: no, let the application decide Should we remove the SHOULDs in Section 7, as they have little to do with interoperability? [ns] Proposed solution: leave the text as is. Are there any security issues caused by the fact that the Exported Authenticator is based on the Exporter Secret, which does not incorporate the entire transcript: Using an exporter value as the handshake context means that any client authentication from the initial handshake is not cryptographically digested into the exporter value. In light of what we ran into with EAP-TLS1.3, do we want to revisit whether we're still happy with that? 
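For reference, the exporter interface in question is the one from RFC 8446, Section 7.5; a minimal sketch (SHA-256 assumed, where real code uses the connection's negotiated hash) shows why only the exporter master secret, and not the client's final flight, feeds the derived value:

```python
import hashlib, hmac

# Sketch of the RFC 8446 Section 7.5 exporter. Only exporter_master_secret
# (derived from the transcript through the server Finished) enters the
# derivation, which is why client authentication from the initial handshake
# is not digested into the exported value.

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    out, t, i = b"", b"", 1
    while len(out) < length:
        t = hmac.new(prk, t + info + bytes([i]), hashlib.sha256).digest()
        out += t
        i += 1
    return out[:length]

def hkdf_expand_label(secret: bytes, label: bytes,
                      context: bytes, length: int) -> bytes:
    full = b"tls13 " + label
    info = (length.to_bytes(2, "big") + bytes([len(full)]) + full
            + bytes([len(context)]) + context)
    return hkdf_expand(secret, info, length)

def tls_exporter(exporter_master_secret: bytes, label: bytes,
                 context: bytes, length: int) -> bytes:
    # Derive-Secret(Secret, label, "") followed by expansion with
    # Hash(context), per RFC 8446 Section 7.5.
    secret = hkdf_expand_label(exporter_master_secret, label,
                               hashlib.sha256(b"").digest(), 32)
    return hkdf_expand_label(secret, b"exporter",
                             hashlib.sha256(context).digest(), length)
```

A distinct label (e.g. one of the "EXPORTER-… authenticator …" labels registered by the draft) yields an independent secret per purpose, but the underlying transcript coverage is the same for all of them.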
We should in practice still benefit from the implicit confirmation that anything the server participates in after receiving the Client Finished means the server properly validated it and did not abort the handshake, but it's a bit awkward, especially since the RFC 8446 post-handshake authentication does digest the entire initial handshake transcript. Would we feel any safer if an exporter variant was available that digested the entire initial handshake transcript and we could use that? [ns] This is a very good question. I didn’t follow the EAP-TLS1.3 issue closely, but this does seem to imply an imperfect match with post-handshake authentication in the case of mutually authenticated connections. I would like to suggest that Jonathan Hoyland (cc'd), who did the formal analysis on the protocol, provide his input. A comment from Yaron Scheffer/Benjamin Kaduk on the IANA considerations: My understanding is that the intent of this document is to only allow \"servername\" to appear in the ClientCertificateRequest structure that otherwise uses the \"CR\" notation in the TLS 1.3 column of the TLS ExtensionType Values registry, but not to allow for \"servername\" in authenticator CertificateRequests or handshake CertificateRequests. Assuming that's correct, the easiest way to indicate that would be if there was a \"Comment\" column in the registry that could say \"only used in ClientCertificateRequest certificate requests and no other certificate requests\", but there is not such a column in the registry. I think it could also work to be much more clear in the IANA considerations of this document as to why the \"CR\" is being added and what restrictions there are on its use, even if that's not quite as clean of a solution. 
[ns] Proposed solution: Add text to section 8.1 to clarify why the CR is being added and what restrictions there are to its use, as is done in URL Alternate solution: Propose a new column, as done in URL Comments and responses Proposals to address IESG comments below are here: URL Martin Duke (No Objection): I come away from this not being entirely sure if this document applies to DTLS or not. There is the one reference in the intro to DTLS, but it's not in the abstract, nor anywhere else. Assuming that the Sec. 1 reference is not some sort of artifact, the document would benefit from a liberal sprinkling of 's/TLS/(D)TLS' (but this would not apply to every instance) [ns] DTLS does not explicitly define a post-handshake authentication, so not all references made sense to update, but I updated all the relevant references. If (D)TLS 1.2 is REQUIRED to implement, then does this document not update those RFCs? [ns] It doesn’t change existing versions of (D)TLS and it is not mandatory to include this functionality, so I don’t think that it updates them. It’s a separate and optional feature built on top of TLS. In any case, the text is changed to make it explicit that this is a minimum requirement. NITS: Sec 1. I think you mean Sec 4.2.6 of RFC 8446, not 4.6.3. [ns] It’s actually neither, it’s 4.6.2. Sec. 4.2.6 defines the extension that the client uses to advertise support for post-handshake authentication. Sec 4 and 5. \"use a secure with...\" ? [ns] Added “transport” Sec 4. s/messages structures/message structures [ns] Fixed Erik Kline's No Objection COMMENT: [ sections 4 & 5 ] \"SHOULD use a secure with\" -> \"SHOULD use a secure communications channel with\" or \"SHOULD use a secure transport with\" or something? [ns] Fixed in previous change \"as its as its\" -> \"as its\" [ns] Fixed [ section 5.2.3 ] I think the first sentence of the first paragraph may not be a grammatically complete sentence? 
[ns] Fixed [ section 7 ] \"following sections describes APIs\" -> \"following sections describe APIs\" [ns] Fixed Francesca Palombini's No Objection Thank you for the work on this document, and thank you to the doc shepherd for the in-depth background. Please find some comments below. Francesca TLS (or DTLS) version 1.2 [RFC5246] [RFC6347]. or later are REQUIRED to implement the mechanisms described in this document. FP: Does this mechanism applies to DTLS as well? The sentence above, the normative reference to DTLS 1.2, and the IANA section 8.2 would imply so. However, this is the only time DTLS is mentioned in the body of the document, excluding IANA considerations. I would like it to be made explicit that this mechanism can be used for DTLS (if that's the case) both in the abstract and in the introduction, add any useful considerations about using this mechanism with DTLS vs TLS, and possibly add a sentence in the introduction stating that DTLS uses what specified for TLS 1.X. Two examples of why that would be useful are the following sentences: using an exporter as described in Section 4 of [RFC5705] (for TLS 1.2) or Section 7.5 of [TLS13] (for TLS 1.3). For TLS 1.3, the \"signaturealgorithms\" and \"signaturealgorithmscert\" extension, as described in Section 4.2.3 of [TLS13] for TLS 1.3 or the \"signaturealgorithms\" extension from Sections 7.4.2 and 7.4.6 of [RFC5246] for TLS 1.2. which makes it clear what applies for TLS but not for DTLS. [ns] Fixed in previous change used to send the authenticator request SHOULD use a secure with equivalent security to TLS, such as QUIC [QUIC-TLS], as its as its FP: What are the implications of not using such a secure transport protocol? Why is it just RECOMMENDED and not MANDATED? nits: missing word \"use a secure with\" ; remove one of the duplicated \"as its\". 
(Note: this text appears again with the same typos for the authenticator in section 5) [ns] There are no known security implications of using an insecure transport here, just privacy implications. This point was debated on list (see Yaron Sheffer’s comments) and it was agreed that because there may be alternate deployment scenarios that involve transport outside the original tunnel, TLS-equivalent security is not a mandated requirement. Typos fixed in previous change. extensions (OCSP, SCT, etc.) FP: Please consider adding informative references. [ns] Added Zaheduzzaman Sarker's No Objection Thanks for the work on this document. I found it well written and I have minor comments and Nits Comment : As this document asked for a IANA registration entry with DTLS-OK, hence this mechanism is OK to be used with DTLS. I understand the heavily references to TLS 1.3 as it relay on the mechanisms described there. However, I found it odd not find any reference to DTLS1.3 (we had it on the last formal IESG telechat, it is quite ready to be referenced). Is this intentional? is it supposed to be that this mechanism defined in this document on can be used with DTLS1.2? [ns] This was a common concern, the text has been updated. Section 7.3 & 7.4: is \"active connection\" defined somewhere? it would be good if some descriptive texts are added for clarification as done for the other bullets in the same list. [ns] Wording adjusted. For the API considerations I was expecting a API to generate the certificaterequestcontext. [ns] An API would likely just return a random number, which isn’t very useful. Not defining such an API gives the application freedom to choose to use this value in some other way. I don’t have strong opinions either way. Nits: Post-handshake authentication is not defined in section 4.6.3 of TLS 1.3 [ns] Updated in prior change. Section 4 & 5: likely copy paste error -- s/as its as its/as its [ns] Updated in prior change. 
Lars Eggert's No Objection All comments below are very minor change suggestions that you may choose to incorporate in some way (or ignore), as you see fit. There is no need to let me know what you did with these suggestions. Section 1, paragraph 9, nit: TLS (or DTLS) version 1.2 [RFC5246] [RFC6347]. or later are REQUIRED - - TLS (or DTLS) version 1.2 [RFC5246][RFC6347] or later are REQUIRED [ns] Updated Éric Vyncke's No Objection Thank you for the work put into this document. Please find below some non-blocking COMMENT points (but replies would be appreciated), and some nits. I hope that this helps to improve the document, Regards, -éric == COMMENTS == I am trusting the WG and the responsible AD for ensuring that the specific section in references are the correct ones. -- Section 2 -- Unsure where the \"terminology\" part is... Also, should \"Client\" and \"Server\" be defined here ? Reading section 3, I am still unsure whether the \"client\" and \"server\" are the \"TLS client\" and \"TLS server\" ... In the same vein, a small figure with all parties involved would be welcome by the reader. [ns] Terminology updated to reference the terms in TLS 1.3. There are only two parties in this protocol, so I’m not sure a diagram will help. == NITS == In some places, 'TLS13' is used as an anchor rather than 'RFC8446'. [ns] Corrected. Sometimes, the document uses 'octet' and sometimes 'bytes'. [ns] Interesting. This is likely an oversight due to RFC 8446’s use of both terms. I’ve adjusted the text to try to be more consistent with that document (octets for on-the-wire, bytes for string values). -- Section 1 -- Spurious '.' in the last §. [ns] Fixed in previous change. -- Section 4 -- \"as its as its\" ? [ns] Fixed in previous change. 
Ben Kaduk's IESG review + Yes You can view, comment on, or merge this pull request online at: URL Commit Summary Editorial suggestions from IESG review COMMENT: I've made some (mostl) editorial suggestions in a github PR: URL Despite the overall YES ballot, I do have some substantive comments that may be worth some further consideration. Do we want to give any insight or examples into how an application might choose to use the certificaterequestcontext? [ns] I put a proposed line here. Thanks to Yaron Sheffer for the multiple secdir reviews. My understanding is that the intent of this document is to only allow \"servername\" to appear in the ClientCertificateRequest structure that otherwise uses the \"CR\" notation in the TLS 1.3 column of the TLS ExtensionType Values registry, but not to allow for \"servername\" in authenticator CertificateRequests or handshake CertificateRequests. Assuming that's correct, the easiest way to indicate that would be if there was a \"Comment\" column in the registry that could say \"only used in ClientCertificateRequest certificate requests and no other certificate requests\", but there is not such a column in the registry. I think it could also work to be much more clear in the IANA considerations of this document as to why the \"CR\" is being added and what restrictions there are on its use, even if that's not quite as clean of a solution. [ns] I agree that this is an open question. Section 1 multiple identities - Endpoints that are authoritative for multiple identities - but do not have a single certificate that includes all of the identities - can authenticate additional identities over a single connection. IIUC this is only new functionality for server endpoints, since client endpoints can use different certificates for subsequent post-handshake authentication events. [ns] Correct. TLS (or DTLS) version 1.2 [RFC5246] [RFC6347]. or later are REQUIRED to implement the mechanisms described in this document. 
Please clarify whether this is a minimum TLS version required for using EAs, or that this is a binding requirement on implementations of the named protocol versions. (If the latter is intended we may have some discussions about an Updates: tag...) [ns] I meant “minimum version of 1.2” and have changed the text to reflect this. Section 5.1 Using an exporter value as the handshake context means that any client authentication from the initial handshake is not cryptographically digested into the exporter value. In light of what we ran into with EAP-TLS1.3, do we want to revisit whether we're still happy with that? We should in practice still benefit from the implicit confirmation that anything the server participates in after receiving the Client Finished means the server properly validated it and did not abort the handshake, but it's a bit awkward, especially since the RFC 8446 post-handshake authentication does digest the entire initial handshake transcript. Would we feel any safer if an exporter variant was available that digested the entire initial handshake transcript and we could use that? [ns] Addressed in the open question section above. Section 5.2.1 Certificates chosen in the Certificate message MUST conform to the requirements of a Certificate message in the negotiated version of TLS. In particular, the certificate chain MUST be valid for the signature algorithms indicated by the peer in the \"signaturealgorithms\" and \"signaturealgorithmscert\" extension, as described in Section 4.2.3 of [TLS13] for TLS 1.3 or the \"signaturealgorithms\" extension from Sections 7.4.2 and 7.4.6 of [RFC5246] for TLS 1.2. This seems awkward. Do \"the requirements of a Certificate message in the negotiated version of TLS\" include the actual message structure? Because the rest of the document suggests that we always use the TLS 1.3 Certificate, but this line would set us up for using a TLS 1.2 Certificate. [ns] They do not include the actual message structure. 
The constraints are about the certificate_list. I’ve updated the text. Also, per §1.3 of RFC 8446, "signature_algorithms_cert" also applies to TLS 1.2, so the last sentence seems incorrect or at least incomplete. [ns] Updated Section 5.2.1 Otherwise, the Certificate message MUST contain only extensions present in the TLS handshake. Unrecognized extensions in the authenticator request MUST be ignored. At least the first sentence seems redundant with the previous section (but I did not propose removing this in my editorial PR). The second sentence arguably follows from the RFC 8446 requirements, though I don't mind mentioning it again as much as I do for the first sentence. [ns] I don’t mind being explicit here considering there was some confusion about this previously. Section 5.2.2 The authenticator transcript is the hash of the concatenated Handshake Context, authenticator request (if present), and Certificate message: Hash(Handshake Context || authenticator request || Certificate) Where Hash is the authenticator hash defined in section 4.1. If the authenticator request is not present, it is omitted from this construction, i.e., it is zero-length. This construction is injective, because the handshake messages start with a message HandshakeType. Should we say so explicitly in the document? [ns] I’m not sure I understand what injective means in this context or how it helps the reader. If the party that generates the exported authenticator does so with a different connection than the party that is validating it, then the Handshake Context will not match, resulting in a CertificateVerify message that does not validate. This includes situations in which the application data is sent via TLS-terminating proxy. Given a failed CertificateVerify validation, it may be helpful for the application to confirm that both peers share the same connection using a value derived from the connection secrets before taking a user-visible action.
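The Section 5.2.2 transcript construction quoted above can be sketched as a plain concatenation-then-hash (SHA-256 shown for illustration; the real construction uses the authenticator hash defined in section 4.1 of the draft):

```python
import hashlib

# Sketch of the authenticator transcript described above. SHA-256 is an
# assumption for illustration; real code uses the negotiated
# authenticator hash.
def authenticator_transcript(handshake_context: bytes,
                             authenticator_request: bytes,
                             certificate: bytes) -> bytes:
    # If no authenticator request was sent, it is zero-length and is
    # simply omitted from the concatenation.
    return hashlib.sha256(
        handshake_context + authenticator_request + certificate
    ).digest()
```

Because the Handshake Context is derived from the connection's exporter secret, two different connections produce different transcripts for the same Certificate, which is what makes a cross-connection CertificateVerify fail to validate.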
Is it safe to reuse the Handshake Context itself as the \"value derived from the connection secrets\"? [ns] Yes, it should be safe. Updated. Section 7.4 It returns the identity, such as the certificate chain and its extensions, and a status to indicate whether the authenticator is valid or not after applying the function for validating the certificate chain to the chain contained in the authenticator. I strongly recommend not returning the identity in the case when validation fails. [ns] Fair. Updated. The API should return a failure if the certificaterequest_context of the authenticator was used in a previously validated authenticator. Is the intent for this to be a \"distinct previously validated authenticator\" (i.e., that the API returns the same value when given the same inputs subsequently)? That does not seem to be the meaning conveyed by the current text. [ns] Yes, this is about distinct validators. Updated. Section 8.2 IANA is requested to add the following entries to the registry for Exporter Labels (defined in [RFC5705]): \"EXPORTER-server authenticator handshake context\", \"EXPORTER-client authenticator finished key\" and \"EXPORTER-server authenticator finished key\" with Don't we also need \"EXPORTER-client authenticator handshake context\"? [ns] Yes, added. Section 9 I think we should more explicitly incorporate the TLS 1.3 security considerations by reference. [ns] Yes. Section 11.1 I don't think [QUIC-TLS] or RFCs 7250 and 7540 are normative references. [ns] Updated Murray Kucherawy's No Objection Why are the SHOULDs in Section 4 only SHOULDs? Why might an implementer reasonably not do what this says? [ns] Addressed after discussion on list and in reply above. Similarly, the SHOULDs in Section 7 seem awkward, as they have little to do with interoperability. [ns] It’s true that this is about interoperability. I’m open to changing these to lower-case shoulds since they reflect implementation recommendations rather than interoperability. 
Nick"} {"_id":"q-en-tls-subcerts-712efc365d20c34e191fe35b898c44faf2d8d6b5ecb01c42511e5420c8db212c","text":"Just some nits.\nThe public key name and type have been changed in , as NAME suggested on the mailing list.\nNAME beat me to it!\noh looks like the travis build is failing for some reason\nNAME I have no idea why pushing that worked ...\nawesomesauce. it's probably some temporary failure while trying to fetch the new references or something"} {"_id":"q-en-tls-subcerts-2de55f8100710bd292f098a3b997d176ede8f1e47fd2f392db9726a098f3d89e","text":"Use RFC references\nMaybe use generic language, like \"signing oracle\" :)\nooh, naming things is so difficult :) Maybe \"remote certificate signing offload\"?\nPlease double-check. At least one of these appears to be wrong.\nwere you looking at the editor copy perhaps, does the latest draft appear wrong as well? URL i think the editor copy went out of sync\nAhh. I was seeing Richard as being out of date. I was also wondering if NAME wanted to consider affiliation adjustment also.\nNAME Should we update your affiliation to Mozilla here?\nRequest an extension point from the IANA experts and add it to the document.\nshipit. As these are all editorial changes, I don't think we need a long discussion."} {"_id":"q-en-tls-subcerts-f467b45eaaf38f201ed552066cd3c945b3c954f5b2e70b715c5f16ef56db57c5","text":"Addresses .\nRefactor intro to: Use case Problem description Solution from . Hi Sean, Thanks for the follow-up. I refreshed my memory as much as I could. Please see my responses in line. Yours, Daniel Sean Turner wrote: I think I would rather take the structure: 1 - Use case 2 - Problem description 3 - Solution I am fine with whatever fits best to you. Server operators often deploy TLS termination services in locations such as remote data centers or Content Delivery Networks (CDNs) where it may be difficult to detect key compromises. Short-lived certificates may be used to limit the exposure of keys in these cases. 
However, short-lived certificates need to be renewed more frequently than long-lived certificates. If an external CA is unable to issue a certificate in time to replace a deployed certificate, the server would no longer be able to present a valid certificate to clients. With short-lived certificates, there is a smaller window of time to renew a certificate and therefore a higher risk that an outage at a CA will negatively affect the uptime of the service. Typically, a TLS server uses a certificate provided by some entity other than the operator of the server (a "Certification Authority" or CA) [RFC8446] [RFC5280]. This organizational separation makes the TLS server operator dependent on the CA for some aspects of its operations, for example: Whenever the server operator wants to deploy a new certificate, it has to interact with the CA. The server operator can only use TLS signature schemes for which the CA will issue credentials. To reduce the dependency on external CAs, this document proposes a limited delegation mechanism that allows a TLS peer to issue its own credentials within the scope of a certificate issued by an external CA. These credentials only enable the recipient of the delegation to speak for names that the CA has authorized. Furthermore, this mechanism allows the server to use modern signature algorithms such as Ed25519 [RFC8032] even if their CA does not support them. We will refer to the certificate issued by the CA as a "certificate", or "delegation certificate", and the one issued by the operator as a "delegated credential" or "DC". I think my concern has been clarified by Ilari. It was resolved by your b). I am fine with both texts. works for me. The text does not address my concern, and I will recap my perspective of the discussion. My initial comment was that the reference [KEYLESS] points to a solution that does not work with TLS 1.3 and currently the only effort compatible with TLS 1.3 is draft-mglt-lurk-tls13.
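The delegation described above bounds a credential by both its own short lifetime and the parent certificate's validity; a sketch of that check, assuming the 7-day maximum validity period used in the subcerts draft (all names here are illustrative, not the draft's wire format):

```python
from datetime import datetime, timedelta

# Sketch of the delegated-credential (DC) validity bound described above.
# The 7-day cap is the maximum validity period from draft-ietf-tls-subcerts;
# the function and parameter names are placeholders, not the draft's API.
MAX_VALIDITY = timedelta(days=7)

def dc_expiry_ok(dc_not_after: datetime, cert_not_after: datetime,
                 now: datetime) -> bool:
    if dc_not_after > now + MAX_VALIDITY:
        return False            # longer than the maximum validity period
    if dc_not_after > cert_not_after:
        return False            # extends beyond the certificate's validity
    return now < dc_not_after   # not already expired
```

The same check captures the operator's side of the bargain: a compromised DC key is only useful until the earlier of the two expiry times.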
I expect this reference to be mentioned and I do not see it in the proposed text. If the intention with [KEYLESS] is to mention TLS Handshake Proxying techniques, then I argued that other mechanisms be mentioned, among others draft-mglt-lurk-tls12 and draft-mglt-lurk-tls13, which address different versions of TLS. This was my second concern and I do not see any of these references being mentioned. The way I understand your response is that the paper pointed to by [KEYLESS] is not Cloudflare's commercial product but instead a generic TLS Handshake proxying mechanism, and that you propose to add a reference to an academic paper that discusses the LURK extension for TLS 1.2. I appreciate that other solutions - and in this case LURK - are being considered. I am wondering however why draft-mglt-lurk-tls12 is being replaced by an academic reference [REF2] and why draft-mglt-lurk-tls13 has been omitted. It seems that we are trying to avoid any reference to the work that is happening at the IETF. Is there anything I am missing? My suggestion for the text OLD: (e.g., a PKCS interface or a remote signing mechanism [KEYLESS]) NEW: (e.g., a PKCS interface or a remote signing mechanism such as [draft-mglt-lurk-tls13] or [draft-mglt-lurk-tls12] ([LURK-TLS12]) or [KEYLESS]) with [KEYLESS] Sullivan, N. and D. Stebila, "An Analysis of TLS Handshake Proxying", IEEE TrustCom/BigDataSE/ISPA 2015, 2015. [LURK-TLS12] Boureanu, I., Migault, D., Preda, S., Alamedine, H.A., Mishra, S., Fieau, F., and M. Mannan, "LURK: Server-Controlled TLS Delegation", IEEE TrustCom 2020, URL, 2020. [draft-mglt-lurk-tls12] IETF draft [draft-mglt-lurk-tls13] IETF draft It is unclear to me what is generic about the paper pointed to by [KEYLESS] or [REF1]. In any case, for clarification, the paper clearly refers to Cloudflare commercial products as opposed to a generic mechanism, and this will remain whatever label is used as a reference.
Typically, Cloudflare co-authors the paper, and "CloudFlare" appears 12 times in the 8-page paper. The contribution section mentions: """ Our work focuses specifically on the TLS handshake proxying system implemented by CloudFlare in their Keyless SSL product [6], [7] """ The methodology section says: """ We tested TLS key proxying using CloudFlare's implementation, which was implemented with the following three parts: """ """ The different scenarios were set up through CloudFlare's control panel. In the direct handshake scenario, the site is set up with no reverse proxy. In the scenario where the key is held by the edge server, the same certificate that was used on the origin is uploaded to CloudFlare and the reverse proxy is enabled for the site. """ Cloudflare appears in at least these references: URL CloudFlare Inc., "CloudFlare Keyless SSL," Sep. 2014, URL N. Sullivan, "Keyless SSL: The nitty gritty technical details," Sep. 2014, URL works for me. works for me seems clearer. I suspect earlier clarifications addressed this. thanks, that is way clearer. Daniel Migault Ericsson"} {"_id":"q-en-tls-subcerts-3918dba5054895d90af43da83276a0a4502a94c4abb9c4edc8bb2f3244e7108d","text":"Based on The delegation certificate MAY have the DelegationUsage extension marked critical. This also adds a "strict" flag to the extension that, if set, requires the server to offer a DC. An alternative that was discussed is to add an optional TLS-feature extension (per RFC 7633) that would provide a "must-use-DC" feature, analogous to "must-staple-OCSP". However, it's possible for a certificate with such an extension to be used in a handshake without DCs if the client doesn't indicate support for DCs."} {"_id":"q-en-tls-subcerts-312ead9ab8d6d08f0114cb0399887a44c7a7c41f731bda2efaca62185a1d9d37","text":"Added text to address issue\nThere is a potential issue for credential renewal.
On the credential owner's side there is a risk that, if the owner depends entirely upon the delegated credential lifetime to trigger renewal, it could miss the fact that the underlying certificate has already expired, which will cause clients to fail validation. This can be addressed by adding the following to section 3: "Peers MUST NOT issue credentials with a validity period longer than the maximum validity period or extend beyond the validity period of the X.509 certificate" Or there could be an addition to the operational section that talks about renewal. "5.2 Credential Renewal Credentials must be renewed before they expire and before the issuing X.509 certificate expires. "\naddressed by pull request"} {"_id":"q-en-tls13-spec-446ca989481017b27d7e25d52180098de90364725f816f99b010dd4798e4d36f","text":"Even when (EC)DHE is in use, the "supported_groups" extension is only mandatory in the ClientHello; it can be omitted from the EncryptedExtensions, as per section 4.2.6. Given that, it is not MTI for the server sending to the client, but the client must be prepared to accept the extension in EncryptedExtensions (at least to the point of not crashing or aborting the handshake), so leave the MTI text unchanged. It would be too awkward to try to express this subtlety in such a concise list."} {"_id":"q-en-tls13-spec-7f319c42fe953d6f9eb1086843c0347b79d4f765c07a26ff1ab43e1eb4f7abb5","text":"This change rephrases the requirements and recommendations from prescribing specific mechanisms of 0-RTT replay protection to prescribing the guarantees expected from implementations, thus letting implementations choose the way they can provide those guarantees.
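A local strike register, one concrete way of providing the 0-RTT replay-protection guarantee described above, can be sketched as follows (a deliberately minimal illustration: a real implementation would also bound the set by the tickets' validity window and handle distributed front-ends):

```python
import hashlib

# Minimal sketch of a local strike register: each 0-RTT ClientHello (or
# its binder) is accepted at most once. Real deployments must also expire
# entries with the ticket window and coordinate across servers; this
# sketch shows only the accept-once guarantee.

class StrikeRegister:
    def __init__(self):
        self._seen = set()

    def accept_0rtt(self, client_hello: bytes) -> bool:
        key = hashlib.sha256(client_hello).digest()
        if key in self._seen:
            return False   # replay: fall back to a full 1-RTT handshake
        self._seen.add(key)
        return True
```

Rejecting a duplicate does not abort the connection; the early data is simply discarded and the handshake proceeds as 1-RTT.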
This change also introduces a MUST-level requirement for protection equivalent to having at least local strike register.\nI've addressed all editorial nits suggested (except for the comma one, which I leave to editor's discretion).\nThere are some nits noted inline (by me and others), but those are subject to editorial discretion, so I will mark my overall approval."} {"_id":"q-en-tls13-spec-fdeadd259e5028e267078ecc88a5f1a40b2b9fae7375dc27cb1c2053dfb1d484","text":"the RFC 7250 semantics that both certificate types are globally negotiated. There is no support for mixed certificates."} {"_id":"q-en-tls13-spec-3948f5bf1568a6c897c47458f23a0dfd22006828c151b2d3a007651d68b5ce2f","text":"the type is called signaturealgorithmscert (singular) everywhere, not signaturealgorithmscerts (plural)"} {"_id":"q-en-tls13-spec-da6633a418139f781e29a1862312f3ab8626d8aea7e73781e3fbb105b74369f2","text":"Given that it's common to select the strongest cipher by default (thus likely using SHA-384 PRF) and that the default Hash associated with a PSK is SHA-256, simple upgrade of server and client to TLS 1.3 may cause handshake failures if only PSK key exchanges were configured and the existing implementor's note is followed."} {"_id":"q-en-tls13-spec-fa2c7056d1e3a7b471120eb7520f045b1657c46bf725c93c4a760724ed4d1c56","text":"This clarifies two items relevant to stateless HelloRetryRequests: 1) If a HelloRetryRequest does not contain a keyshare extension, the negotiated group does not have to match the non-existent selected group in the HRR. 2) If a HelloRetryRequest does select a group, the server can still choose to use pskke key exchange. When sending a stateless HelloRetryRequest, the server may not know if it will accept a PSK yet. 
This allows the server to still select a group in the HelloRetryRequest which it would need for a full handshake, but later decide to accept the PSK with psk_ke mode.\nSeems reasonable to me."} {"_id":"q-en-tls13-spec-d8806608ec25dc6448f20f06db21302159bf54353748cd4d2781ee03d5d1b8e5","text":"As naïve handling of external identities may cause the server to be vulnerable to identity enumeration attacks, refer to the section that discusses that danger in the PSK section itself."} {"_id":"q-en-tls13-spec-0fb215f42f8efa8e3cbbf98027a51eea0ecfefb8e0f7bb4ba4d45d28b17f9b71","text":"closenotify has always used a warning alert level, but this is not actually written down anywhere. It seems to have gotten lost as early as RFC 4346 (TLS 1.1!). In RFC 2246, there is some text that mentions the correct level is warning as an aside in describing something else. I wasn't able to find any other text that discussed this. Then, RFC 4346 dropped the session termination behavior: But in doing so, it dropped any mention of which alert level to use. That text has carried over to RFC 8446 as: In RFC 8446, we said alert levels no longer matter and can be \"safely ignored\", but this still leaves unspecified what the sender should do. Skimming implementations, both BoringSSL and NSS will treat \"fatal\" closenotify as an error, so using \"warning\" is also necessary for interop."} {"_id":"q-en-tls13-spec-b2174edbaca39bc5aa7d33fcc6c344fb1656d98318b425590186c7cc61c55b8e","text":"Over the last few years I have personally experienced several occasions where people in SDOs as well as developers believe that TLS magically gives them authentication.
This has resulted in security standards that just write that \"TLS is used for mutual authentication\" without any more details, as well as implementations that do not provide any identity verification at all and instead just do a full path validation, when they should have checked the server identity, that the right trust anchor was used, and that the right intermediate CA was used. I think this is a common problem. Reading RFC8446, I think it is quite easy for people without expertise in security protocols to get the understanding that TLS gives you authentication. Suggestions: I think RFC8446 should be clearer that what is provided by the TLS protocol is transport of authentication credentials and proof-of-possession of the private authentication key. I think the requirements on the application using TLS should be clearer. Maybe: \"Application protocols using TLS MUST specify how to initiate TLS handshaking, how to do path validation, and how to do identity verification.\" I think it would be very good for readers if draft-ietf-uta-rfc6125bis was given as an informative reference."} {"_id":"q-en-tls13-spec-d2fa86787c1ff1dac1f69cd04e926980ad142a3ddbfbbe76f5ce5f689bffd1bb","text":"Merging PR 100 added to the changelog at draft-04, but we're up to draft-06 now."} {"_id":"q-en-tls13-spec-5dc113692ac2601afda3ee6d8c6490c8aa50bc0d7f414ad060d19ea48c540295","text":"Not sure if the deprecation cutoff will be at 224 bits (~112 bit security level) or 255 bits (~128 bit security level) yet, so I've just done the former for now. This drops the old curves under 224 bits, updates the extension example bytes, and deprecates MD5.\nAs noted on-list, SHA1 can't be as easily dealt with here, so that'll be a topic for a separate changeset.\nHere's a PR for this: Question: Are we dropping all curves <224 bits or <255 bits? (was consensus ever called on this specific number?)\nThere was not a specific WG consensus call on this point.
I hope we don't need one, and that we can keep the curves that are used and trim the ones that aren't.\nGood. In that case, I'll slash everything other than secp256r1 (prime256v1 / NIST P-256), secp384r1 (NIST P-384), secp521r1 (NIST P-521), and sect571r1 (NIST B-571). (plus, of course, new CFRG curves yet to be added) PR updated. URL URL\nBased on the fact that apparently precisely zero servers actually use SHA-224 for signatures, can we drop that too while we're at it? (not an exaggeration; the scan cited in the last comment shows none)\nI would drop it. spt On Jun 17, 2015, at 20:09, Dave Garrett EMAIL wrote:\nOK. PR updated.\nNAME This can be closed now that PR has been merged."} {"_id":"q-en-tls13-spec-d6d12196a428903da664c896d7472a420d56ecffba6994a277cd4a4917338b4d","text":"NAME Quick PR for fix noted for PR plus a couple other small nits. Still on draft 08. Fix a typo. (\"permittion\" is not a word) Consistently use spaces around the \"+\" in \"2^14 + 256\". (some did; some didn't)"} {"_id":"q-en-tls13-spec-40a39bbb49bbaf3b76a5962a6044b088b632e3eef5d2157ed4c936a923e31e72","text":"I think that the decision was to make this mandatory, since present-but-empty is harder to screw up.\nSection 6.2 Figure 1 and others show messages as non-optional. Section 6.3.3 however states that: It seems that the message could indeed be omitted if the server doesn't need to respond with any extensions, but I may be wrong here. We should fix one of the two places though as they contradict each other."} {"_id":"q-en-tls13-spec-be3ca54660a83764bb26487f08384ef17303257b98b1a159e7be6bce6bd685f2","text":"The text & struct have different names for the config ID. Use the one in the struct."} {"_id":"q-en-tls13-spec-66f4b50d220860203c0be4fc03b389db3ad8b607126cf91dcafc264044bced08","text":"You'll also need to update the last paragraph of the MTI Extensions section (currently section 8.2).
Now that we're going to be down to only one client-only extension defined in this document, that whole paragraph can probably be cut in favor of a one-off note in the remaining extension's section (signature_algorithms will be the last).\nSo that the clients can learn about what groups a server supports. The idea here is to allow a client to offer P256 but then learn that a server supports 25519 for next time without the server having to reject."} {"_id":"q-en-tls13-spec-18a29b6796ca64895f77ceb03d190c22f40303d4f462c9027ed71e9cd7732376","text":"Some small tweaks and fixes for issue : Move HKDF-Extract citing to a sentence and bring HKDF-Extract with it. Change the Messages arg to HashValue, as the actual hashing is stated in the definition of Derive-Secret which is passing this function a hash, not messages. (fixes issue ) More concise defining of HkdfLabel. Use the word \"zeroes\" instead of \"0s\". Edited to add: Also, appended a commit to fix an extra trailing paren.\nThe other possibility is that a hash of the concatenation of hashes was to be the new HashValue. Simplest change is just to make the arg for what's currently used in the definition and passed in.\nNAME Rebased. Is this ok to merge now?\nIt never actually says that hash_value is H(messages).\nThis should be clear now."} {"_id":"q-en-tls13-spec-64e4c49073b1d3917e575998ce1abfc91ed4ddc928f86d497f2eb9c3cb41dd59","text":"We have a registry called \"supported groups\", and the extension is \"supported groups\"; but each element is called a \"named group\" and the data passed in the extension is a \"named group list\". 
be consistent in our terminology."} {"_id":"q-en-tls13-spec-e6c6b5677a5be0bcec815708980600eb4f479b99f6c4be3a835597fa31c76fa9","text":"On Tuesday, October 11, 2016 08:14:32 pm Martin Thomson wrote: or missing_extension\nI think that missingextension would be OK, but we would hit the lower-level decodeerror first, since HRR is defined with:"} {"_id":"q-en-tls13-spec-dcc27ba01328c991a23c6d78b1c7b1b3826cde9ae3cc1b25dcdbea3d45194862","text":"Variants is not a subsection of Constants; just have them all be subsections of same depth within Presentation Language URL"} {"_id":"q-en-tls13-spec-9759903d1f7bb23b678beee9f8b153afcf98666dc1a4502ee1eea6100359b32d","text":"The handshake context here is just the transcript of the handshake messages, not the hash of the transcript. The language makes it seem like the authentication messages are Hash(Hash(handshake messages) + Certificate), etc."} {"_id":"q-en-tls13-spec-fb3a5cb93a8e6f156524b597978bfe14b3f7417062f733edf2deb642bf8172e7","text":"Move the list to the new transcript hash section. Note that EndOfEarlyData goes in the client Handshake Context.\nlgtm"} {"_id":"q-en-tls13-spec-86a1cfc8122dc7384978a0f4ffe93d67ce0221d599ea4f47020a14767d37be1c","text":"Per mailing list discussion, this allows us to have every HKDF-Expand just have one hash block of info. NAME would love your feedback.\nShouldn't \"TLS 1.3, \" be shortened to \"tls13 \"? Without that, several of the labels would seem to blow blocks. Also, \"derived secret\" seems unchanged, and is too long to fit into a block, even with \"tls13 \" prefix.\nOops. Good catch. Script fail.\nNAME fixed?\nYeah, I'll add a note. 
I still have an issue for the ChangeLog so I will add this as well\nAlso, minimum label size looks to be still 10, presumably should be 7 with 6-byte prefix (as it is 10 with a 9-byte prefix)."} {"_id":"q-en-using-github-7fed6392648a0d6fc52616c1d3c946722132288b6b9eafd88627ac7d1a6a367c","text":"I was tempted to remove the entire section on administrative policies, though left it as is. NAME what do you think?\nYeah, I figured it best to start small. Let me know if you need anything else before merging!\nNAME Interesting that the circle CI build ran on the merge, but not the PR. I didn't see any build failures reported before it merged, but I just got the email reporting a missed reference. Maybe there are some tweaks you need to apply over there as owner of the org.\nYeah, I'll fix it. Thanks for letting me know!\nI was thinking of being a little more aggressive about removing some of the text, with matching PRs for the other work, but this works."} {"_id":"q-en-version-negotiation-8e13cc1a46cf15f87cd09d8669cfd672cc2b51cfb2031372a3e80a8ea2b64f71","text":"Can an ALPN label be used for multiple QUIC versions? I think that the text should say something like: NAME adds (and I wholly agree):\nI'm merging into this one where NAME said \"The point is that your ALPN offer when you offer QUICvX shouldn't be dependent on if you did VN or not.\"\nLGTM"} {"_id":"q-en-version-negotiation-efb32c2eadc700e176f076819ada0e772f4fb07a3c6a7d3ecd64fd5b3eba3687","text":"I am not sure that I agree with this as some state carries across from the first connection attempt to the second (and hopefully successfully completed) connection. (Also, I've implemented this within the same connection object.
I know that other implementations differ here, so what I'm looking for is a little finesse in how this is framed.)"} {"_id":"q-en-version-negotiation-ef8166e63839cd09eeae85fc8b391d01d274afbd25c1b8bb157ab561871cc5a0","text":"URL in this example, it would be very stupid of the client to select version C and start another negotiation after the server has indicated D is what it prefers. The server can again select D as preferred and there exists a potential loop. However, it is possible. What are the rules we have in the specification that stop this kind of loop?\nI'm not sure that I understand this question. A client makes a choice about what version to attempt (and offer) when it creates a connection. It might learn that its versions are incompatible with a server (with a Version Negotiation packet) and try again, but that retry would be a separate action. But if the connection is successfully established, the negotiation is done.\nThere is at most one round of incompatible VN followed by one connection that may or may not use compatible VN to switch versions again. There is no potential for a loop here. NAME can you clarify what you meant?\nThe scenario I was thinking of is as follows - Chosen = A, Other Versions = (A, B) -----------------> <----------------- Chosen = G, Other Versions = (G,H) ........ the draft only says the client's first flight contain This contains the list of versions that the client knows its first flight is compatible with (A,B). It does not force the client to list ALL the compatible versions, so it might not list all the compatible versions. I think, there is also no requirement on the server to list ALL the compatible versions. Now the server sends back another list of compatible versions (D,C). This new list triggers the client to realise it knows about versions that are compatible (E,F), and sends it back to the server as it has no interest in D or C. That triggers the server to reply with a new list of compatible versions (G,H).. and this goes on.
It is also possible that the client made a mistake by not listing all the compatible versions (well, it does not have to) and, as it is not interested in D or C, it wants to list (E,F) and keep on negotiating. I would like to understand whether, with the current specification, this kind of interaction is not possible and the negotiation should not go on like that.\nThe scenarios you describe aren't allowed: if the server performs compatible version negotiation it MUST select a that was included in the client's :\nYes, but servers are allowed to do incompatible version negotiation. In my 1st scenario, the server initiates incompatible version negotiation, the server selects D instead of C as chosen by the client. The expectation here is that D will be the final negotiated version, and the version negotiation will end. But is there anything stopping the client from trying to enforce its chosen one (C) again? Shall we have some to avoid this case? In the 2nd scenario, it is not possible if A, B, C,D, E,F all are compatible. If the server starts incompatible version negotiation with (C,D) and the client does not find any supported version, it aborts the connection. I just realized - there is no normative text for aborting the connection attempt.. As the server expresses all the versions it supports in the supported versions list, the client knows there is no point in continuing with another attempt with (E,F). Unless I am mistaken there is nothing now prohibiting the client from doing so. It can continue to do so.\nThat doesn't make sense. The server initiates incompatible version negotiation by sending a VN packet, there's no notion of chosen version in the VN packet. There's no need for normative text when there's nothing else a client can do, are you saying the client could do something wrong as opposed to aborting due to lack of versions? The client will ignore any VN packet apart from the first one:\nRight, my bad. I think I have drawn the picture wrong.
The server sends (D,C) and the client chooses C, but the server chooses D. Does this make sense? And does the main issue go away? Snipped from my previous comment. >The expectation here is that D will be the final negotiated version, and the version negotiation will end. But is there anything stopping the client from trying to enforce its chosen one (C) again? Shall we have some to avoid this case? Yes. And should we try to prevent it? OK, I see the client ignores, but what happens then? Does it abort the connection attempt? Is it possible for the client to start another connection attempt with completely new original versions and a new set of other versions?\nZahed and I discussed this first scenario offline. In this scenario, the client indicated that it was happy with D by sending it in its Other Versions, so the client will be happy when D is negotiated even if it preferred C. If the client didn't like D, it could have removed D from Other Versions and then the connection would have negotiated C. Fair enough, this can be fixed with a small tweak - see below. Zahed and I discussed this second scenario offline as well. I wrote up to make the incompatible VN text more normative and that addresses Zahed's concern here. Now we have normative text to guarantee a failure in that scenario. If the client wants to start a separate connection attempt later, it will restart the algorithm from the top. We explicitly decided as a WG to avoid telling the client to remember information from previous connection attempts because VN packets don't carry an expiration date so caching them is fraught.\nLGTM"} {"_id":"q-en-version-negotiation-b2f5214e8941bdc237382d9784438eaacd2cfc4ba841837d72af65f9c5dc818e","text":"This was raised during the Reviewer: Qin Wu Review result: Has Issues I have reviewed this document as part of the Operational directorate's ongoing effort to review all IETF documents being processed by the IESG.
These comments were written with the intent of improving the operational aspects of the IETF drafts. Comments that are not addressed in last call may be included in AD reviews during the IESG review. Document editors and WG chairs should treat these comments just like any other last call comments. This document describes a version negotiation mechanism that allows a client and server to select a mutually supported version. The mechanism can be further broken down into incompatible Version Negotiation and compatible Version Negotiation. I believe this document is on the right track, but worth further polishing with the following editorial comments. Major Issues: No Minor Issues: 1. Section 4 introduces the version downgrade term [Qin] What is a version downgrade? I feel confused, does it mean when I currently use new version, I should not fall back to the old version? Can you explain how version downgrade is different from version incompatible? It will be great to give a definition of version downgrade in the first place or provide an example to explain it? 2. Section 9 said: \" The requirement that versions not be assumed compatible mitigates the possibility of cross-protocol attacks, but more analysis is still needed here.\" [Qin] It looks this paragraph is incomplete, what further analysis should we provide to make this paragraph complete? 3. Section 10 [Qin]: I am not clear why not request permanent allocation of a codepoint in the 0-63 range directly in this document. In my understanding, provisional codepoint can be requested without any dependent document? e.g., Each vendor can request IANA for Vendor specific range. Secondly, if replacing provisional codepoint with permanent allocation, requires make bis for this document, I don't think it is reasonable. Nits: 1. Section 2 said: \" For instance, if the client initiates a connection with version A and the server starts incompatible version negotiation and the client then initiates a new connection with ....
\" [Qin] Can the client start incompatible version negotiation? if not, I think we should call this out, e.g., using RFC2119 language. 2. Section 2, last paragraph [Qin] This paragraph is a little bit vague to me, how do we define not fully support? e.g., the server can interpret version A, B,C, but the server only fully implements version A,B, regarding version C, the server can read this version, but can not use this version, in other words, it is partially implemented, is my understanding correct? 3.Section 2.1 the new term \"offered version\" [Qin] Can you add one definition of offered versions in section 1.2, terminology section? To be honest, I am still not clear how offered version is different from negotiated version? Also I suggest adding definitions of accepted version, deployed version as well in section 1.2? Too many terminologies, hard to track them in the first place. 4. Section 6 said: \" it is possible for some application layer protocols to not be able to run over some of the offered compatible versions. \" [Qin]I believe compatible versions is not pertaining to any application layer protocol, if yes, s/compatible versions/compatible QUIC versions 7.1 said: \"For example, that could be accomplished by having the server send a Retry packet in the original version first thereby validating the client's IP address before\" [Qin] Is Version first Version 1? If the answer is yes, please be consistent and keep using either of them instead of interchanging them. s/version first/version 1"} {"_id":"q-en-version-negotiation-8d5fc5cb4c37d6f27d718c5cdb6a88a0e803bca9af1a4c9650ce32e84d2a2d52","text":"Bijective should be symmetric. We're describing a relation not a function.
This is a very minor comment, will send a PR hopefully soon.\nIndeed, my math is pretty rusty :) Totally agree with you, and thanks for offering to write the PR!\nNAME any luck with the PR?\nI completely forgot about this!\nURL opened\nClosing now that has been merged."} {"_id":"q-en-version-negotiation-33a1b353eba25e5d55b1cc725fde52a50b9a50c1faffa62a8568326850a76b30","text":"The draft says explicitly that receiving Version Negotiation terminates a connection attempt. This isn't consistent with that. This probably needs to say something like this instead:\nThat sounds good to me, can you make this a PR?"} {"_id":"q-en-webpush-protocol-41bb731bd11482c4d272e87319747e521edd79021844cf6f53b49c3874974c9b","text":"NAME suggested this. It's a simplification that should reduce errors. It prevents topics like \"Hello world!\", but that's probably a good thing on balance."} {"_id":"q-en-webpush-protocol-810f3358733a5d7d0740a45c08feb4dd256b9717025160b2f1d54c788efa9219","text":"This draft attempts to address all of the issues identified during with the exception of Dealing with overload situations where further guidance has been requested from early implementers of webpush. Hi, This review is an early, i.e. pre IETF Last Call TSV-ART review. The TSV-ART reviews documents that have potential transport implications or transport related subjects. Please treat it as an IETF Last call comment if you don't want to handle it during the AD review. Document: draft-ietf-webpush-protocol-09 Title: Generic Event Delivery Using HTTP Push Reviewer: M. Westerlund Review Date: Sept 27, 2016 Below are a number of comments based on my review. The one with a transport-related subject is the overload handling in comment 4. Section 3: So the Security Considerations do require use of HTTP over TLS. However, I do wonder if there would be a point to move that requirement up into the relevant connection sections, like 3.
Especially as when one reads the confidentiality requirement in Section 4 for the message one wonders a bit why it is not explicitly required in section 3. Section 5.2: TTL = 1*DIGIT Shouldn't the upper range for allowed values be specified here? At least to ensure that one doesn't get interoperability issues. Section 7.2: To limit the number of stored push messages, the push service MAY either expire messages prior to their advertised Time-To-Live or reduce their advertised Time-To-Live. Do I understand this correctly, that these options are push-service-side actions that are not notified at that point to the application server? Instead it will have to note that the message was early expired if it subscribes to delivery receipts? Dealing with overload situations Reviewing Section 7.1 and other parts I find the discussion of how overload situations of various kinds are dealt with missing some cases. So the general handling of subscriptions is covered in Section 7.1 and with a mitigation of redirecting to another server to handle the new subscription. What I lack discussion of is, first, how any form of push back is handled when the push service to UA delivery link is overloaded. Is the assumption here that as the push service can store messages the delivery will catch up eventually, or the message expires? How does one handle 0-RTT messages when one has a queue of messages to deliver, despite having a UA to Push service connection? The second case is how the push service server can push back on accepting new messages when it is overloaded. To me it appears that the load on a push service can be very dynamic. Thus overload can really occur in periods. Thus, what push back to application servers is intended here? Just, do a 503 response on the request from the application server? I do note that RFC 7231 does not appear to have any guidance on how one sets the Retry-After header to spread the load of the retries. This is a known issue.
And in this context with automated or external-event-triggered message creation the push back mechanism needs to be robust and reduce the issues, not make them worse. I would recommend that some recommendations are actually included here for handling overload from application server message submissions. Life of subscription in relation to transports I find myself a bit perplexed as to whether there is any relation between the subscription and an existing transport connection and outstanding request, i.e. the availability to receive messages over push. I would guess that there is no strict need for terminating the subscription even if there is no receiver for an extended time. However, from an application perspective there could exist reasons why a subscription would not be terminated as long as the UA is connected, while any longer duration of no connections could motivate the subscription termination. I personally would like a bit more clarification on this, but seeing how these issues are usually handled around HTTP, I would guess the answer, it will be implementation specific? Still there appear to be assumptions around this, why it wouldn't matter that much, but that is only given that one follows these assumptions. Unclarity about which requests require HTTP/2. The specification is explicit that some requests can be performed over HTTP/1.x. However, it is not particularly clear which requests require HTTP/2. I assume all the GET requests that will hang to enable PUSH promises need to be HTTP/2? Maybe this should be clarified?
Cheers, Magnus Westerlund, Ericsson Research"} {"_id":"q-en-webpush-protocol-9bf1e1716274ed23bee76afdf13d829f3c44608897629856a37b644faf290806","text":"closed closed renamed draft-thomson-webpush-URL to draft-ietf-webpush-URL updated the HTTP/2 reference updated the alt-svc reference\nFrom Darshak Thakore feedback: Add Push-Receipt header to Section 5 example Reference in Section 5.1\nFrom Darshak Thakore review: Section 5 - In the example, is the post operation to the right URL? Is that just an editorial oversight or is that the intended URL? Current example: POST /p/LBhhw0OohO-Wl4Oi971UGsB7sdQGUibx HTTP/1.1 Probably should be: POST /p/JzLQ3raZJfFBR0aqvOMsLrt54w4rJUsV HTTP/1.1 The current example is incorrect. It should match the urn:ietf:params:push link relation returned in the Subscription response - /p/JzLQ3raZJfFBR0aqvOMsLrt54w4rJUsV - and distributed to the application server."} {"_id":"q-en-webrtc-http-ingest-protocol-1cf6cad8cd77ed2ba14cf27cd0770fcc766aa693ab86bed2c202c39090e78456","text":"The text says: \"The media codecs used in older protocols do not always match those being used in WebRTC, mandating transcoding on the ingest node,\" I don't understand this sentence. If I use an \"older protocol\", e.g., RTMP, why do I need to care about WebRTC codecs?\nThat refers to using RTMP for ingest and WebRTC for viewing.\nrewritten the editorial in\nThe text says: \"These protocols are much older than webrtc and lack by default some important security and resilience features provided by webrtc with minimal delay.\" I assume you refer to RTMP, which is mentioned in the previous Section. Doesn't RTMPS solve some of the security issues? But in general, if you want to put up WHIP against RTMP, I think you need a little more text.
Also, as WHIP is supposed to be an \"easy to implement\" protocol, I think it would be fair to also list some of the functions NOT supported compared to e.g., RTMP. As I have mentioned before, there is no command to start the ingestion - it is assumed to have begun when the connection has been established. Also, there is no PAUSE/RESUME.\nIt is more referring to RTSP which lacks DTLS support. No command to start the ingestion - it is assumed to have begun when the connection has been established. Also, there is no PAUSE/RESUME. The RTMP protocol was intended as an RPC mechanism between the flash player and server. It allowed the creation of custom apps both on client and server side, which supported a wide range of methods using AMF. However, the current \"RTMP protocol\" implemented for ingesting is a reverse-engineered version of the implementation done by the FMS and it lacks support for PAUSE/RESUME and the publication is started from the start of the RTMP connection (same as WHIP).\nrewritten the editorial in"} {"_id":"q-en-webrtc-http-ingest-protocol-e23bd94e323460c90cffe5aaf00f5d857e011651bca3d2e55b3b8f5dcf2e2a75","text":"NAME ping?\nSection 4.1 “The WHIP client MAY perform trickle ICE or ICE restarts ] by sending an HTTP PATCH request…” Why is there a reference to RFC8863, which specifies ICE PAC, here? In general a reference to ICE PAC is useful, because this draft allows an offer without any candidates. So maybe that reference should go at the end of: “The initial offer by the WHIP client MAY be sent after the full ICE gathering is complete with the full list of ICE candidates, or it MAY only contain local candidates (or even an empty list of candidates).”\nI think it was a typo.\nI created a PR to fix the trickle ICE reference and add the PAC one as suggested.