{"_id":"q-en-7710bis-18089d5a8413692b2c416e1df3d2d3bf04184bdc0ffbfde974fbc7b585d7b9dc","text":"…ptive-portals/vIDCvkyKXdvMGjwvOHuTsSgFzt8/\nLGTM, for this change to address 1/2 of the referenced comments."} {"_id":"q-en-ace-dtls-profile-5345ea0e50c8f08b4314e4cc5284486c9d6eebdf710baff56b2417b93377af64","text":"This change addresses the issue raised by NAME among the authors that it is worth mentioning This is already addressed in the framework document but to avoid implementation errors it seems worth mentioning here as well.\nMerged as two other authors have agreed."} {"_id":"q-en-ace-oauth-5d9f01555b374c47906823be5c7883295e9ffed9b618c0904e1e88417de150c8","text":"I removed the part about the DoS via AS Discovery hints since it was unrealistic.\nI changed the text. Text added. Text added. Here is the text from Section 7.2 of draft-ietf-ace-dtls-authorize: URL All editorial comments addressed.\nPR addressing comments can be found here: URL\nMerged"} {"_id":"q-en-ack-frequency-482db15bdba88c51025764422f5a6d6c5ab7f4c8af24a8418cdd2a3536a25540","text":"I don't fully understand when would someone want to set ignore-order = true, i.e., what is a valid use case for this? Lets say for whatever reason the sender sets this to True, this would if there is actual packet loss and receiver receives OOO packets, it won't send an immediate ACK. Meaning the sender won't enter loss recovery immediately and will continue sending packets at the same rate. This is not good, as it directly increases queuing on the network node and for a tail drop queue (most widespread deployment), it will also affect other flows. Meaning, other flows will experience packet loss due to non-responsiveness of this flow for certain period of time. I probably missed past conversations about ignore-order field, so my apologies if this has already been discussed. I read the draft and didn't see any explanation about where would this be useful and what are the fallbacks of setting this to true.\nOne use case that I know and care is for sending QUIC traffic from a server cluster. Some packets would be sent directly from a server node that governs the connections, but others would be sent via different nodes within the same cluster. Once we deploy a sender like this, the receiver would observe what looks like reordering (because most packets will be essentially delivered through an additional hop, and that would cause delays). But we do not want the receiver to send ACK for each packet being received. Regarding the problem, I'm not sure I agree with the impact. Typically, congestion controllers react to congestion after 1 RTT. When packet-number-based detection is turned off, that changes to 9/8 RTT. That does cause more pressure on the bottleneck queue, but it is no worse than that caused by a path with an 9/8 greater RTT. Assuming that congestion controllers are designed to be fair against flows having different RTTs, I tend to believe that this is a non-issue.\nI agree with NAME though I think adding a SHOULD NOT use a delay longer than an RTT(URL) would be a good addition, since if the delay is extremely large(ie multiple RTTs), that could be bad.\nIn general, the spirit of this draft is to provide control over acking behavior to the sender, since it is the consumer of this information and ought to be able to control how frequently it needs to see this information. If there are subtle consequences to these decisions, it is worth noting them in the draft. 
It seems useful to add some text on the consequences of using Ignore Order, so that senders can make informed decisions. We can do that.\nNAME is this something that Fastly does or is planning to do? How would you share the connection secrets between different server nodes? When we set ignore-reorder = true, let's assume that max ACK delay > 1 RTT; it can be 1 RTT, or 2, or 10 RTTs. This would result in loss recovery being delayed by at least an additional RTT, i.e., it will take >16/8 RTT (>2 RTT) instead of 9/8 RTT to declare packets lost. That is significant IMO. >Assuming that congestion controllers are designed to be fair against flows having different RTTs, I tend to believe that this is a non-issue. CUBIC throughput is inversely proportional to RTT in New Reno mode. Nonetheless, I am more concerned about high queuing in the bottleneck due to late loss recovery. Specifying the use cases where ignore-order = true SHOULD be used will prevent new implementors from using it randomly. And as Jana said, some text around the consequences when it is used would allow informed decision making.\nNAME I do not speak for Fastly's plan, but the OSS branch of H2O has a working prototype that does this: URL The calculation is correct. At the same time, I do not think that senders will be willing to use a Max Ack Delay as large as that when setting the Ignore Order bit. The problem here is the impact on loss recovery. If Max Ack Delay is set to a value as large as 1 RTT, then the sender needs to spend more than 2 RTTs detecting a loss. That means that the size of the send and receive buffers has to be twice as large, and that it would take twice as long to recover from a loss. Based on existing advice that we have in other RFCs, I think we should discourage a Max Ack Delay that is greater than 9/8 RTT when the Ignore Order bit is used (please see my comment on ).\nI'd suggest 5/4, but otherwise agree with you NAME\nShould the Max ACK Delay be restricted to 1/8 or 1/4 RTT, as there is an inherent 1 RTT spent while sending data and receiving ACKs? That would make a total of 1 + 1/8 or 1 + 1/4 RTT.\nI think this is best as a sender-side SHOULD, since it knows what time threshold it's using. For example, we use 1/4 RTT, but have considered making it adaptive (i.e., 1/16 RTT to 1 RTT) based on observed reordering. Also, if your RTT is 100us, and your timers have coarser granularity, 1/8 RTT or 1/4 RTT just doesn't make sense. I'm not sure what does, because I'm not an expert in datacenter networking, but that's why I think this is a SHOULD, not a MUST. Of course, it's also unenforceable, which is a good reason for a SHOULD.\n+1 to SHOULD. The only argument I might disagree with NAME on is that it is unenforceable (IMO a receiver can enforce by capping Ack Delay to a fraction of the RTT that the receiver has observed). But I do not think there's any reason to require the receiver to enforce, considering the fact that a sender can send as many packets as it likes, regardless of the ACKs the receiver sends (or lack thereof).\nI am fine with SHOULD and with a mention of how a sender would compute Max ACK Delay when it sets ignore-order = true. An example is fine too.\nI believe this is the same issue as\nThere are two things that are impacted by setting a high value for ack delay: congestion control and time-threshold loss detection. On congestion control, we need some protection for the network, so we should recommend something.
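For concreteness, the loss-detection arithmetic running through this thread can be sketched as follows — a rough model only, using the kTimeThreshold constant from RFC 9002 (Section 6.1.2); all names are illustrative and none of this is text from the draft:

```python
# Rough model of the arithmetic above: RFC 9002 declares a packet lost once a
# later packet is acknowledged and kTimeThreshold * max(smoothed_rtt, latest_rtt)
# has elapsed; the peer may additionally hold that ACK back for up to max_ack_delay.
K_TIME_THRESHOLD = 9 / 8  # kTimeThreshold, RFC 9002 Section 6.1.2

def approx_time_to_declare_loss(smoothed_rtt, latest_rtt, max_ack_delay):
    return K_TIME_THRESHOLD * max(smoothed_rtt, latest_rtt) + max_ack_delay

rtt = 0.100  # a 100 ms path
print(approx_time_to_declare_loss(rtt, rtt, max_ack_delay=rtt))      # 17/8 RTT: > 2 RTTs
print(approx_time_to_declare_loss(rtt, rtt, max_ack_delay=rtt / 8))  # 10/8 RTT
```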
On loss detection, a sender that sets an ack delay higher than the time-threshold loss detection period risks triggering unnecessary retransmissions. So, I propose adding the following for a sender: (1) a SHOULD to set the ack frequency to at least one ACK per RTT (so that congestion control responses are reasonably quick); (2) a word of caution that setting the ack delay to be longer than the time threshold can cause the sender to retransmit unnecessarily.\nIn 9.3 (Window-based Congestion Controllers) I thought that unless there is loss reported, a QUIC ACK frame releases sending window. In a similar manner to the accurate byte counting style in TCP, a QUIC sender solely operates on the basis of bytes acked, not the number of ACK frames received. So while a delayed ACK could delay a round of growth when the ACK Ratio is larger, it is only delayed by the time to ACK the set of received packets. In looking at various QUIC CCs over medium and longer path RTTs this effect was quite small, and the default ACK delay was not unreasonable; for shorter path RTTs this might be different.\nSee PR\nTo me, ACK clocking is a separate point that applies to any type of CC when the cwnd limits the sender. I agree there could be a concern around whether a less frequent ACK policy can induce a cwnd-limited sender to send bursts of packets that could induce loss or disrupt sharing of the path with other flows. The QUIC specification already permits an initial window of 10 packets, and motivates the need for pacing. I think it is important this text refers to the QUIC transport section on pacing to mitigate bursts.\nBetter worded perhaps in PR .\nSection 9.1 - There are CC considerations with respect to how long it is reasonable for a flow to hold off detecting congestion and responding. This needs to be discussed. When an endpoint detects persistent congestion, it MUST promptly reduce the rate of transmission when it receives or detects an indication of congestion (e.g., loss or ECN marking) [RFC2914]. The Ignore Order value of true (0x01) in this ID allows a sender to extend that period, postponing detection of loss. That might be reasonable, but excessive delay can be dangerous - and therefore the impact really needs to be discussed: delaying by a fraction of an RTT is in my mind safe; intentionally delaying by an RTT is arguable. A delay of many RTTs endangers other flows, and we need to at least say that in some way ([RFC8084] took an alternate view of how long might be safe).\nSee PR\nI think this warrants a note about how a sender SHOULD NOT use a delay that is larger than an RTT unless the sender has other information about either the network or current network conditions. Otherwise, the receiver does not need to enforce anything here.\nNAME Delay of loss detection has a negative impact on loss recovery as well. Considering that the time threshold defined in RFC 9002 is 9/8 RTT, I might argue that, when Ignore Order is set to true, the maximum delay being advertised SHOULD be no greater than 9/8 RTT. These values provide responsiveness comparable to RACK when a packet is lost - the loss would be detected within 5/4 RTT. (struck out, as this point is already covered by Section 8)."} {"_id":"q-en-ack-frequency-2d928aa6d61774c22f29b0ab1c12d4cf2139337fe894d8a6e525a7e77f4700e0","text":"Defer to RFC9000.\nThe text says: \"For performance reasons, an endpoint can receive incoming packets from the underlying platform in a batch of multiple packets.\" Is this the network layer...
or something else?\nIt could be the OS or the hardware.\nI think the sentence needs some work - I can believe the network interface, operating system, etc., are examples of reasons why there is bunching of packets, so that a batch of packets arrives at the receiver. The current text doesn't really say quite what I'd expect."} {"_id":"q-en-ack-frequency-705b65c091296cf9ac80add81ed62da6beb6c1e03d16f9d7cfc9e4af8f97a868","text":"I think the motivations section as-is is too long. It spends a lot of time talking about what are effectively the tradeoffs. IMO the motivations section should specify why it's useful to have the sender control the receiver's ACK frequency, and then we can have a separate section for the tradeoffs. This takes a stab at that section, lifting things from motivations and adding some new text specifically addressing the nascent stages of a connection. I also made some editorial changes to the existing text. I am happy to add any additional considerations we think might need to be called out. This attempts to\nThanks for the PR, NAME ! I like the new text you propose (I've proposed some changes), but I think you might be assuming that ACK_FREQUENCY is only used for delaying acks further. An important point that bears bringing out in the motivation is that a sender might want more frequent acks as well, for example during startup. That's worth capturing in the modified motivation section.\nNAME this PR is almost done, any chance you can do another round of updates and we can get it in?\nNAME sorry, I'll do some updates today.\nNAME NAME I've updated the PR, I'll watch and iterate on any remaining feedback quickly this time.\nthanks NAME Something about vim and the macbook pro keyboard apparently makes me incapable of spelling things properly. I want my desk keyboard back\nAs discussed in the base draft, we have some confidence in a particular heuristic (default tolerance of 2 for the first 100 packets, followed by a tolerance of 10) which we know to work fairly well in multiple deployments serving \"typical\" Internet resources to typical users. Do we want to make that recommendation explicit, or otherwise give guidance in this extension?\nI think it's a good idea to describe the values for which we have experience, while encouraging experimentation. We don't need to recommend the strategy, we just need to document it as an example strategy.\nI think a section on considerations and maybe past experience would be very helpful. I think it's going to be a bit of a challenge to write, but I'm happy to take a stab if no one else wants to.\nWhat I'd want to see is just a simple explanation of what a sender could do... I would not describe results from experience, since those are anecdotal and are going to change over time. This could be a fairly short section.\nNAME NAME I can take a stab at writing something and we can go from there.\nThanks, NAME !\nI do think the pitfalls of e.g. should be conveyed in such \"cautionary\" text as well. I will submit a PR this week, time permitting between interop.\nThanks, NAME"} {"_id":"q-en-ack-frequency-654a9fbe49099e8b245a27785590be2d36a9134c46a360b6cb0956dd7ae0805b","text":"Also updates the transport parameter codepoint to 0xff03de1a from 0xff02de1a.\nWhile reading the latest ack-frequency draft, I am confused by the wording that was added in URL If I have ack-eliciting threshold = 1, to me it means that I can receive 1 ack-eliciting packet before sending an ACK, i.e., I have to send an ACK after receiving 1 packet.
But looking at the example of 0, this interpretation looks wrong. NAME suggested that maybe the \"before\" needs to be \"without\" and an additional sentence added: \"An immediate acknowledgement is sent when more than this number of packets have been received.\" I agree with his suggestion.\nThanks for the issues, this is a bit unclear. PTAL at\nThe definition that I use internally is \"the number of packets you can receive without sending an immediate acknowledgment\". This is different from saying \"please acknowledge every N packets\". The main benefit of this definition is that there are no invalid values for the number. I recommend making this adjustment in the frame definition: it will be more efficient (by a tiny amount), but it also removes the need to validate the value.\nThis seems like a good change, it's just a matter of writing the text."} {"_id":"q-en-ack-frequency-7d1a3fef32b58666b2b17de48630d2bdf8e413d0b6ba783d206862ce28aceca3","text":"by adding a one-byte IMMEDIATE_ACK frame.\neditorial nit: my preference would be for frame diagrams to use the notation in QUIC transport. I see that ACK_FREQUENCY is defined using ASCII art, so perhaps you want to consider a separate follow-up PR to switch styles after this lands.\nThanks Lucas, I filed to update them.\nThey currently use ASCII art. Will write this up once is merged.\nsgtm, thanks!\nWhen sending a PTO packet, a sender wants to induce an immediate ACK. The only trick I know of to induce such an ACK is to intentionally skip a packet number. However, the problem with that approach is that it relies on the receiver not ignoring reordering. When a previously sent ACK_FREQUENCY frame has set the bit, it becomes impossible for a sender to induce an immediate ACK when sending a PTO packet. At the moment, the only solution is to bundle an ACK_FREQUENCY frame in each PTO packet, but that seems like a bit of unnecessary overhead to me. Should we have a dedicated frame for inducing immediate ACKs instead?\nThat's an option. Another option is to revisit the max_reordering idea discussed in . If the max reordering was 5, then you could skip 5 PNs. I do feel like it'd be nice to allow specifying an int for max reordering, since that matches up with adaptive packet threshold loss detection, but as I expressed in , I think getting the details right is deceptively difficult, so I'm torn.\nThis does seem like a real problem. I like the idea of a single \"ELICIT_ACK\" type frame. With one byte a sender could effectively mark its PTOs, ensuring they get an ACK without relying on the trick. An explicit frame is also easier to implement for the sender than the packet number skipping.\nThere's quite a bit of discussion on this on the tcpm mailing list at the moment, e.g., URL I'm increasingly thinking it's worth using an unused protected bit in the header for this. It'd end up competing with the 2-bit packet loss measurements, but given those are experimental and may never be widely deployed, I think it might be worth it. For example, I believe this could allow an implementation to not include max_ack_delay in the PTO if it set the \"ACK-pull\" (or whatever it's called) bit on the final packet. That already seems possible if more than the threshold number of ack-eliciting packets are outstanding.\nI've thought more about this, and I think most of the time a 1-byte frame would be acceptable, but it definitely adds some implementation complexity, so I'd be curious which design others prefer. If we can decide, then I'll be happy to write a PR (or NAME can).
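As a concrete reference for both threads above — the "packets received without an immediate ACK" reading of the ack-eliciting threshold, and a dedicated frame that forces an ACK — a receiver's decision logic might look roughly like this (a sketch with illustrative names only, not text from the draft):

```python
# Sketch of receiver-side logic. threshold == N means N ack-eliciting packets
# may arrive *without* triggering an immediate ACK; the (N+1)th forces one, so
# threshold == 0 acks every ack-eliciting packet and no value of N is invalid.
class AckDecider:
    def __init__(self, threshold: int):
        self.threshold = threshold
        self.pending_ack_eliciting = 0

    def on_packet(self, ack_eliciting: bool, has_immediate_ack_frame: bool) -> bool:
        """Return True if an immediate ACK should be sent for this packet."""
        if ack_eliciting:
            self.pending_ack_eliciting += 1
        if has_immediate_ack_frame or self.pending_ack_eliciting > self.threshold:
            self.pending_ack_eliciting = 0
            return True
        return False  # otherwise leave it to the delayed-ACK timer
```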
Besides PTO, a use case would be making the last packet before becoming app-limited elicit an immediate ACK. This could allow skipping max_ack_delay for all PTOs (see ), though always doing it could be a bit much in some cases. For example, browsers commonly send a series of requests, but to the transport it seems app-limited between them, because it doesn't know the browser has another request to send. It might be useful for 'chirping' as well, but I've never tried that, so it's mostly speculation.\nWe don't currently have any intention of deploying them either, and I think that's unlikely to change. I haven't seen a lot of good evidence that they are terribly helpful, and content providers don't seem to have a good incentive for implementing it. For us at least, the difference in complexity for a frame versus a bit in the header is negligible. A frame is actually slightly easier to implement, and losing a byte here and there isn't so bad.\nI strongly prefer using the header bit. Although it will be a bit more work to implement, the fact that I won't end up with a 1-byte STREAM frame that I have to split off at the end of the packet will be worth the effort.\nIs the testimony of network operators not taken as evidence? I remember a line of people from different companies -- BT, ATT, and so on -- at IETF 101 practically begging for some information in QUIC packets to be exposed for monitoring and troubleshooting.\nI do not see anyone proposing a 1-byte STREAM frame. I believe what is meant is a new frame type. If anything, using a new frame type to elicit an ACK is easier than using a header bit, as it involves fewer parts of the code to change.\nNobody is proposing a 1-byte STREAM frame (you can't just make up stream data anyway). The problem is that a new frame would consume bytes (or probably one byte) in the packet payload. Given that you set this signal on a PTO, and PTOs are often retransmissions of packets, this would require you to repackage (and, most likely, cut off a single byte from a STREAM frame). Using a header bit would solve that problem.\nNAME and I discussed this and there are a number of ways to fix this, but given that this issue of not being able to elicit an immediate ACK by skipping packet numbers is introduced by the draft, we feel it's important it be fixed in some way. One more option is new STREAM frame codepoints that elicit an immediate ACK.\nI'm ready to write a PR, but I'd appreciate thoughts on whether the existing PING frame should be repurposed or we should add a new frame (ACK, PONG, etc.). I lean towards a new frame, since there are enough code points and I believe there are use cases for the existing PING frame that does not elicit an immediate ACK.\n+1. While I would not be opposed to repurposing the existing PING frame, I do not see the necessity of doing that. Let's simply consume one code point of the remaining 33.\nThanks, now for bikeshedding time. Some possible names: ACKME, ACKPULL (from TCP), FAST_ACK, PONG.\nPutting on my bikeshedding hat: I don't think we should prefix the frame name with ACK since that almost makes it look like yet another variant of the ACK frame. The \"pull\" TCP terminology is fine (e.g.
PULLACK) but we could also be more explicit with something like SOLICIT_ACK.\nSCTP has something similar and calls it SACK-IMMEDIATELY (RFC 7053).\nThanks, I'd be happy with ACK_IMMEDIATELY or IMMEDIATE_ACK\nFor those two options I'd prefer IMMEDIATE_ACK for the same reasoning as above -- so it appears less like an ACK variant.\nIMMEDIATE_ACK sounds good to me. Alternatively, I'd also be fine with something like ACK_NOW.\nIMMEDIATE_ACK it is, PR coming."} {"_id":"q-en-ack-frequency-05cec385e5b75d6ef4e0d46fe169b19e0392baa8bc2bba3e9e5d405589509290","text":"You aren't using URL, so remove it. I updated the links so that the text rendering doesn't include them twice."} {"_id":"q-en-ack-frequency-eb778e13102852058b779cc406a3abbc5aef553309c1bd9179533fec48361b4d","text":"NAME tell me if this is sufficient, or if you wanted more references to RFC9000.\nThanks, I'm open to other suggestions if you have them, but nothing else stuck out at me. I'm reluctant to repeat the full definition of out of order in this document, given it's well defined in RFC9000.\nWhen reviewing the draft I got caught on this part: As specified in Section 13.2.1 of [QUIC-TRANSPORT], endpoints are expected to send an acknowledgement immediately on receiving a reordered ack-eliciting packet. This extension modifies this behavior. There is an issue here in how this draft uses \"reordering\". Section 13.2.1 of RFC 9000 does say that ACKs should be sent immediately if: \"In order to assist loss detection at the sender, an endpoint SHOULD generate and send an ACK frame without delay when it receives an ack-eliciting packet either: when the received packet has a packet number less than another ack-eliciting packet that has been received, or when the packet has a packet number larger than the highest-numbered ack-eliciting packet that has been received and there are missing packets between that packet and this packet.\" Although the first bullet clearly is reordering, the second could be caused by three different mechanisms: reordering of packets, loss of packets, or an intentional gap left by the sender (as discussed in ). Thus, I think this section needs to be clear about whether it actually means both of the above bullets for when to send an ACK, or only one of them.\nThanks Magnus, the intent was to follow the RFC9000 definition, so I'll write a PR to clarify that was the intent.\nA couple of minor edits, but lgtm\nOk, I think this is truly the bare minimum to resolve my issue. There exists a clearer chain back towards Section 13.2.1 in RFC9000, and I guess that is sufficient."} {"_id":"q-en-acme-a76774ec5cc1f0890f23630c6570e7123c765922dfc54ddffaa99336df19da20","text":"There has been no implementer interest in the proof-of-possession challenge, and it makes the document more complex by having a completely different structure than the other challenges. If it is decided later that such a challenge is necessary, it can be added in a follow-on extension specification.\nGenerally this seems fine, and I wholeheartedly agree with the purpose of the removal (for now). One problem I can't open a comment for: Line 2099, the last paragraph before \"Preventing Authorization Hijacking\", still refers to the PoP Challenge. Without the PoP Challenge wording, a new reader may take this to be referring to the implicit Proof of Possession of the ACME Account Key and thus be confused.
Since it's non-normative, perhaps you could simply modify that paragraph into something more like:"} {"_id":"q-en-acme-b6656bcdb106d5e264ebc8085cd75ddf064b6bbd6f265f9bbe8c8426a96c5f95","text":"NAME does this look better?\nLGTM, though I'd change the attacker from a \"he\" to a \"they\" on general principles.\nNot a huge fan of the singular \"they\", but can't be bothered to rewrite around it :)\nYes, this is clearer. Thanks. Saying \"will cause the server to query an arbitrary URI\" is good. Technically you can even specify a non-HTTP protocol in an HTTP redirect, and this language should cover that too. (Go's net/http used by Boulder only does HTTP and HTTPS I think, but things like libcurl support a slightly insane number of protocols.) I'm not sure if the usages of \"URI\" vs \"URL\" in this section are correct; you may want to make them consistent. I never know which one to actually use. This looks good to me."} {"_id":"q-en-acme-f0e944758afd6239cf36636e5054d798fcbbda7fa3a6029234c45f5e649d02af","text":"We aren't using thumbprints for key-change anymore, so this paragraph doesn't make sense."} {"_id":"q-en-acme-2442469ce4b098457b260297c0ec033feab7bc9b55cef909366602964d88393d","text":"This was part of URL (see removal of oldKey from example payload), but I missed these two other spots. oldKey is fully specified by the account URL in the payload."} {"_id":"q-en-acme-8454e93f92a2d4c27bb82e408a5005e232747aff78119088c9be6dba284d2ed1","text":"This is a variant of that uses a MAC key to perform the account binding instead of a bearer token. Thanks to NAME for pointing out the need for this on the ACME mailing list.\nQuestion: will you require a CA that doesn't use external account bindings to reject a request if it contains an external-account-binding member? nit: s/field/member/g, consistent with RFC 7159.\nStartCom plans to use the ACME protocol for StartEncrypt. We need to identify the client's validation level, so the subscriber administration can generate a special token in the account URL and send this token to the email address used in the registration. At registration, the user needs to enter the email and this token with the certificate to let the CA system know this customer's validation level. After the CA system receives the email, the token, and the signing certificate, the CA system knows what type of certificate we can issue to this client; if this client account is class 4 validated, then the client can get an EV SSL certificate, not DV SSL. Please add this as a parameter to the ACME protocol, thanks.\nThis request should be sent to the ACME mailing list (EMAIL) for further discussion. It would also be very useful to know what your EV (or OV) validation flow would look like using ACME, as most likely the current certificate application object would need to be changed to identify those requirements to the server.\nThis was fixed by\nMinor editorial comments."} {"_id":"q-en-acme-8db756a6603b4402ceac8cc34760d130d5506ebf2a6f2f7ff418a3636d5779d4","text":"Simplify the TLS SNI challenge by removing the iterations concept, which I believe is not necessary in practice and complicates the protocol needlessly.\nImplementing TLS SNI is currently really complicated, yes.\nFYI, URL implements the protocol according to the contents of this PR - i.e., without iterations.\n(so does current Boulder)\nI think this will be OK to land once lands. We should consider whether we want to bump the version number in the challenge. Principle would dictate that we should, since this breaks compatibility with what was published in -01.
But since I'm not aware of any implementation of \"tls-sni-01\" as specified in -01 (vs. the off-spec implementation in Boulder), I'm inclined to just leave it.\nFWIW on the client side of things, xenolf/lego did not implement the iterations for the tls-sni-01 challenge.\nURL implements iterations but takes the absence of an \"n\" parameter to be equivalent to an \"n\" parameter of 1, which is compatible.\nGreat, I'm taking this as sufficient evidence that a version bump is not needed :)"} {"_id":"q-en-acme-04e027f76281c5469b3b37d503e64cb5558ad1d086f57e169a28ac8694f9c16d","text":"This PR fixes a typo reported by NAME in the \"Fields in Order Objects\" section. Previously it referred to the listed fields as relating to the \"new-account request\". This PR changes the text to make it clear they relate to a \"new-order request\". Resolves URL\nMerging based on manual review; CircleCI seems to be busted at the moment.\nThe section on the \"\" registry says that \"Fields marked as \"client configurable\" may be included in a new-account request\". This seems like a copy-paste-o, and perhaps should say that such fields may be included in a new-order request.\nGood catch. I opened URL to fix. Thanks!"} {"_id":"q-en-acme-58881b3299e3c24dc7dbb5eefbd7b7a649c84091b7413b5ea404664d78bb2bd6","text":"Public key pinning isn't implemented in most HTTPS libraries outside of browsers, so this is a considerable burden on implementers. Public key pinning carries a fairly high risk of footgunning. The consequence of a failed pin for a CA that serves many ACME clients would be that some of those clients would fail to renew their certs, causing cascading breakage. There is relatively little confidential information conveyed in ACME, and there are other defenses built into ACME (like including the account key as part of the challenge data), so HPKP is not strongly necessary."} {"_id":"q-en-acme-79e298d6ffca6589214af0bd1246719b0b75624a8dfeb1483f2d56867acb4d7e","text":"Instead of just when not using DNSSEC, since you don't know if a domain is secured by DNSSEC until the query is finished. Also rename the section 'DNS Security' since it also talks about DNSSEC. Also combines the 'DNS Security' and 'Use of DNSSEC resolvers' sections."} {"_id":"q-en-acme-051e3b280b3e3947eb6773938267306261abb99365cd72607252f4d952ebd271","text":"Addresses Martin Stiemerling's TSV-ART review\nMerging based on NAME review\nOn the mailing list, Rifaat Shekh-Yusuf pointed out these issues: (1) Section 7.3.5, first paragraph, second line: a \"bind\" word is missing between the words \"to\" and \"an\". (2) Section 7.4.1, second paragraph, second sentence: \"case\" should be \"cases\". (3) When the server builds the authorization object, the document is stating that the response would include \"challenges\" and \"combinations\". Remove the \"combinations\" as it is not being used. (4) Section 7.5.1, the Request URIs in the examples: should not this be /acme/authz/1234/0?\nIn \"Responding to challenges,\" we use the term \"response fields.\" These fields are defined by each challenge type, but are not described as \"response fields\" there. We should add a heading describing them as such in the challenge type definitions.\nThe other challenges don't expect you to post a payload containing a type field, so it's inconsistent for OOB-01 to do so. It should just take an empty object \"{}\" as its payload.\nThis seems fine. It seems to be protected from replay by the and parameters.
I think we're OK leaving the challenge name at \"oob-01\", since this isn't a breaking change."} {"_id":"q-en-acme-1325dd2b1aefaec24ac83f8a4c781ed21de3111a86e363ccff69b34779a52e42","text":"As discussed at the F2F meeting at IETF94, MAC-based account recovery is not adding a lot of value, and it adds significant complexity to the spec. (And nobody has implemented it.) This PR removes MAC-based recovery and the crypto that it depends on.\nSeeing no complaints on the mailing list, merging."} {"_id":"q-en-acme-b6ac1936e8081ee2b67d8bd49cce2465375557b667240763704afff3d241f21d","text":"NAME This definitely needs to be reviewed.\nThis looks good to me.\nI like the idea of a special compound error type to indicate that the subproblems field is used.\nIn s7.6, it seems like we should indicate what error is returned: If the revocation fails, the server returns an error. The draft does this nicely in many other places.\nI mentioned this earlier on the mailing list, but it did not receive any feedback: as an ACME client developer, I would like to be able to distinguish between errors which do not require user interaction (that would be \"certificate is already revoked, so can't revoke it again\"), and ones where the user must be warned that the operation did not succeed (\"certificate could not be revoked\", \"bad credentials\", etc.). This is currently only possible with server-specific ugly hacks (for example, returns this information, but other servers might return it differently or this might change in later versions of Boulder as it is not specified).\nThis is a good idea, sorry for missing it earlier. NAME - Do you have any thoughts on other cases besides \"already revoked\" that would (a) not require user interaction and (b) not be covered by other current error codes? NAME - Would you mind making a PR to add an error code (and anything else that NAME comes up with), and note it in the revocation section?\nI thought of one to consider in :\nIt looks like Boulder uses 404 and a generic malformed problem type for this case: URL URL\nNAME So far I couldn't think of anything else. I'm totally happy with :-)\nFixed by\n:+1: Thanks!"} {"_id":"q-en-acme-4035c793ae2283d0753719ef87d6e9ef74e10581d3a78e6117e145dfff4fdd22","text":"URL URL\nThanks NAME looks fine, just adding clarifications.\nBenjamin Kaduk has entered the following ballot position for draft-ietf-acme-acme-14: Discuss.\nWhen responding, please keep the subject line intact and reply to all email addresses included in the To and CC lines. (Feel free to cut this introductory paragraph, however.) Please refer to URL for more information about IESG DISCUSS and COMMENT positions. The document, along with other ballot positions, can be found here: URL\nDISCUSS:\nThis is a great thing to have, and I intend to eventually ballot Yes, but I do have some questions that may require further discussion before this document is approved. It looks like the server returns an unauthenticated \"badSignatureAlgorithm\" error when the client sends a JWS using an unsupported signature algorithm (Section 6.2). What prevents an active attacker from performing a downgrade attack on the signature algorithm used? Similarly, since we include in the threat model a potentially hostile CDN/MitM between the ACME client and ACME server, can that attacker strip a success response and replace it with a badNonce error, causing the client to retry (and thus duplicate the request processing on the server)?
I am not an ART AD, but there is not yet an internationalization directorate, and seeing statements like \"inputs for digest computations MUST be encoded using the UTF-8 character set\" (Section 5) without additional discussion of normalization and/or what the canonical form for the digest input is makes me nervous. Has sufficient internationalization review been performed to ensure that there are no latent issues in this space? Section 6.1 has text discussing TLS 1.3's 0-RTT mode. If this text is intended to be a profile that defines/allows the use of TLS 1.3 0-RTT data for the ACME protocol, I think you need to be more specific and say something like \"MAY allow clients to send early data (0-RTT); there are no ACME-specific restrictions on which types of requests are permitted in 0-RTT\", since the runtime configuration is just 0-RTT yes/no, and the protocol spec is in charge of saying which PDUs are allowed or not. Section 6.2 notes that servers MUST NOT respond to GET requests for sensitive resources. Why are account resources the only sensitive ones? Are authorizations not sensitive? Or are those considered to fall under the umbrella of \"account resources\" (Section 7.1 seems pretty clear that they do not)? Section 7.1.1 discusses how the server can include a caaIdentities element in its directory metadata; does this (or anything else) need to be integrity protected by anything stronger than the Web PKI cert authenticating the HTTPS connection? It seems that a bogus caaIdentities value could lead to an unfortunate DoS in some cases. I am also a bit uncertain if the document is internally consistent about whether one challenge verification suffices or there can be cases when multiple challenge verifications are required for a successful authorization. I attempted to note relevant snippets of the text in my comments on Section 7.1.4. I also have some important substantive comments in the section-by-section COMMENTS, since they would not in and of themselves block publication.\nCOMMENT:\nThis document was quite easy to read -- thank you for the clear prose and document structure! It did leave me with some questions as to whether there are missing clarifications, though, so there are a pile of notes in the section-by-section comments below. It seems natural to feel some unease when the concept of automated certificate issuance like this comes up. As far as I can tell, though, the only substantive difference between this flow and the flow it's replacing is that this one qualitatively feels like it weakens the \"know your customer\" aspect for the CA -- with current/legacy methods registering for an account can be slow and involves real-world information. Such can be spoofed/forged, of course, but ACME seems to be weakening some aspect by automating it. Given the spoofability, though, this weakening does not seem to be a particular concern with the document. I was going to suggest mentioning the potential for future work in doing time-delayed or periodic revalidation or other schemes to look at the stability of the way that identifiers/challenges were validated, but I see that discussion has already happened. It's probably worth going over the examples and checking whether nonce values are repeated in ways that are inconsistent with expected usage.
For example, I see these three values appearing multiple times (but I did not cross-check if a nonce returned in the Replay-Nonce response header was then used in a JWS header attribute): 2x K60BWPrMQG9SDxBDSxtSw, 3x IXVHDyxIRGcTE0VSblhPzw, 4x JHb54aTKTXBWQOzGYkt9A. Perhaps the examples could be offset by a description of what they are? (They would probably also benefit from a disclaimer that the whitespace in the JSON is for ease of reading only.) Section-by-section comments follow.\nAbstract (also Introduction)\nIf I read \"authentication of domain names\" with no context, I would be more likely to think of the sort of challenge/authorization process that this document describes, than I would be to think of using an X.509 certificate to authenticate myself as being the owner of a given domain name. But it's unclear whether there's an alternative phrasing that would be better.\nSection 1\nDifferent types of certificates reflect different kinds of CA verification of information about the certificate subject. \"Domain Validation\" (DV) certificates are by far the most common type. The only validation the CA is required to perform in the DV issuance process is to verify that the requester has effective control of the domain. Can we get an (informative) ref for the \"required\"/requirements?\nSection 6.1\nW3C.CR-cors-2013-0129 shows up as \"outdated\" when I follow the link.\nSection 6.2\nIMPORTANT: The JSON Web Signature and Encryption Algorithms registry does not appear to include an explicit indicator of whether an algorithm is MAC-based; do we need to include text on how to make such a determination?\nSection 6.3\nFor servers following the \"SHOULD ... string equality check\" and for requests where the equality check fails, does that fall into the \"MUST reject the request as unauthorized\" bucket?\nSection 6.4\nIn order to protect ACME resources from any possible replay attacks, ACME requests have a mandatory anti-replay mechanism. We don't seem to actually define what an \"ACME request\" is, that I can see. From context, this requirement only applies to JWS POST bodies, and not to, say, newNonce, but I wonder if some clarification would be useful. IMPORTANT: How tightly are these nonces scoped? Are they only good on a specific TLS connection? Bound to an account key pair? Globally valid? (This is not a DISCUSS point because AFAICT there is no loss in security if the nonce space is global to the server.)\nSection 6.6\nIMPORTANT: Providing an accountDoesNotExist error type probably means we need to give guidance that the server should choose account URLs in a non-guessable way, to avoid account enumeration attacks. [...] Servers MUST NOT use the ACME URN namespace Section 9.6 for errors other than the standard types. \"standard\" as determined by inclusion in this document, or in the IANA registry?\nSection 7.1\nThe \"up\" link relation for going from certificate to chain seems to only be needed for alternate content types that can only represent a single certificate. (Also, the \"alternate\" link relation is used to provide alternate certification chains.) Could this text be made more clear? Presumably this is just my confusion, but what does \"GET order certificate\" mean?\nSection 7.1.1\n[...] It is a JSON object, whose field names are drawn from the following table and whose values are the corresponding URLs. Er, from the IANA registry, right? The following metadata items are defined, all of which are OPTIONAL: Maybe also refer to the registry?
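As an aside for readers following the Section 7.1.1 comments: the directory object under discussion has roughly the following shape (field names per the draft; the URLs and values here are illustrative only, shown as a Python literal):

```python
# Illustrative ACME directory object (shape per the draft; values are examples).
directory = {
    "newNonce": "https://example.com/acme/new-nonce",
    "newAccount": "https://example.com/acme/new-account",
    "newOrder": "https://example.com/acme/new-order",
    "revokeCert": "https://example.com/acme/revoke-cert",
    "keyChange": "https://example.com/acme/key-change",
    "meta": {
        "termsOfService": "https://example.com/acme/terms",
        "website": "https://www.example.com/",
        "caaIdentities": ["example.com"],
        "externalAccountRequired": False,
    },
}
```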
Section 7.1.2\nIMPORTANT: I'm unclear if the \"contact\" is supposed to be a \"talk to a human\" thing or not. If there are supposed to be different URLs that are used for different purposes, wouldn't more metadata be needed about them? So it seems most likely that this is indeed \"talk to a human\", in which case that might be worth mentioning. (Are these always going to be mailto:?)\nSection 7.1.2.1\nIMPORTANT: Am I reading this correctly that the GET to the orders URL does not require the client to be authenticated, in effect relying on security-through-obscurity (of the account URL) for indicating which account is trying to order certificates for which identifiers?\nSection 7.1.4\nchallenges (required, array of objects): For pending authorizations, the challenges that the client can fulfill in order to prove possession of the identifier. For final authorizations (in the \"valid\" or \"invalid\" state), the challenges that were used. Each array entry is an object with parameters required to validate the challenge. A client should attempt to fulfill one of these challenges, and a server should consider any one of the challenges sufficient to make the authorization valid. This leaves me slightly confused. A final authorization can have multiple challenges. But a client should only attempt to fulfill one, and a server should treat any one as sufficient. So I can only get to the case with multiple challenges present in a final order if both \"should\"s are violated? Is there a way for the server to express that multiple challenges are going to be required? Hmm, but Section 7.1.6's flow chart for Authorization objects says that a single challenge's transition to valid also makes the authorization transition to valid, which would seem to close the window? Section 7.5.1 has inline text that implicitly assumes that only one challenge will be completed/validated. wildcard (optional, boolean): For authorizations created as a result of a newOrder request containing a DNS identifier with a value that contained a wildcard prefix this field MUST be present, and true. Is there a difference between false and absent?\nSection 7.3\nA client creates a new account with the server by sending a POST request to the server's new-account URL. The body of the request is a stub account object optionally containing the \"contact\" and \"termsOfServiceAgreed\" fields. Given that we go on to describe those two and also the optional onlyReturnExisting and externalAccountBinding fields, does this list need expanding? IMPORTANT: How does the client know if a termsOfService in the directory is actually required or just optional? (There doesn't seem to be a dedicated error type for this?) The text as-is seems to only say that if the server requires it, the field must be present in the directory, but not the other way around. I guess Section 7.3.4 describes the procedure for a similar case; should the same thing happen for the original terms acceptance? IMPORTANT: The example response uses what appears to be a sequential counter for account ID in the returned account URL, which loses any sort of security-through-obscurity protections if those were desired. Should a more opaque URL component be present, maybe a UUID? (The \"orders\" field would need to be modified accordingly, of course, and this pattern appears in later examples, as well.)\nSection 7.3.2\nIt's a little unclear to me whether the fields the client can put in the POST are the ones listed from Section 7.3 or 7.1.2, or the full set from the registry.
But presumably the server must ignore the \"status\" field, too, or at least some values should be disallowed! The IANA registry's \"configurable\" column may not quite be the right thing for this usage, especially given how account deactivation works.\nSection 7.3.5\nThe server MAY require a value for the \"externalAccountBinding\" field to be present in \"newAccount\" requests. All requests including queries for the current status and modification of existing accounts? Or just creation of new ones? To enable ACME account binding, the CA operating the ACME server needs to provide the ACME client with a MAC key and a key identifier, using some mechanism outside of ACME. This key needs to also be tied to the external account in question, right? One might even say that it is provided not to the ACME client, but to the external account holder, who is also running an ACME client. [...] The payload of this JWS is the account key being registered, in JWK form. This is presumably my fault and not the document's, but I had to read this a few times to bind it as the ACME account key, and not the external MAC key. If a CA requires that new-account requests contain an \"externalAccountBinding\" field, then it MUST provide the value \"true\" in the \"externalAccountRequired\" subfield of the \"meta\" field in the directory object. If the CA receives a new-account request without [...] nit: maybe \"If such a CA\"? IMPORTANT: I don't think I understand why \"nonce\" MUST NOT be present in the external-binding JWS object, though I think I understand why one is not needed in order to bind the MAC to the current transaction. (That is, this is in effect a \"triply nested\" construct, where a standalone MAC that certifies an ACME account (public) key as being authorized by the external-account holder to act on behalf of that external account. But this standalone MAC will only be accepted by the ACME server in the context of the outer JWS POST, that must be signed by the ACME account key, which is assumed to be kept secure by the ACME client, ensuring that both key-holding entities agree to the account linkage.) Proof of freshness of the commitment from the external account holder to authorize the ACME account key would only be needed if there was a scenario where the external account holder would revoke that association, which does not seem to be a workflow supported by this document. Any need to effectuate such a revocation seems like it would involve issuing a new MAC key for the external account (and invalidating the old one), and revoking/deactivating the ACME account, which is a somewhat heavy hammer but perhaps reasonable for such a scenario. Account key rollover just says that the nonce is NOT REQUIRED, and also uses some nicer (to me) language about \"outer JWS\" and \"inner JWS\". It might be nice to synchronize these two sections.\nSection 7.3.7\nIMPORTANT: The \"url\" in this example looks like an account URL, not an account-deactivation url. If they are one and the same, please include some body text to this effect as is done for account update in Section 7.3.2.\nSection 7.4\nIs Section 7.1.3 or the registry a better reference for the request payload's fields? Does the exact-match policy (e.g., on notBefore and notAfter) result in CA maximum lifetime policies needing to be hardcoded in client software (as opposed to being discoverable at runtime)? (I like the order url in the example, \"[...]/order/asdf\". Not much entropy though.)
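Returning briefly to the external account binding structure questioned under Section 7.3.5 above: for concreteness, a sketch of how a client might assemble that inner JWS — HS256 via the standard library, field names following the draft, all values illustrative, and note the absence of "nonce" in the protected header:

```python
# Sketch (not normative): assembling the externalAccountBinding JWS. The MAC
# key and key identifier come from the CA out of band; the payload is the ACME
# account public key in JWK form.
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def external_account_binding(mac_key: bytes, kid: str,
                             account_jwk: dict, new_account_url: str) -> dict:
    protected = b64url(json.dumps(
        {"alg": "HS256", "kid": kid, "url": new_account_url}).encode())
    payload = b64url(json.dumps(account_jwk).encode())
    mac = hmac.new(mac_key, f"{protected}.{payload}".encode(), hashlib.sha256)
    return {"protected": protected, "payload": payload,
            "signature": b64url(mac.digest())}
```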
IMPORTANT: Why does the example response include an identifier of www.example.com that was not in the request? Is the \"order's requested identifiers appear in commonName or subjectAltName\" requirement an exclusive or? After a valid request to finalize has been issued, are \"pending\" or \"ready\" still valid statuses that could be returned for that order?\nSection 7.4.1\nElsewhere when we list \"identifier (required, object)\" in a JWS payload we also inline the \"type\" and \"value\" breakdown of the object. How is \"expires\" set for this pre-authorization object? We probably need a reference for \"certificate chain as defined in TLS\".\nSection 7.5\n\"When a client receives an order from the server\" is a bit jarring without some additional context of \"in a reply to a new-order request\" or \"an order object\" or similar.\nSection 7.5.1\nTo do this, the client updates the authorization object received from the server by filling in any required information in the elements of the \"challenges\" dictionary. \"challenges\" looks like an array of objects, not directly a dictionary with elements within it.\nSection 8\nIMPORTANT: What do I do if I get a challenge object that has status \"valid\" but also includes an \"error\" field?\nSection 8.1\n[...] A key authorization is a string that expresses a domain holder's authorization for a specified key to satisfy a specified challenge, [...] I'm going to quibble with the language here and say that the keyAuthorization string as defined does not express a specific authorization for a specific challenge, since there is no signature involved, and the JWK thumbprint is separable and can be attached to some other token. (This may just be an editorial matter with no broader impact, depending on how it's used.) One could perhaps argue that the mere existence of the token constitutes an authorization for a specified key to satisfy the challenge, since the token only gets generated upon receipt of such an authorized request.\nSection 8.3\nI'm not sure that 4086 is a great cite here. For example, in RFC 8446 we say that \"TLS requires a [CSPRNG]. In most cases, the operating system provides an appropriate facility [...] Should these prove unsatisfactory, [RFC4086] provides guidance on the generation of random values.\" On the other hand, citing 4086 like this is not wrong, so use your judgment. Verify that the body of the response is well-formed key authorization. The server SHOULD ignore whitespace characters at the end of the body.\nnit: \"a well-formed\"\nCan we get some justification for the \"SHOULD follow redirects\", given the security considerations surrounding doing so?\nSection 8.4\nShould this \"token\" description include the same text about entropy as for the HTTP challenge?\nSection 9.7.1\nThere is perhaps some subtlety here, in that the \"configurable\" column applies only to the new-account request, but its description in the template does not reflect that restriction. In particular, \"status\" values from the client are accepted when posted to the account URL, e.g., for account deactivation.\nSection 10.1\nCan there be overlap between the \"validation server\" function and the \"ACME client\" function?\nSection 10.2\n[...] The key authorization reflects the account public key, is provided to the server in the validation response over the validation channel and signed afterwards by the corresponding private key in the challenge response over the ACME channel. I'm stumbling around the comma trying to parse this sentence.
(Maybe a serial comma or using \"and is signed\" would help?) IMPORTANT: Also, I don't see where the key authorization is signed in the challenge response -- the payload is just an empty object for both the HTTP and DNS challenges' responses. Some of this text sounds like we're implicitly placing requirements on all (HTTP|DNS) server operators (not just ones trying to use ACME) to mitigate the risks being described. In general this sort of behavior seems like an anti-design-pattern, though perhaps one could argue that the behaviors in question should be avoided in general, independent of ACME.\nSection 10.4\nSome server implementations include information from the validation server's response (in order to facilitate debugging). Such [...] Disambiguating \"ACME server implementations\" may help, since we talk about other HTTP requests in the previous paragraph.\nSection 11.1\nIMPORTANT: This may be an appropriate place to recommend against reuse of account keys, whether after an account gets deactivated or by cycling through keys in a sequence of key-change operations (or otherwise). I think there are some attack scenarios possible wherein (inner) JWS objects could be replayed against a different account, if such key reuse occurs.\nSection 11.3\nThe http-01, and dns-01 validation methods mandate the usage of a [...]\nnit: spurious comma. [...] Secondly, the entropy requirement prevents ACME clients from implementing a \"naive\" validation server that automatically replies to challenges without participating in the creation of the initial authorization request. IMPORTANT: I'm not sure I see how this applies to the HTTP mechanism -- couldn't you write a script to reply to .well-known/acme-challenge/ with . for a fixed key thumbprint? The validation server would need to know about the ACME account in question, but not about any individual authorization request.\nThanks for addressing my Discuss and other points; as promised, I'm switching to Yes. (But I support Adam's discuss and look forward to seeing it come to a close as well.)"}
For example, I see these three values appearing multiple times (but I did not cross-check if a nonce returned in the Replay-Nonce response header was then used in a JWS header attribute): Perhaps the examples could be offset by a description of what they are? (They would probably also benefit from a disclaimer that the whitespace in the JSON is for ease of reading only.) Section-by-section comments follow. Abstract (also Introduction) If I read \"authentication of domain names\" with no context, I would be more likely to think of the sort of challenge/authorization process that this document describes, than I would be to think of using an X.509 certificate to authenticate myself as being the owner of a given domain name. But it's unclear whether there's an alternative phrasing that would be better. Can we get an (informative) ref for the \"required\"/requirements? W3C.CR-cors-2013-0129 shows up as \"outdated\" when I follow the link. IMPORTANT: The JSON Web Signature and Encryption Algorithms registry does not appear to include an explicit indicator of whether an algorithm is MAC-based; do we need to include text on how to make such a determination? For servers following the \"SHOULD ... string equality check\" and for requests where the equality check fails, does that fall into the \"MUST reject the request as unauthorized\" bucket? We don't seem to actually define what an \"ACME request\" is that I can see. From context, this requirement only applies to JWS POST bodies, and not to, say, newNonce, but I wonder if some clarification would be useful. IMPORTANT: How tightly are these nonces scoped? Are they only good on a specific TLS connection? Bound to an account key pair? Globally valid? (This is not a DISCUSS point because AFAICT there is no loss in security if the nonce space is global to the server.) IMPORTANT: Providing an accountDoesNotExist error type probably means we need to give guidance that the server should choose account URLs in a non-guessable way, to avoid account enumeration attacks. \"standard\" as determined by inclusion in this document, or in the IANA registry? The \"up\" link relation for going from certificate to chain seems to only be needed for alternate content types that can only represent a single certificate. (Also, the \"alternate\" link relation is used to provide alternate certifciation chains.) Could this text be made more clear? Presumably this is just my confusion, but what does \"GET order certificate\" mean? Er, from the IANA registry, right? Maybe also refer to the registry? IMPORTANT: I'm unclear if the \"contact\" is supposed to be a \"talk to a human\" thing or not. If there are supposed to be different URLs that are used for different purposes, wouldn't more metadata be needed about them? So it seems most likely that this is indeed \"talk to a human\", in which case that might be worth mentioning. (Are these always going to be mailto:?) IMPORTANT: Am I reading this correctly that the GET to the orders URL does not require the client to be authenticated, in effect relying on security-through-obscurity (of the account URL) for indicating which account is trying to order certificates for which identifiers? This leaves me slightly confused. A final authorization can have multiple challenges. But a client should only attempt to fulfill one, and a server should treat any one as sufficient. So I can only get to the case with multiple challenges present in a final order of both \"should\"s are violated? 
Is there a way for the server to express that multiple challenges are going to be required? Hmm, but Section 7.1.6's flow chart for Authorization objects says that a single challenge's transition to valid also makes the authorization transition to valid, which would seem to close the window? Section 7.5.1 has inline text that implicitly assumes that only one challenge will be completed/validated. Is there a difference between false and absent? Given that we go on to describe those two and also the optional onlyReturnExisting and externalAccountBinding fields, does this list need expanding? IMPORTANT: How does the client know if a termsOfService in the directory is actually required or just optional? (There doesn't seem to be a dedicated error type for this?) The text as-is seems to only say that if the server requires it, the field must be present in the directory, but not the other way around. I guess Section 7.3.4 describes the procedure for a similar case; should the same thing happen for the original terms acceptance? IMPORTANT: The example response uses what appears to be a sequential counter for account ID in the returned account URL, which loses any sort of security-through-obscurity protections if those were desired. Should a more opaque URL component be present, maybe a UUID? (The "orders" field would need to be modified accordingly, of course, and this pattern appears in later examples, as well.) It's a little unclear to me whether the fields the client can put in the POST are the ones listed from Section 7.3 or 7.1.2, or the full set from the registry. But presumably the server must ignore the "status" field, too, or at least some values should be disallowed! The IANA registry's "configurable" column may not quite be the right thing for this usage, especially given how account deactivation works. All requests including queries for the current status and modification of existing accounts? Or just creation of new ones? This key needs to also be tied to the external account in question, right? One might even say that it is provided not to the ACME client, but to the external account holder, who is also running an ACME client. This is presumably my fault and not the document's, but I had to read this a few times to bind it as the ACME account key, and not the external MAC key. nit: maybe "If such a CA"? IMPORTANT: I don't think I understand why "nonce" MUST NOT be present in the external-binding JWS object, though I think I understand why one is not needed in order to bind the MAC to the current transaction. (That is, this is in effect a "triply nested" construct, where a standalone MAC certifies an ACME account (public) key as being authorized by the external-account holder to act on behalf of that external account. But this standalone MAC will only be accepted by the ACME server in the context of the outer JWS POST, that must be signed by the ACME account key, which is assumed to be kept secure by the ACME client, ensuring that both key-holding entities agree to the account linkage.) Proof of freshness of the commitment from the external account holder to authorize the ACME account key would only be needed if there was a scenario where the external account holder would revoke that association, which does not seem to be a workflow supported by this document. 
Any need to effectuate such a revocation seems like it would involve issuing a new MAC key for the external account (and invalidating the old one), and revoking/deactivating the ACME account, which is a somewhat heavy hammer but perhaps reasonable for such a scenario. Account key rollover just says that the nonce is NOT REQUIRED, and also uses some nicer (to me) language about "outer JWS" and "inner JWS". It might be nice to synchronize these two sections. IMPORTANT: The "url" in this example looks like an account URL, not an account-deactivation url. If they are one and the same, please include some body text to this effect as is done for account update in Section 7.3.2. Is Section 7.1.3 or the registry a better reference for the request payload's fields? Does the exact-match policy (e.g., on notBefore and notAfter) result in CA maximum lifetime policies needing to be hardcoded in client software (as opposed to being discoverable at runtime)? (I like the order url in the example, "[...]/order/asdf". Not much entropy though.) IMPORTANT: Why does the example response include an identifier of www.example.com that was not in the request? Is the "order's requested identifiers appear in commonName or subjectAltName" requirement an exclusive or? After a valid request to finalize has been issued, are "pending" or "ready" still valid statuses that could be returned for that order? Elsewhere when we list "identifier (required, object)" in a JWS payload we also inline the "type" and "value" breakdown of the object. How is "expires" set for this pre-authorization object? We probably need a reference for "certificate chain as defined in TLS". "When a client receives an order from the server" is a bit jarring without some additional context of "in a reply to a new-order request" or "an order object" or similar. "challenges" looks like an array of objects, not directly a dictionary with elements within it. IMPORTANT: What do I do if I get a challenge object that has status "valid" but also includes an "error" field? I'm going to quibble with the language here and say that the keyAuthorization string as defined does not express a specific authorization for a specific challenge, since there is no signature involved, and the JWK thumbprint is separable and can be attached to some other token. (This may just be an editorial matter with no broader impact, depending on how it's used.) One could perhaps argue that the mere existence of the token constitutes an authorization for a specified key to satisfy the challenge, since the token only gets generated upon receipt of such an authorized request. I'm not sure that 4086 is a great cite, here. For example, in RFC 8446 we say that "TLS requires a [CSPRNG]. In most cases, the operating system provides an appropriate facility [...] Should these prove unsatisfactory, [RFC4086] provides guidance on the generation of random values." On the other hand, citing 4086 like this is not wrong, so use your judgment. nit: "a well-formed" Can we get some justification for the "SHOULD follow redirects", given the security considerations surrounding doing so? Should this "token" description include the same text about entropy as for the HTTP challenge? There is perhaps some subtlety here, in that the "configurable" column applies only to the new-account request, but its description in the template does not reflect that restriction. 
In particular, \"status\" values from the client *are* accepted when posted to the account URL, e.g., for account deactivation. Can there be overlap between the \"validation server\" function and the \"ACME client\" function? I'm stumbling up around the comma trying to parse this sentence. (Maybe a serial comma or using \"and is signed\" would help?) IMPORTANT: Also, I don't see where the key authorization is signed in the challenge response -- the payload is just an empty object for both the HTTP and DNS challenges' responses. Some of this text sounds like we're implicitly placing requirements on all (HTTP|DNS) server operators (not just ones trying to use ACME) to mitgate the risks being described. In general this sort of behavior seems like an anti-design-pattern, though perhaps one could argue that the behaviors in question should be avoided in general, indepnedent of ACME. Disambiguating \"ACME server implementations\" may help, since we talk about other HTTP requests in the previous paragraph. IMPORTANT: This may be an appropriate place to recommend against reuse of account keys, whether after an account gets deactivated or by cycling through keys in a sequence of key-change operations (or otherwise). I think there are some attack scenarios possible wherein (inner) JWS objects could be replayed against a different account, if such key reuse occurs. nit: spurious comma. IMPORTANT: I'm not sure I see how this applies to the HTTP mechanism -- couldn't you write a script to reply to .well-known/acme-challenge/ with . for a fixed key thumbprint? The validation server would ned to know about the ACME account in question, but not about any individual authorization request."} {"_id":"q-en-acme-1765c416a346afce98b0955aa8cbaaac45a6ea5156a1f87d123b4a2122056962","text":"In addition to forbidding explanatory text, this PR restricts the PEM-certificate-chain structure to the \"strict\" production defined in RFC 7468, Figure 3.\ns/Figure 3 of RFC 7468/Section 3 of RFC 7468/"} {"_id":"q-en-acme-450fb8aa84a32be59953bcca021e70ba0ad6e3edda98c63b0de068766228d495","text":"Thanks to Felipe Gasper for .\nMerging despite spurious CI failure, since it seems pretty clear that the CI failure is spurious. Hi all, Draft 16 has this: An ACME client MAY attempt to fetch the certificate with a GET request. If the server does not allow GET requests for certificate resources, then it will return an error as described in Section 6.3 . On receiving such an error, the client SHOULD fall back to a POST-as- GET request. From what I can discern from this list’s most recent posts on the topic, I’m under the impression that plain GET for certificates is not to be part of the principal ACME spec; should the above, then, be removed? -FG"} {"_id":"q-en-acme-c87a480ce29fa5604bd3e11ed069456a1e426f755bc6c47e9b5ead5679f4214e","text":"NAME Sure. I've just posted this: URL\nLGTMThe bullet points improve readability quite a bit, thanks NAME !also not purely editorial, but okay. I realize it's very late for making non-editorial changes to draft-ietf-acme-acme, but I'd like to propose adding a new badPublicKey error. This error would be returned by the server whenever it does not support, or wishes to reject, a \"jwk\" public key supplied in a client's request. Proposed text: URL The 'array of supported \"alg\" values' in a badSignatureAlgorithm response is useful, but ISTM that it doesn't provide detailed enough information to assist a client in generating a suitable public key. 
(If the consensus is that it's too late to add a new error type, then my alternative proposal will be to use \"malformed\" instead of adding \"badPublicKey\", but keep the rest of PR 478 as is; I think it's a good idea to call out the need for a server to sanity check each client-supplied public key). Rob Stradling Senior Research & Development Scientist Sectigo Limited"} {"_id":"q-en-acme-b17794c36b0d6aa244e189a99db30250aaec098bb58532e1e7a065c62433d722","text":"Avoid changing the \"him\" in the MitM paragraph, because of the inherent notion of a Man in the Middle, which is a larger discussion.\nLGTM. Thanks for the elegant fix."} {"_id":"q-en-acme-4dc4fae5b2b73b8e5ce76e7f794e91bce2705175888b40411891dfdd89395d62","text":"For discussion.\nSee also: URL\nI proposed on the mailing list to remove implicit renewal some time ago, and it received some support. Creating this issue to track it. All discussion should be done on the mailing list."} {"_id":"q-en-acme-712c6ae80dba189e69afb486db9ddcb61a120e727068d30ace0c5ceb0e02c488","text":"The format should be specified more clearly, by reference to a couple of RFCs The \"peer\" provision seems kind of awkward. For server implementations where the validation is done by a separate entity than the ACME interactions (like ), this means that the challenge no longer has all the information that the validation agent needs to do the validation. I'm inclined to drop it and put the burden on the client."} {"_id":"q-en-anima-bootstrap-83fec0478b56ab73b87836a1f6f7c7bef6c8fad43a2a8446e728e5b41ec2c093","text":"(n.b., i was not able to \"make\" and test the change, due to external pyang dependencies of some form."} {"_id":"q-en-api-drafts-07a29a2ad6465dc1c331fe8f30a88df8d2d6316becaabf8f8af82ef614f5106f","text":"Does this make sense outside of a protocol-specific context? I am aware that this is a different can of worms adding protocol specific parameters to clone, but this looks like a strange concept to add an Integer here that may have different side effects in QUIC and SCTP…\nThat's a very good question. This feature request came up in the QuIC mapping part of the discussion in Vienna, see: URL Not sure what to do: keep it as it is, remove it, or add the protocol: assign: \"if protocol = x, then stream ID = y\" read: \"stream ID is y, and your protocol is x\" I have to say, I don't like this particular way, as it would complicate matters quite a bit. But how else to do it?\nAll of the multistreaming protocols we know about have an integer stream identifier. I suppose a future protocol could use some sort of binary blob for identifying streams (which are... pedantically... representable as arbitrary-length integers)\nInterim discussion: let's make Clone() and Initiate() take a group of protocol-specific parameters (optionally) which allow protocols like QUIC and SCTP to extend this.\nI just made a very minimalist update, where \"protocolSpecificProperties\" is only a parameter of Clone(). Why not list it as readable properties, and why not as a parameter of Initiate, as we discussed? What's the point of saying \"there can be protocol-specific properties which are currently not defined, and they can be readable\"? This is for a future spec to say, when it proposes a new transport property. Regarding Initiate(), I think this should in fact not be a parameter to the Initiate Action itself, but a transport property to be supplied when creating a Preconnection. 
Again, this made me find that I would only write something like "There can also be transport properties that are not yet defined", which doesn't seem to be useful text. In conclusion, I found that only the Clone() Action needs this as a new parameter. Everything else is for future mapping documents. Thoughts?\nMany thanks! I incorporated your suggestion, and made a tiny edit to it (behaviour => behavior, underlaying => underlying, and "stream or connection" instead of "stream/connection").\n... combined with "if the transport is mux'ing, else ..."\nThanks – I guess we have a fairly good solution."} {"_id":"q-en-api-drafts-f944d21e4cb1b9800a1bf71b64dc9591ea2e4c9ed425caecc64a9289ddfe9bf7","text":"Initially I thought the description for UDP-Lite was incomplete, but mapping back to the details of the API draft I realized it was incorrect. UDP-Lite supports the msgChecksumLen and recvChecksumLen properties. The API for these primitives is a bit messy. As specified I think it means the following: UDP can use them to set full coverage or no checksum using the special value of 0. UDP-Lite can use them to set full coverage or partial coverage. Setting the checksum to only cover the header, although supported by UDP-Lite, will not be possible as 0 is a special value. The special value of 0 cannot be used by UDP-Lite, but I left this detail out of the pull request as I thought it got very detailed. It can be added if you think it is needed."} {"_id":"q-en-api-drafts-7ef246f06216bc339222fba7f6580842960885c33305acfeb165dcab08d30f3e","text":"What makes these errors \"special\"? It seems a strange term to use, and the meaning is unclear.\nI didn't write this, but I think it was an effort to lump together, from minset, two quite different types of errors: 1) notification of excessive retransmissions; 2) notification of ICMP error message arrival. Any suggestions for a common term?\nIf it's just those two types of errors, we can probably just list them...\nURL is what I did in minset :)\nso the suggestion is to split these out into two protocol selection properties?\nyes, I think so\nall right then, see\nlgtm"} {"_id":"q-en-api-drafts-d1664269d48618fab54cc97c026535842c40acf02f7aeb26f7a062f4db701dc4","text":"This is a reworking of the text from Section 4.2 of draft-trammell-taps-post-sockets-03, intended to help motivate the choice of Message-based APIs with framing for the transport services architecture. The level of detail here seems appropriate for the architecture document, rather than the API.\nWell written, NAME\nWhat's convenient about the framer? If I have messages of 100 bytes and the length is what defines my message, what's the point of requiring me to define a framer function that I give to the system? I just send, send, send, off go 3 100-byte messages. Hard to get more convenient than that :) I understand that the hook to a sender-side and receiver-side framer can allow you to offer a library that does framing when needed; I don't personally see this as a big win, but okay... and I understand that forcing apps to use a receiver side de-framer can, in principle, let us do cool things on the receiver side (accessing messages out of the bytestream). But for sending this just seems to be an uncomfortable add-on. I guess I shouldn't be arguing it as somehow it's again just syntactical sugar, and I said before that I don't care too much about syntax here. 
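To ground the framer debate in this thread: the claimed convenience is a hook that converts application objects to wire format at the API boundary. Below is a minimal, hypothetical sketch (class and method names invented, not the TAPS interface): a length-prefix framer over JSON, which is indeed a no-op win for the fixed-size-message case discussed above, but lets Send() accept structured data:

```python
import json
import struct

class LengthPrefixFramer:
    """Illustrative framer: serialize an application object and prepend
    a 4-byte big-endian length, so the receiver can recover message
    boundaries from a byte stream."""

    def frame(self, obj) -> bytes:
        payload = json.dumps(obj).encode("utf-8")
        return struct.pack("!I", len(payload)) + payload

    def deframe(self, buf: bytes):
        # Returns (message, remaining_buffer), or (None, buf) if a
        # complete message has not yet arrived.
        if len(buf) < 4:
            return None, buf
        (length,) = struct.unpack("!I", buf[:4])
        if len(buf) < 4 + length:
            return None, buf
        payload = buf[4:4 + length]
        return json.loads(payload.decode("utf-8")), buf[4 + length:]
```

On receive, deframe() is what allows message boundaries to be recovered from a byte stream; without it, as noted later in this exchange, the receiver just sees a never-ending sequence of partial reads.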
I get it, for the 100-byte messages the framer just won't do anything, and for sending I have to use Connection.Send(Framer.Frame(Message)) instead of Connection.Send(Message)... these calls could basically be renamed into SendMessage vs. SendSomethingHuge. I said before that I don't care too much about these syntactical bits. I guess it's just that tying "Connection.Send(Message)" to "we assume your Message is big and terminates a TCP connection" is really counterintuitive to me.\n... and I think I answered to NAME in the wrong place :( sorry github\nNAME what's convenient about the framer? In a C based implementation, very little. In a high level language implementation, however, it means I can send an object and have it convert the high-level structured data into the on-the-wire format automatically, using a reusable library.\nno, no, that was right. Framers can take an object by reference or value of any type, and turn them into an appropriately framed octet array. Syntactic sugar, yes, and pointless in some languages. But have a look at URL -- here, you're pushing down the thing that converts application structures into wire structures into the API, so the API speaks to you in terms of your own structures (in Go, this is generally called a Marshaler; IIRC Java has similar terminology). If your application-internal structures are already serialized byte arrays, you're right, there is zero win here.\nNAME Framers are still very very useful in C. You can use callbacks, function pointers, etc, to help lay out your data, and doing that in the context of the networking thread improves efficiency (beyond the simpler API).\nNAME even better, then!\nalso: so this whole line of argument is extremely perplexing to me, and indicative of some sort of fundamental failure to communicate on this particular point. absolutely nobody is saying that this interface implies "your Message... terminates a TCP connection" on the sender side. I can't see where you're getting the impression otherwise in the text. (When receiving, yes, if the receiver has no way to deframe the stream, you get a message that never ends, as a series of partial reads that looks just like a bunch of calls on a stream; i.e. when it fails, it fails to something that basically works exactly like sockets with callbacks.) I really suggest we table this whole discussion until London, because we don't seem to be making much progress in here, and ask you to trust me until then when I say that there is never, ever, ever a situation in which an implementation of this interface that was actually working would ever look at a single message on the sender side "oh, cool, that's done, guess I'll send a FIN then!", ever, because that would be an incredibly silly thing to do. (where's the blink tag in github markdown? i really need the blink tag for this)\nOn this: Sorry, there's a lack of context - I interpreted this out of private emails from Tommy, which seemed to pretty clearly say that, if you just call "send" without providing a framer, you do send a FIN - and I just found that awkward. But heck! Clearly we all don't want silliness to happen and we want a nice clean interface, and clearly we can quickly agree on this in London. So no worries, and see y'all soon!"} {"_id":"q-en-api-drafts-8c737d136b0ccc650027f7574ed630121b72a4c9bd1903b5ed3331bfb6f10696a34dda8a","text":"I didn't like something about reliability (let's not throw an error when we can't be UNreliable, only when we can't be reliable). 
I updated the text on ordering a bit, to say that the Property is ignored for the first Message on a Connection, and rephrased it slightly to talk about the receiver-side transport system and the receiving application - before, the text might have talked about a sender-side behavior. I added a TODO DISCUSS comment about per-message Niceness.\nFine with me, maybe fix minor knitThanks, all good now I think."} {"_id":"q-en-api-drafts-6309dae426c347cee6995e94fd6829f86118b2c0ca55ce0320ec95c38b7f9ee1","text":"I guess here is not nessecarily a notification required in sychronuous communication\nMerged in master to avoid conflict, and tweaked working to remove \"some times\", which felt a bit too colloquial in tone."} {"_id":"q-en-api-drafts-7cb45e3de603ecd6010ecdbc5e9b3594228234b0fd3b31bfb6f10696a34dda8a","text":"Notable changes: added control over using multipath (which was a late addition to minset) added a timeout for conn. establishment, separate from the other conn. abort timeout (see ) renamed InitiateWithIdempotentSend into InitiateWithSend, and adjusted the text to recommend declaring idempotence. This addresses the minset function that goes back to TCP's send-with-SYN.\nStatus: NAME said \"LGTM modulo tiny nits\" and I think these nits are now addressed. NAME requests something more than just a boolean to enable/disable usage of multiple paths. It seems to me that this needs more discussion, but it also doesn't seem like a show-stopper for the boolean to begin with; I think we can discuss this more after landing this PR. NAME hasn't reacted (or am I missing this), but this has an ok from NAME and NAME In conclusion: I think this is good to go. I'll wait until tomorrow; unless someone speaks up by then, I'll merge this PR.\nIf you make it a boolean it cannot be used as a selection property, so I do not think that is a good choice. Or how should it be interpreted for the selection?\nNAME what do you mean, that it should be a (binary - yes/no) Preference instead? I thought this would be more limiting but I'm fine with this as well.\nNAME I meant that it should be a Preference as pointed out in the original comment by NAME I do not understand what a boolean means during selection (is it a requirement or a preference?) I also do not understand what it means in the middle of a connection. You mean you should try and use a single or several subflows for scheduling data, assuming you already have a multipath connection? I guess this is possible, but then the description needs to be more clear that this is what it means. In the appendix about minset your have \"Disable MPTCP: \"Parallel Use of Multiple Paths\" Property.\" I assume this refers to the Protocol Selection phase? It does not make sense to disable or enable MPTCP in the middle of a connection I think. I guess part of how to specify the property depends on how to interpret this part from the intro to the Managing Connections section: \"The application can set and query Connection Properties on a per-Connection basis. Connection Properties that are not read-only can be set during pre-establishment (see {{selection-props}}), as well as on connections directly using the SetProperty action\" I interpreted this as properties set during pre-establishment are selection properties, but maybe this is not the intention as they may need to be expressed differently? If they are not intended as selection properties, we are missing a selection property for multipath I guess.\nNAME Ok. 
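For reference in the boolean-vs-Preference exchange above: TAPS-style selection properties carry a preference level rather than a boolean, so that one property can express both a hard requirement and a soft wish during protocol selection. A sketch (the enum follows the require/prefer/avoid/prohibit scheme used in the TAPS drafts; the property name is abbreviated for illustration):

```python
from enum import Enum

class Preference(Enum):
    REQUIRE = 1        # fail establishment if the feature is unavailable
    PREFER = 2         # use it if possible, proceed without it otherwise
    NO_PREFERENCE = 3
    AVOID = 4
    PROHIBIT = 5       # fail establishment if the feature cannot be avoided

# Hypothetical pre-establishment usage: a multipath *preference* can
# guide selection (e.g., pick MPTCP when available), which a plain
# boolean cannot express: is True a requirement or just a wish?
preconnection_properties = {
    "parallel_use_of_multiple_paths": Preference.PREFER,
}
```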
That's what I meant, yes - but I agree that's not covered by the description, it really is a selection preference only as it's written now. OK (though you CAN do it with the "real" API, by closing subflows, but this goes beyond the spec'd protocol, and I'm also not sure it's a good idea, especially when limited to only enabling / disabling it altogether). I would have understood this as in your first interpretation: "properties set during pre-establishment are selection properties".\nNAME Closing a subflow is not the same as disabling MPTCP to me. You will still be using MPTCP, only with a single subflow. And you can certainly not open a subflow unless MPTCP is already enabled. But seems we now have a common understanding.\nNAME ack, we've converged. I just pushed a commit that turns the "Parallel Use of Multiple Paths" property into a Selection property.\nthese are a few minor changes to apply, and properties to add (e.g. "disable MPTCP")\nthis is done, yes?\nNo. But I'm planning to do a PR to address this issue this week.\nNote that this also requires closing , which I re-opened with a concrete proposal to address something from minset.\nLGTM modulo tiny nits\nSorry, missed this until just now. Looks fine with me, I just have a small question and my 2 cents to the multipath discussion (which we can have at a later point)."} {"_id":"q-en-api-drafts-9a697faff3c2d3c83ccf4472228c7966c3bb9c825737b10667b5a6fd65ea9952","text":"This closes issue\nWeren't we supposed to stop making PRs and issues until March 5? See Zahed's email to the TAPS list\nNAME sorry – I did not read list mails since yesterday morning.\nYes, let's hold off on these until we move repos!\nWe discussed the use of a \"Default\" preference level that restores the system default in . In addition, we should check whether we really need to fix the default values for all Selection Properties in the API document or can make some of them to be \"Implementation Specific\".\nClosed by PR\nLooks good to me."} {"_id":"q-en-api-drafts-17379de1d39ad7e7f0bf4014940c8e3c7bce399b0f986971fed0010b73ea0df8","text":"This adds an explicit Listener object to the API ( to clarify behaviour of multi-streaming protocols ( clarify behaviour and enable re-use of Preconnections (\nWe talk about listen objects in architecture and implementation, but not in the API. Changes needed: Let Listen() return a Listener. Move the ConnectionReceived event to the Listener. Allow Preconnection to be used multiple times.\nThis issue popped up while discussing and seems to be a weird QUIC multi-streaming issue, possibly affecting SCTP too): When a multi-streaming protocol like QUIC allows both ends to open new streams (e.g. by cloning the QUIC connection representing the stream), both sides need a way to get a ConnectionReceived event for a new stream. That might result in the following weird pattern: I think this needs to be documented somehow and we may need to add a Selection Property to turn off this behaviour.\nCalling Initiate() is an active open. Once the connection is open, the peer can open new streams, but those would surely be events on the Connection, not on the Preconnection.\nNAME I am not sure with this one: assuming you have a bunch of Connections, which one should get the event? The first one? What happens if I close the first one?\nThat's a good point, and I'm not sure I know the answer, but I don't think the Preconnection should fire the events. A Preconnection is a potential connection, yet to be instantiated, but these connections come from an already active connection. I've never been entirely comfortable with cloning a connection as the multi-streaming abstraction. 
This is perhaps the first scenario where it struggles though.\nI do not understand the pattern in the example. You do not get a ConnectionReceived after an active open, you need a Listen?\nNAME Your assumption is true for TCP, but not for multi-streaming transports like QUIC. With QUIC, you can indeed get additional TAPS connections (QUIC streams) based on an existing, initiated connection. The question is how to handle this. NAME A possible way out would be making the new Connection itself be returned by an event then.\nNAME the TAPS interface is the same whether you have TCP or QUIC underneath. For TAPS an active open generates a READY event not a ConnectionReceived? I do not think your example is a valid TAPS sequence.\nNAME the Ready event is not shown in the example (it would be within the first Initiate()). The problem I raised is how to model the addition of streams to a multi-streaming connection. In case of QUIC, additional streams can be initiated by the server/listening side of the QUIC connection. If we model QUIC streams as TAPS connections, this results in events at the client (and server) side for each new QUIC stream.\nNAME agree we could add this, although it's a significant shift in the model. If we did that, we might also want to revisit Clone() as the way of creating a new stream on a connection, in favour of a method on the Connection. NAME did you implement this with QUIC? How does your implementation work?\nI also see a problem with the communication pattern here. To me, modelling a stream as a TAPS connection means that for every new connection that a peer creates (be it a stream of an existing connection or not), I'll have to listen. So: if I expect 5 incoming connections, which may e.g. be 1 connection with 1 stream and 4 later-added streams below the API, I'll have to listen 5 times. What this also means is that the grouping notion should be a readable property - I believe that we currently only describe it as something that's actively created via clone(). Regarding code, no idea how NAME has done this, but I believe that the NEAT implementation is like I describe here. For more details on this specific aspect, see: URL\nNAME We create a Listener from a Connection to handle inbound streams associated with the same parent transport connection. NAME Currently, our Listener object doesn't require calling Listen() for each new connection—you can get the event invoked many times, so it doesn't require knowing how many streams may be opened by the peer. In order to get back pressure, we're talking about having a window for how many new inbound connections will be allowed at a time. 
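A sketch of the approach described in the last comment, with hypothetical method names (new_listener() is invented for illustration): peer-initiated streams surface as Connections via a Listener derived from the parent Connection, and the ConnectionReceived handler can fire any number of times:

```python
def handle_message(message, message_context):
    # Application-level processing of one received Message.
    print("received on stream:", message)

def on_stream_received(stream_connection):
    # Fires once per stream the peer opens on the parent transport
    # connection; these Connections are grouped with the parent.
    stream_connection.receive(handle_message)

# connection: an already-established multi-streaming Connection
# (e.g., over QUIC); both calls below are illustrative, not normative.
stream_listener = connection.new_listener()
stream_listener.on_connection_received = on_stream_received
```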
Here, we already have PreConnection.Stop(). In the interest of keeping things simple, do we think that intelligently switching between a listening and a non-listening mode would be enough? If so, maybe we should have a call similar to Stop which stops listening, but does not dispose of the PreConnection, such that Listen could be called again. Maybe have an \"End\" call which stops and disposes, and let Stop only stop listening (because \"Stop\", to me, indicates only stopping but nothing else).\nNAME This solution sounds very reasonable – I guess we should do it the same way for TAPS and add that as Text to the API NAME Some comments: It depends – if writing a server, you might want all streams of a multi-streaming connection grouped, but not all multi-streaming connections received through the same listener. So the behaviour wether to group should be a property of the Listener. I would not group connections received through the same Rendezvous (analogous to server-style listener). We might want to add a property to the listener\nNAME I agree, but: just because groups may encompass more than the Connections from one Listener doesn't mean that it would be bad to always group the Connections from one Listener (i.e., group them among themselves, but maybe group them with others too). Besides, I think that wouldn't be the listener deciding anyway. Just to understand this, are you saying \"no, see my argument for case 1\" ? Then we're just discussing case 1, my answer is above. Otherwise I'd say: why not? yes ... but what do you mean with ? We have , just the semantics are not what I suggested. Regarding , I think that's a good idea - my own proposal was me overdoing \"keep it simple, let's not add features\". Nobody needs to use this unless they want to.\nNAME I guess the answer really depends on what a connection group is for you. I always saw connection groups as an abstraction for multi-streaming connections (or their emulation) – therefore I see no use for grouping connections from multiple listeners. My opinion is auto-grouping for multi-streaming is fine, otherwise it is not. really falls out of 1 I want your semantics, but call it\nNAME Me too. I can imagine other things ( this paper of ours comes to mind: URL ), but generally I agree and don't think we should complicate the API for this. But I'm getting confused: you wrote \"you might want all streams of a multi-streaming connection grouped, but not all multi-streaming connections received through the same listener\". That sounded to me like you want to support multiple streams, but from several listeners?! ACK Agreed!\nNAME let me elaborate on this… You might want all streams of a multi-streaming connection grouped, e.g., all streams within a QUIC connection You might not want all multi-streaming connections (e.g., QUIC connections) received through the same Listener object grouped You definitely do not want all all streams of all QUIC connection received through one listener grouped (except if you have hierarchical groups that allows you to sort that out the cases above)\nI agree with all this. Anyway I think it's a moot point: whether they're grouped or not is not up to the listener to decide? So grouping needs to be something that can be queried.\nOk – fair enough – but I still think the Listener is the right place to configure whether connections that come out of it should be grouped by default or not.\nNot unless we have a concrete idea on what to do with this information inside the transport system. 
=> I think no text update is needed as a result of this issue and lean towards closing it...\nI don't think we really need a text update for the grouping, but for the original question. To address the handling of incoming streams in multi-streaming connections, I would like to have the solution NAME outlined reflected in the API.\nOne question on this, for NAME the description of your Listener object being able to accept multiple events, is that how it works in general or was that specif for a multi-streaming scenario? As specified I am not sure that our Listen action does not allow multiple ConnectionReceived events? The text says\nNAME The listener object for all cases (both multi streaming and receiving brand-new transport connections) allows multiple events. I think the existing text is a bit ambiguous, but it seems that \"listening continues\" implies that the event can certainly fire again.\nRevisit after is done\nRegarding this text: This seems to imply that the Preconnection object, which may be full of many carefully tuned settings that could be useful for another Connection, is no longer usable. While I understand that the specific Preconnection cannot change for a Connection, this is inconvenient to the application. I see two good options to solve this: Make a deep copy of the Preconnection upon Initiate(), and say that a Preconnection may be used or changed but changes can no longer influence the Connection (this is what Network.framework currently does) Allow a URL() method to let the applications do the deep copy themselves when and if they want to re-use.\n+1 to reusing the preconnection. Do you even need a deep copy. The connection is a new object that should be independent of the preconnection anyway, no? Another thought: Are there actually ways to get different connections out of the preconenction. E.g. you do racing and end up with two open connections and (suddenly) want to use both...?\nI'd like to see this as well. To me it makes most sense to allow calling Initiate() on the same Preconnection multiple times to get multiple Connections, but not allow to change this Preconnection anymore after the first Initiate. This way it would be similar to Listen() and Rendezvous(), where we have a Preconnection that yields multiple Connections, and that cannot be changed anymore after Listen() or Rendezvous(). So here we already have a binding between the Preconnection and the Connections that come out of it. If the application wants to change the Preconnection before re-using it, I'd be in favor of letting it explicitly clone the Preconnection - not sure if we actually want to call it clone() though, because we already have Connection.Clone(), which creates a Connection Group, a completely different thing.\nWhy do you think it should not be allowed that a preconnection can be changed after the first initiate? If you don't change it you should get the same kind of connection, if you change it you might get something different. Your choice!\nI wanted the API contract to be similar for different forms of Connection establishment. But I may be overthinking this.\nFirst, I like the idea of reusing Preconnections. I am not sure whether we should define now whether they should be changeable after the first initiation. There might be kinds of Preconnections where this is a good idea, there might be others where it is not. I anticipate one can realize connection pools as a sub-class of Preconnection. 
In this case, in such cases, it should not be possible to change a Preconnection.\nSo the difference is if you think you morph a preconnection into a connection, or if the preconnection create the connection. i think the first think does not make sense because their are two completely different things; they have not actions or events in common.\nNAME The morphing one has its beauty for modeling TCP fastopen / 0-RTT data.\nWhether it makes sense to re-use a perhaps depends on how fully specified it is? Reusing a completely specified , that will just re-open an identical 5-tuple as an existing might not make sense. Reusing a partially specified , e.g., with the source port left as a wildcard to new connections can open without conflicts, seems reasonable.\nNAME right, using the exact same set of fully-specified properties isn't going to work. But having a partially specified one and copying it, or modifying one that was used (by switching out the Remote, etc) would be useful.\nThis might work better the other way -- on , the Preconnection corresponding to the initiated/initiating connection gets deep-copied into the new connection, leaving the existing Preconnection in the state that it was. That deep copy, inside the connection, cannot be reused, but the existing one could be. (This probably only works for connections on which you . turns the preconnection into a listener from which multiple , each of which will have their own deep copy of the listener, will spawn. might be even weirder, but there's a potentially bigger win to allowing reuse, but I don't quite have my head around how it would work)\nSo i think there is actually not that much too deep copy because a preconception is not a super class of a connection. At the moment when you call initiate or rendezvous the preconception should create a connection and store all information regarding this connection in the connection object which is later returned. The only thing that needs \"copying\" is the properties but you need to create a new object anyway because preconnection properties have preferences and connection properties not but have to be marked as mutable or unmutable. These two things are also not the same. So the only that that's left to copy maybe are the endpoints. Not sure if the preconnection and connection both have a copy of the endpoints or if an endpoint only exists once and both have a reference (which means if the endpoint gets somehow updated that's relevant for both) or maybe even the connection does not need a reference to the endpoint anymore...?\nNAME right – but I don't know a way of specifying \"a is consumed if fully specified, else may be reusedin any current programming language. Maybe tracking lifetimes so closely doesn't matter though, and we can just write this in the draft.\nI have no problem with an Interface where some Preconnections can be reused and others can not. That makes a pretty nice case for -- preconnection reuses itself for multiple incoming connections. For ` I see this API-wise as some kind of listen called at both sides.\nSide meeting says we can reuse the initiate call. Implementation should specify that a deep copy is made on Initiate() or Listen()\nWill be trivial to fix after is done\nSorry to reopen an old issue, but when reading the current API draft, I noticed that it still says that reusing the Preconnection is possible for Listeners, but not possible for Initiate() and Rendezvous(). 
For example, see Section 6.1: \"The Initiate() Action consumes the Preconnection […]\" From what I read in this issue, I think Preconnections should be allowed to be reused for Initiate() and Rendezvous() as well (possibly with a note on deep-copy).\nThis looks just right to me, thanks a lot for doing this! In particular having read again, I believe that NAME should definitely have the last word on this one. So I'll go with whatever NAME says."} {"_id":"q-en-api-drafts-bcab32f91bfffa1ed513f3bc70d71a328ee2b7f5a1fcb97b32569280da02c000","text":"Starting to address This is an early review, to check if we like the structure. I'd also like help filling out the SCTP section!\nI can also help with SCTP, which I would do in the same style that I suggest above (for TCP, in my example of \"Abort\") - also later.\nWe've updated Implementation mapping details for most protocols with , but we didn't tackle SCTP yet.\nI'm fine with this being assigned to me. However, it's more than just writing the SCTP text. Here's the relevant part of what I wrote in PR : I don't suggest to delete much of the text that's there but definitely amend it with the RFC reference at least for what's covered in RFC 8303. There's a very direct line from this type of text in RFC 8303 to the original specs, so I do believe that should be helpful. That's (obviously) post-Montreal though!\nNAME please send a PR along these lines.\nLGTM modulo mwelzl changes.I think this is a good step forward. This said, I think we should describe things focusing on the transport's API call to be made (in line with the spec - that's why we have RFCs 8303 and 8304) rather than protocol internals. E.g., rather than saying that Abort on a TCP connection transmits a RST, I think we should simply say that Abort calls ABORT.TCP [RFC 8303, Section 4.1]. I can take care of these things, later. I'd still like to focus more on the API doc at this time."} {"_id":"q-en-api-drafts-d79b9627857bb59ea6b1f4d2aa1cfb77c8b71bc4e45c8f70df485acc8b14e04e","text":"Maybe add the ASN source address (set source address), source filter (add remote addresses)? Maybe also add that for sending, you specify the group as remote and the socket becomes write-only? Should we require specifying direction? This would be slightly harder to use, but future proof in case someone implements reliable ASN on TAPS…\nWe need unidirectional connections. You know, for multicast. And ICN. And unidirectional streaming protocols.\nThis is a dup of , which has more discussion\nI am re-opening this ticket as only dealt with unidirectional streams in unicast context (added with PR ): Current status: The current model for unidirectional streams supports half-closed connections and QUIC style unidirectional streams so far. We don't know yet whether it is sufficient to have canread/canwrite property on the connection level to model unidirectional connections We don't have pre-connection methods to create multicast sessions yet\nthere's a selection property for this, tagged discuss, which we haven't discussed yet. I'm moving this down to the appendix as part of my restructuring in my work on\nIf help is still wanted here, I'm maybe up for an attempt or 2 at adding multicast support. 
Has there been some prior discussion I can review about rough ideas on pre-connection methods for multicast sessions, or should I just try making something up?\nlgtm"} {"_id":"q-en-api-drafts-c9dbd0380f3262177e152bd0ff1f8af744343f6e541586d0b9ac9f26c1d387bb","text":"I'm not quite sure what I meant by \"Only full coverage is guaranteed, any other requests are advisory\". I assume we mean that even if a partial checksum is requested you get full coverage because requesting partial coverage and getting no checksum doesn't seem right...?\nExactly, and you made it clearer. Thanks!"} {"_id":"q-en-api-drafts-0177d5a9f8c58e5f01264ea4e2152e4bea9dc295e43bac2f5feea156e68b5f37","text":"In the \"Specifying Endpoints\" section, added a reference to the Selection Property that does a similar thing but with more fine-grained control. It wasn't clear to me what is the best place to add this, so I put it right below the first example of specifying an interface name for the Local Endpoint to say that you can also do it differently. If you have better suggestions, please let me know. To the ListenError and RendezvousError, I added text mentioning the \"reconcile Endpoints and Properties\" pitfall. I think this is probably more general, i.e., there might be more sources of errors than just local interface, so I kept it generic.\nFor Local Address Preference we only have the option to prefer stable or temporary. However, I guess we would also need a way to e.g. specify that stable is required, no?\nIs this addressed by PR ?\nI don't think so.\nWhen reading the current API draft, I was wondering the following: If an application wants to specify one or multiple local network interfaces, it can either set the Local Endpoint(s) or it can set the \"Interface Instance or Type\" Selection Property. I think both ways are valid and should exist, but I'm wondering if we should specify anything on the interaction of these two ways to do a similar thing. For example, should we explicitly mention the Selection Property in Section 5.1 (\"Specifying Endpoints\"), to make readers aware that there is a \"better\" way to do this, which allows more flexibility? Also, if an application specifies conflicting things, I guess this leads to an InitiateError, ListenError, or RendezvousError. Our current text already says this for the InitiateError: But should we make this particular issue more explicit here and/or should we add it for the ListenError or RendezvousError?\nI think so, yes! Regarding the error in case of a conflict: I think so, yes - to all your questions: make it more explicit, state it in the description of both the ListenError and RendezvousError. Just my 2 cents\nThank you - works for me!"} {"_id":"q-en-api-drafts-1ac93aa9abb2275910548f3cb4bf8c0e6b8eb6551104b9c74060973140f77a77","text":"Addresses\nIn the architecture document, I noticed that we sometimes say \"Transport Services system\" and sometimes \"Transport System\". Are these the same? Are \"Transport Services implementation\" and \"Transport System implementation\" the same, too?\nI prefer \"Transport Services system\" and \"Transport Services implementation\"\nI find it long and clumsy :( Side note: we'll have to rename it everywhere in the API draft too if we go with this.\nwas this fixed by ?\nYes it was. 
We already used Transport Services most places so it's not all that different.\nLooks good, thanks!"} {"_id":"q-en-api-drafts-4d3e294f69bd8442243d7542dc15c0aa615314175f6f9e685a134492c699cc0c","text":"this should\nRight there was a typo: Replacing the last 2 sentences, the last sentence ought to be: \" Endpoints should avoid IP fragmentation ({{!RFC8304}}) and when used with transports running over IP version 4 the Don't Fragment bit will be set.\" These are two separate issues that a stack implementor has to decide to do: (1) Decide if the endpoint will fragment; (2) decide if the path can fragment the fragments. In some OS this will be bundled together, in other OS they will not.\nDiscussed on the interim call. The name probably could be much better. We also want to model in the text the atomicity property of UDP transfers.\nThis text is broken: section 5.1 / o Singular Transmission: when this is true, the application requests to avoid transport-layer segmentation or network-layer fragmentation. Some transports implement network-layer fragmentation avoidance (Path MTU Discovery) without exposing this functionality to the application; in this case, only transport- layer segmentation should be avoided, by fitting the message into a single transport-layer segment or otherwise failing. Otherwise, network-layer fragmentation should be avoided--e.g. by requesting the IP Don't Fragment bit to be set in case of UDP(-Lite) and IPv4 (SET_DF in [RFC8304]). / we need to rewrite\nWaiting to see exactly how DPLPMTUD finishes WGLC ... will be ready for text in Jan.\nReads much better except the last broken sentence"} {"_id":"q-en-api-drafts-8f1d2ee4336f0d0f42533e7fdc604138064cd5d73c5cd7e154b65a0a7b998316","text":"Fig. 4 isn't fully consistent with sec. 12 in the API document. A few things that stood out to me: InitiateWithSend() is missing. I can see that this isn't trying to cover everything, but randomly leaving some things out makes it all a bit confusing IMO... \"Connection Received\" reads as if it should really be the \"ConnectionReceived<>\" event in the API doc. So I would put these events there and use the \"<>\" to indicate them. Same with \"Closed\"... this is actually an event, so why not write it as such, with the \"<>\"? RendezvousDone is missing... should we include all events or not? Of course that would then also require adding Sent, Received, ... maybe it would be better to indicate that events are events, but put a statement somewhere saying that not all events are shown. In the same vein, I would replace \"Connection Ready\" with \"Ready<>\" \"Conn. Finished\" reads as if it might be an event, but this event doesn't exist... I would just remove this\nI've added InitiateWithSend(), and removed Conn. Finished; you're right, these are confusing. This diagram does not use the API draft's event notation, and I'm not sure it makes sense to introduce it here -- it would require a bit more expository text only to make a diagram that is an attempt to provide a high-level view provide a... less high-level view. (The expository text now says \"... some actions ... have been omitted for brevity and simplicity\" and I think that's the right approach.)\nAgreed!\nlgtm"} {"_id":"q-en-api-drafts-9c078aadc787a53031d1b3763c12f03215705f6109d4d7003e839a995d86039c","text":"Following the discussion in , explain why TCP UTO is included as a Protocol-Specific Property, while other properties are not. 
Also, some minor consistency fixes.\nThanks for doing this!Thanks"} {"_id":"q-en-api-drafts-c72009befedb31eb3abeca75d44370663fd328ac768feecf4629c7ea3889ac08","text":"... as suggested by NAME\nThe section on partial send says: \"All data sent with the same MessageContext object will be treated as belonging to the same Message, and will constitute an in-order series until the endOfMessage is marked. Once the end of the Message is marked, the MessageContext object may be re-used as a new Message with identical parameters.\" Can we reconsider for a second if that is really the way we want to design this? This is really implicit and therefore an easy source for errors. I guess we could alternatively use an explicit message identifier...? Or the send action always return an identifier and for subsequent sends we input this identifier again? Or someone has a smarter idea?\nI don't have my head buried in code like some others here (NAME NAME , ..), so I guess that these folks should speak up - but just from reading this, I agree that the current design sounds like an easy source for errors.\nI'd prefer to just remove the part that mentions reusing the same object. That's an implementation detail (and a perf optimization). The rest seems correct."} {"_id":"q-en-api-drafts-3374e3c999b4b1da94aa7ed71962174f002e516441ee1cee855fc5100625de7a","text":"Fixing three oddities in the abstract, issue\nOdd English: /where endpoints have multiple interfaces and potential transport protocols to select from/where endpoints could select from multiple interfaces and potential transport protocols./ We use the words /abstract programming interface/ and the acronym /API/ but never tie these two together. I am not huge fan of: /lowest common denominator interface/ ... as an engineer, this sounds like a good thing, although /lowest/ isn't that clear, so I wonder if we can find a better term?\nResolved in PR\nLGTM modulo one nit"} {"_id":"q-en-api-drafts-285cba6400091ce8762f5c70e6caff5cd7c35443d77c59397ea5ccd4f3e189c0","text":"Changes to address Issue\nMessage Properties This isn’t really about /removing/ we need to express this differently, not sure what you wish to say: For example, TCP cannot remove bytes from its send buffer, while in case of SCTP, such control over the SCTP send buffer can be exercised using the partial reliability extension [RFC8303]. and: /Final: when this is true, it means that a transport connection can be closed immediately after its transmission./ I think this is: /Final: when this is true, it means that a transport connection can be closed immediately after transmission of the message./\nSee PR Update draft-ietf-taps-URL\nLooks good, just some more nits on top"} {"_id":"q-en-api-drafts-a27a4dcab43e3e757706998ebb302a4bd77d164aff3c4c688db909f54c2efc2d","text":"I think this helps, but one of the confusions is the distinction between a framing to provide a message-oriented API, and the use of framers underneath that API to change the representation of messages on the wire. I wonder if some of the difficulties we've had with framers is that some of us have been using the term to represent one of these concepts, and some the other? One is the concept of a framer that accumulates data and returns an object from a call, rather than returning a byte array containing a (partial) HTTP response. That's the API-level framer. The other is the sort of framer that performs TLS encryption on the HTTP request, after has been called. This transforms data, below the API. 
Do we need two different terms?\nNAME - personally, I always thought that Framers can do both of these things; if this wasn't clear enough before, then I think your part 1 (returning an HTTPResponse object) was clear from before, and now this PR makes it clear that they also intercept the send call to transform data afterwards. So IMHO that's just the clarification that was needed.\nSorry! Obviously I only wanted to comment, but hit the wrong button :)\nI know we moved some text about framers into the implementation doc but that really doesn't seem right when I look at it now. The section about framers in the implementation doc is the only section that defines an interface which probably is already an indication that just moving this text there was not right. However, also the text in the API doc seems now incomplete. I think we should move it back and rather define some real simple examples framers in the implementation document (just a prefixed length field (as many UDP mapping use) or maybe even an example implementation for STARTTLS). Further after moving it back we could figure out if really all parts of the interface are necessary or if there would be a more minimal interface that we could use. E.g. I'm not sure about the FailConnection() call or at least I think this could need more explanation.\nAlso I think it would be good to the section on framers earlier in the doc. I will open a separate section to propose some restructuring (as I also think it would be good to have the connection properties earlier).\nI think the real problem with framers is that we left them vague to avoid having to worry about all the implications they bring with them (which are a lot as they impact almost every stage of the life of a connection) but also to not limit what implementations can do with them by making them very rigid. But I do agree that the text in API feels somewhat incomplete. Maybe one solution would be to explain what a framer should at least be able to do in the interface draft without specifying the actual API that is in implementation. I think we all agree at when a framer should be able to interact with the connection just not how that interaction is supposed to look like (as that is highly implementation specific).\nI disagree that we should move the text back into the API document, since the group was very clear before that the details should be moved to implementation. The boundary we defined was to leave in the API: (a) the definition of what a framer is and why it's used and (b) the interface an application uses to add a framer to the connection. The API text should be complete with regards to how a user of a framer performs actions. What specific text would we want to add here?\nFirst as I say that's the only piece of additional interface we define in the implementation draft which already is a sign that it's maybe not ideal. But for the API draft the part of how a framer interact with the transport is really unclear and looking at the part of interface, that now defined in the implementation doc, makes it really clear. I disagree that this part of the interface can or should be left to the implementation. 
The fact that you have to call the framer before you start any transmission and the events a framer should be aware of are generic and nothing implementation specific.\nIn the implementation draft we say: \"A Message Framer is primarily defined by the set of code that handles events for a framer implementation\" My assumption is that a custom framer implementation could be part of the application logic and therefore we need to define this interface as well. However looking at the implementation draft I guess I now confused myself about how this actually works. Why is the sending logic asynchronous? Isn't this something that happens iteratively when the application calls send()? How would that actually work? I thought the application could just call URL() and then the connection could call URL() and that would simply return a modified message... can you give an example for sending and receiving based on the interfaces currently defined in the implementation draft? Also please note that while I think the interface definition belongs in the API draft, there are comments about copying and cursor handling which should/could probably stay in the implementation draft.\nInterim: Add to -interface: Explain that framer(s) sit in between the rest of the protocol stack and the application Intercepts calls to start/stop and send/receive And just leave as is, and have another framers document in the future once we get more details.\nI think it'll be helpful to be a bit clearer on the timing here, see comment. Thanks a lot for doing this - to me, this looks really good. It makes things much clearer for readers of this document. I think this should clarify it, thanks!"} {"_id":"q-en-api-drafts-cea08b6f1a3546ccf55f6b00c9a80cfc187a9bddabe2dc83d596e379b0b73fb6","text":"sec 4.2.2 \"implementations MUST provide a method for the application to determine the entire set of possible values for each property.\" Is this meant to be a requirement for a machine-readable API for learning the enum, or merely a requirement for documentation? i.e. is it satisfied if the values are in the man page or comments of the C header file?\nAs a by-note, isn't this a great place to use \"REQUIRED\" and say whether it is required to specify or required to support :-).\nProposal: Just remove \"however, implementations MUST provide a method for the application to determine the entire set of possible values for each property.\" This is too implementation- and language-specific."} {"_id":"q-en-api-drafts-4c3df4858a723f8d628405a31d6a26a110b5443fba930934806f4a8873f86345","text":"There are many ways to do this, but this one mimics the \"private browsing\" paradigm, as I understand it.\nCan you say more? I read it as specifically about flushing state, not Connection Group vs. Connection issues generally. This is not literally \"flushing\" the state but I've updated taps-interface to reflect this design while, I think, preserving the spirit of the original text.\nConnection Groups used to be about state and context sharing as well as entanglement. Now it's about entanglement. We need to rename Connection Groups (\"connection contexts\"?) in the architecture to replace the existing \"group\" concept, and to retain \"connection group\" in the API for entanglement.\nSeems like this could be 360 degrees of rotation in our thinking? (which could be OK, NEAT was written to specify both contexts and flow groups). 
Let's try some definitions here and see if we converge...\nI think brings this back in synch again so no need to rename, possibly the last part of this section in architecture should be removed though as we do not have fine-grained control\nSection 9.1 of taps-impl loosely refers to the need for an interface to \"flush protocol state\" to preserve anonymity, but such a capability is not in taps-interface.\nI don't like this text: \"Applications must have a way to flush protocol cache state if desired. This may be necessary, for example, if application-layer identifiers rotate and clients wish to avoid linkability via trackable TLS tickets or TFO cookies.\" Those of you who know me would immediately see a \"MUST\"...\"if desired.\" and a \"may be necessary\". My suggestion is that this digs too deep, and we should remove the text. I could also live with something more neutral though: Note: There are cases where applications need to flush protocol cache state, for example, if application-layer identifiers rotate and a client wishes to avoid linkability of trackable TLS tickets or TFO cookies.\nI'm fine to have the more neutral sentence proposed by Gorry. I don't think we should specify an interface for it but I think it's good to mention.\nI agree that the best approach is to make it more neutral. The other option would be to say that this interface is implementation specific. For sure, we do not want it in the API I think.\nI have no opinion on Gorry's comment, but I'm a bit mystified by the \"shouldn't be in the API doc\" argument. Perhaps I'm not understanding the purpose of taps-interface? I thought it was a qualitative description of the controls offered to the application, which this clearly would be!\nI would define taps as an interface to control network connections. So flushing state is a good thing in some scenarios for privacy but not really needed to handle network connections.\nSee also -- this surfaced that we have drifted on connection group versus context, which we need to fix in arch.\nMartin to write a PR against API to propose a context API that would provide the intent of this functionality. Alternative is to NOP this in the API doc (\"connection contexts are too implementation specific to expose in a general way\" or similar)\nA second alternative is a one-bit fix: a selection property that requires a given connection to be in a separate connection context (which implies a separate connection group) from other connections created in the preconnection.\nA very nice approach IMO, thanks for doing this. I like this approach for the simplicity, but it still solves only half of the problem we have with connection groups vs. connection contexts. I am fine with merging this, but IMHO it does not fully Thanks NAME looks good to me now."} {"_id":"q-en-api-drafts-8f9c56f87f5a245d0f1324150698bdd92ddbabf65cfdc43b9d4bad77262dc6cb","text":"Note, the change to the API draft here is a purely editorial detail that I fixed here in passing: the multipath selection property was renamed and this fixes the text referring to it.\n\"what we discussed\" - you mean in , where I say that I stumbled over a sentence? This PR does update that sentence (simply removing \"with migration support\" from it because any multipath protocol can do that).\nno, i meant what we discussed at the interim about using the handover, interactive, aggregate terminology. 
but i saw that this was not so easily done and we are all happy with your solution, so all good :)\nBefore distinguishing between protocols that do \"multipath\" and \"migrating between paths\", it might help to define these terms. This is tripping us up in QUIC, and IIRC the original \"Multipath TCP\" was pretty much the migration case. Something like, \"In this section we distinguish between 'multipath' protocols and protocols that allow migration between paths. The first means ...\"\nThis is in section 7.2 of taps-impl.\nWe define multipath (by policy in section 6.1.7 handover, interactive, aggregate) in the API doc. Section 4.2 in Implementation should use these definitions.\nI wanted to fix this now, but stumbled over the following sentence in the implementation draft: This talks about the protocol below the transport services system; do multipath-supporting protocols with no migration support exist? I struggle to imagine a scheduler that wouldn't make this happen... e.g. lowest latency scheduling, but with the exception: if a path becomes completely unavailable, keep going? :-) Looking at the three policies in section 6.1.7 of our API, clearly, migration should work with all of them (and \"Handover\" is limited in that it doesn't do anything else, but it's gotta be a subset of the others anyway, I'd say). Am I missing something? Just checking...\nI think the example of no migration support would be a multipath protocol that is not able to add new paths, so SCTP without support for the ADD_ADDR extension perhaps? Not sure this is a case we need to worry about though...\nAh, a multipath protocol that doesn't do multipath :-)\nWell, it can do multipath with the paths you give it from the beginning :) Possibly something else was intended with the text from the beginning, but this is what I could think of...\nLooks good to me! Thanks for fixing the nits!"} {"_id":"q-en-api-drafts-0046b69a6cc7f929e402b071fcd5400eb57c489951192ea129283b830f93f4db","text":"Section 7.2.6. says \" for example, if a transport stack that only provides a single in-order reliable stream is selected, prioritization information can only be ignored.\" I don't think this is entirely true as you can still decide in which order you schedule independent messages on one stream.\nI don't get this. The sentence says \"in-order\". Scheduling them from buffer A into buffer B in accordance with priorities won't help when everything leaves buffer B in a fixed sequence.\nHow about a less complex sentence: ...\"for example, if the transport system selects a stack that only provides a single in-order reliable stream, the prioritization information is ignored.\"\nmessages on the taps \"layer\" are independent. So the order in which an application calls send on these messages doesn't have to be the same order in which the messages are sent on an \"in-order stream\"\nOh, now I see what you mean. Hmmm... looking at the text, it seems that we could just delete this sentence and the problem is solved?\nI think what was intended with the example was to show that the set priorities may not have any effect. Perhaps we should instead just add \"and the set priorities may not have any effect\" if we want to be explicit? Removing it also works though.\nI think just add that!\nI'm for removing it (I think I wrote this sentence btw :-) ). Giving this example just throws off someone who thinks about this case like Mirja did here - even if we say \"may not apply\", the reader may (correctly) think \"but why, in this case they CAN well apply!\". 
So I don't find that helpful. For context, the full current statement is: and I believe that we say everything we need to say if we just change this into:\nI like simple examples, this one wasn't great so this is OK."} {"_id":"q-en-api-drafts-1e0a36d58a00729c55f86464a3fd92c1f65eca23f03854096d719c7bd5210c05","text":"I am sure this has been discussed, but reading now it seems strange to me to use \"Prefer\" as the default for a Selection Property that affects how the application interacts with the API. Using \"Prefer\" means that I do not know what service I will get when using the default settings? In this case I do not know if I get messages or not if I use the defaults.\nThis should be Ignore as a default."} {"_id":"q-en-api-drafts-b1d0fba076cf81ebb23e7332e615c83fe0d42f3f47832528831b54df52b9e009","text":"Interface, Section 2, \"A unified interface to datagram and connection-oriented transports, allowing use of a common API for connection-establishment and closing\". Should this be \"…to datagram and stream-oriented transports\"? At what point do we explain that and are also used for connectionless protocols? This change would finesse the question here, leaving it for Section 3, or we can restructure the draft to clarify earlier.\nYes to (1): Datagram protocols can be connection-oriented or not; however the point here seems to be either: /interface to datagram and stream-oriented transports/ or /interface to both connection-less and connection-oriented transports/ ... I'd favour the first, it's clear enough and doesn't use terms that we later re-use. Also to (2): I think we could insert a specific sentence after: \" A Connection represents an instance of a transport Protocol Stack on which data can be sent to and/or received from a Remote Endpoint (i.e., depending on the kind of transport, connections can be bi-directional or unidirectional). \" That explicitly says something like: \"These connections are presented consistently to the application, irrespective of whether the underlying transport is connection-less or connection-oriented.\""} {"_id":"q-en-api-drafts-a01d78373105caae9b3eec84cbe88ae8783860f0ad78bef17629987d1caf47c6","text":"The text below is a bit confusing as there are no details about how to send replies and map responses to their requests in Section 7.1.1. This can be easily fixed by clarifying what Section 7.1.1 provides details on. But I was also wondering if there should be some more details on how to send replies and map responses to their requests somewhere?\nNote: this will be in §9.3.2.1. after merge.\nIt seems the concept of mapping requests/replies has been fully eliminated. So just removing this part of the example.\nThanks Philipp, LGTM."} {"_id":"q-en-api-drafts-d8162328cad464bd03b7a68b32431688cb8b8d6f8d07b8d524ea06e23d792237","text":"This adds SO_reuse commentary on the multicast UDP service. It seeks to\nThe implementation document should explain IGMP joining and SO_REUSEADDR for multicast\nText blob for thought or incorporation: In some uses, it is required to open multiple connections for the same address(es). This is used for some multicast applications. For example, one connection might be opened to listen to a multicast group, and later a separate one to send to the same group; two applications might independently listen to the same address(es) to send signals to and/or receive signals from a common multicast control bus. 
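(For illustration of the re-use being described here: a short sketch at the BSD socket level, in Python, with made-up group/port values. This shows the SO_REUSEADDR plus group-join pattern the text is aiming at, not the TAPS interface itself.)

~~~python
# Two independent sockets bound to the same multicast address/port.
# Group/port values are examples only.
import socket
import struct

GROUP, PORT = "239.1.2.3", 5000

def open_multicast_listener():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Explicitly allow re-use of the same address/port by other sockets.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", PORT))
    # Join the multicast group (this is what triggers the IGMP join).
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                       socket.inet_aton("0.0.0.0"))
    s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return s

listener = open_multicast_listener()
second = open_multicast_listener()  # second bind works due to SO_REUSEADDR
~~~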
In these cases, the interface needs to explicitly enable re-use of the same set of addresses (equivalent to setting SO_REUSEADDR in the socket API)."} {"_id":"q-en-api-drafts-e7112c5b7a85ea9c890b19ddc7144e39b5730c1055307ca8743db2354af6cc43","text":", , , , , .\nIn 8.1.1 the second sentence is convoluted. Is there a way to restate it without as many nots, nons, and different zeros?\nDo you really mean \"host\" at \"(endpoints) SHOULD represent the same host\" in section 6, or do you mean something more like service, given that an endpoint may be configured with a DNS name?\nactually we mean service, good catch.\nThe \"reformatted where necessary\" part of the requirement in the last paragraph of 4.1 makes the MUST NOT almost impossible to conform to. I think I understand what the paragraph is trying to say, but the current formulation leaves the reader guessing. I think you are trying to say \"Avoid using any of the terms listed as keywords in the protocol numbers registry as any part of a vendor or implementation-specific property name\".\nThe first sentence-paragraph of section 4 is complex. Please break it apart.\nhas a comment that's too long for the text version\nIn the text version of the document, the last paragraph before 3.1.3 has some markup (~~~) that has leaked through.\nAs you can tell, I don't think the documents do what the first sentence of the API summary claims. This document would not lose anything if you deleted the sentence.\nAgreed, we can lose this sentence.\nWhy? Removing this makes no sense to me.\nFor one, it's redundant - it only repeats what we already say earlier twice.\nAh, I see - so just this one instance..... then go ahead, we don't need to say something twice."} {"_id":"q-en-api-drafts-cf6cb0a84bbdd56992c2430a010900c879e11665eaf7d9f68464ae53bc11b76e","text":"In (Arch) section 3.3, you are talking about racing before you've described what it is. At the point where you note that racing should only happen over stacks with identical security properties, note when doing so might introduce privacy issues and what can be done about them.\nDescribe what you mean by caching and why you might want to isolate sessions earlier."} {"_id":"q-en-api-drafts-f8e7fe4902326290afbf688f406fea9824a07e6c9ae47c8e110ae6e064b3ee24","text":"Fix to Issue\nWe don’t mention layers in the text before: “A violation of any of the layers should cause” Is this better as: ” A violation that occurs at any of the policer layers should cause”\nFixed"} {"_id":"q-en-autonomic-control-plane-1dd4a3ac7926bcbc26801cd5171ebf8aee21ffb6d7923fe06b890b70b226ac3c","text":"During IETF 106 I solicited some feedback on the ACP IPsec/IKEv2 usage from Valery Smyslov; I've tried to digest that feedback into new text here. Some aspects are still subject to change. I also made a minor tweak to the DTLS section to bring it into parity with the IPsec one.\nMerging first, then fixing."} {"_id":"q-en-basic-yang-module-afeca7b6d0c19fcb03973e0be1f014be02b5d89dc98fd854788d4a480d716cd1","text":"Provides more context around the various elements of the model\nThanks a lot for your stab at Security Considerations! Also the I-D was in dire need of some expositional English text. Reads straightforward and comprehensive."} {"_id":"q-en-brski-cloud-e7d7cc47fa6d0c63d96bba8d032655d3bddb2ec0e9721d55af7cf5900f08d844","text":"Not quite. It specifies secure bootstrapping of the individual nodes. 
It's RFC 8994 that bootstraps the ACP."} {"_id":"q-en-brski-cloud-32c44735837e01e2d57ef43bd1453498973e8cc6979c52ae65b8784e6e6e81cc","text":"I find the \"??\" in the figure confusing. The Cloud Registrar and the MASA could just be shown as adjacent boxes; the explanation in the text is fine. That looks ugly as a single pseudo-sentence, and I'm not sure what it means. A more complete explanation would be good."} {"_id":"q-en-capport-wg-architecture-09dbee4ce4db45bfb81fb48bbb4c2ba3508b76bf28590d20d15686e5112c61b1","text":"This replaces all mentions of capport (other than document slugs) with \"captive portal\". Note: this change includes the title of the document.\nJoel Halpern writes, We should do this.\nOr erase all mentions of 'capport'"} {"_id":"q-en-capport-wg-architecture-b19787febcbfa6f33ad13f47904423b8b507ead48d27f44fddb5b6869b3b6dd2","text":"Multiple reviewers objected to the way in which we captured interactions with headless/UI-less devices. Clarify the document's intent. Do this by removing language that indicated that we would eventually get around to adding the requirements, and specify explicitly that we are not handling devices without UIs.\nEric points out in his review: He suggests as a solution: I propose we do this. The first mention is in section one. We can say something along the lines of \"A future document could provide a solution for devices without user interfaces. This document focuses on devices with user interfaces.\"\nOn page 4: Joel Halpern writes,\nWe might want to avoid relying on the captivity concept so heavily, but this is a good change nonetheless."} {"_id":"q-en-capport-wg-architecture-2619cf4ae61f6d56df0629a0ce6c445dac052716a6d9c45ac0d31e160efeddc1","text":"We only use UE in one section. Everywhere else we use User Equipment. Expand UE in that section so we are consistent with our use of it.\nIt was pointed out that we sometimes use UE rather than user equipment (e.g. section 3.5.). Let's be consistent, perhaps shortening all references to UE other than where it is defined."} {"_id":"q-en-capport-wg-architecture-3419ea3f35d00324d4e61bbab5188868997355dc0135587d9f55550b89a043ac","text":"The idea of a session is important, both because it is used in the API document, and because it helps to tie everything together. Define it in the terminology.\nThis is a first stab at defining the session. I suspect we may need to iterate on it once or twice. Let me know if you think it deserves its own section on top of the terminology.\nThanks for doing this! Here’s a suggested alternate way to explain this. Feel free to modify as desired. Captive Portal Session: Also referred to simply as the “session”, a Captive Portal Session is the association for a particular User Equipment that starts when it interacts with the Captive Portal and gains open access to the network, and ends when the User Equipment moves back into the original captive state. The Captive Network maintains the state of each active Session, and can limit Sessions based on a length of time or a number of bytes used. The Session is associated with a particular User Equipment using the User Equipment's identifier (see ).\nYour phrasing is better. It manages to work in the session limits while keeping it about the same length. I've taken it almost verbatim. NAME Do you have any input on this topic?\nFrom Ben's review of the API document, we should define \"session\" formally in the architecture. The architecture doc mentions it in passing once already. 
The API doc uses the term session much more."} {"_id":"q-en-capport-wg-architecture-0cdd7b84095e0edfb504d88e7826c05a6f7596a9d0e0a7d0ad6b0e9387ce0986","text":"The language of this section was hard for some of the reviewers to understand. Rephrase it to be more explicit and clear.\nYes. We should probably rephrase this. E.g. \"The user makes an informed choice in deciding to visit and trust the Captive Portal URI. Since the network provides the Captive Portal URI to the user equipment, the network SHOULD do so securely so that the user's trust in the network can extend to their trust of the Captive Portal URI. E.g., the DHCPv6 AUTH option can sign this information.\""} {"_id":"q-en-capport-wg-architecture-e1b46b2556797d6094a3adf10a4deb44d5bc58db16c209ab2cf171cfa536e904","text":"Attempting to fix issue . I attempted to reword the principles, adding more prose for the rationale. Also upgraded some SHOULD NOT to MUST NOT because it seems some people thought the SHOULD NOT says the existing approach is OK. These are just some ideas. I won't be offended if rejected.\nI believe I addressed all comments. You'll probably want to review the complete diff again.\nBenjamin writes in his review: Michael replied with this suggested text: This seems reasonable, as I mentioned in the reply. We should go with it, unless anyone has an objection. Also, attribute it to Michael Richardson.\nS. Moonesamy writes\nOne approach I suggested in my reply to the feedback:\nThis is good. Thanks for sorting through this; it's important that this is right, and you have cleared it up a lot. Much clearer. Thanks!"} {"_id":"q-en-capport-wg-architecture-e1f867db997f2f690b499bdcd2bb48c17fbba5644e434bfc19bc457eef36028c","text":"One of the reviewers asked for more information about the attacker in section 7.3. This does so, describing what the attacker could achieve by compromising the security.\nBenjamin writes, Without knowing the details of the particular solution, it's a bit hard to say for sure, but roughly I'd say it's someone who wants to interact with the API using the identity of the user. E.g. if we're using an 'unguessable' URI, an attacker snooping on the communication with the API could determine the URI, and use it."} {"_id":"q-en-capport-wg-architecture-3e1bcbd58280b13238d25caf661a75e494ba7cc1da5a9af17929ccaa56bdfc5a","text":"The working group has decided that we do not need to tackle specifying the signaling protocol. Rather than leaving the document with it partially specified, remove most of the text related to it. Keep some basic examples of constraints. This also removes the text which claimed that the document would specify requirements for the protocol. It does not. Made a few related editorial changes.\nAs suggested by NAME : Do a pass through the document with this text in mind and make sure it makes sense. Clean up anything that may have depended on removed text.\nThanks for putting this together. This seems sufficient, and brief."} {"_id":"q-en-capport-wg-architecture-dbd2645d95da5e111c80724fa6af521c0bec42f83788cf07316ebfbca2bb0d72","text":"We want to make TLS a requirement, not a suggestion, so I've changed the wording around TLS to reflect this. 
I also added a section to the security section discussing the motivation behind the requirement.\nCan you forward to the list please?"} {"_id":"q-en-capport-wg-architecture-29971fb1c60487cf8ccf9e07d5b8f260c1e1a0e2487206a250d6231da556c9db","text":"We need to discuss what information will be used to identify the UE from the perspective of the various components in the system. This updates the draft to discuss this concept, including desired properties of the identifier and some examples. This aims to address issue 5 (URL)"} {"_id":"q-en-coap-tcp-tls-594c1616b2abacd841b70c457dfb2f11b97fbc546a125b82e25ff5ca90052894","text":"URL CoAP over WebSockets The following now seems unrelated: Another possible configuration is to set up a CoAP forward proxy at the WebSocket endpoint. Depending on what transports are available Figure 12: CoAP Client (UDP client) accesses sleepy CoAP Server (WebSocket client) via a CoAP proxy (UDP server/WebSocket server) s/sleepy// (There is no mention of sleepiness here.) CoAP over WebSockets is intentionally very similar to CoAP over UDP. Therefore, instead of presenting CoAP over WebSockets as a new protocol, this document specifies it as a series of deltas from [RFC7252]. Probably best to delete that paragraph -- it is true not only for WS, but also for TCP. So why only here? 3.1. Opening Handshake Hmm, the title is confusing. WS also uses the same opening handshake as 2.3. The parts that relate to both TCP/TLS and WS/S need to be factored out. 3.2. Message Format The message format shown in Figure 14 is the same as the CoAP over TCP message format (see Section 2.4) with one restriction. The Length (Len) field MUST be set to zero because the WebSockets frame already contains the length. s/restriction/change/. The CoAP over TCP message format eliminates the Version field defined in CoAP over UDP. If CoAP version negotiation is required in the future, CoAP over WebSockets can address the requirement by the definition of a new subprotocol identifier that is negotiated during the opening handshake. This paragraph is a bit weird. Maybe: As with CoAP over TCP, the message format for CoAP over Websockets eliminates the Version field defined in CoAP over UDP. If CoAP Empty messages (Code 0.00) MUST be ignored by the recipient (see also Section 4.4). Why are we saying this here and not in Section 2? Factor out. 3.3. Message Transmission Similar as with CoAP over TCP, Retransmission and deduplication of messages is provided by the WebSocket protocol. CoAP over WebSockets therefore does not make a distinction between Confirmable or Non-Confirmable messages, and does not provide Acknowledgement or Reset messages. 3.4. Connection Health There is no way to retransmit a request without creating a new one. Re-registering interest in a resource is permitted, but entirely unnecessary. (See the discussion we already had.) [Brian] - The reference is\nClosing the Connection Does this mean that only the client must cancel the requests? Or should the server do this also? Then a clarification may be useful.\nThe original - URL - was specific to Observe: ~ All updates to Observe are now captured in a single place - Appendix A. This specific case is in URL At this point, I think that Closing the Connection could simply be deleted because there are no specific changes required.\nLet's say a client has sent one or more requests to a server but hasn't received any response yet, and/or is observing a resource at a server. 
If the client or the server closes the connection or if the connection goes down unexpectedly, then the client will never receive the responses or any further notifications. The client has to reconnect and send new requests. Neither the client nor the server needs to send anything to cancel active requests before closing the connection; when a connection closes, all active requests are canceled automatically. The client should signal that to the code waiting for the responses, e.g., by throwing an exception. The server should abort processing the requests or at least discard the responses once the requests have been processed. The server also should remove the client from the lists of observers (at the latest when it tries to send the next notification and it notices that the connection is down).\nURL General - It's not clear \"how\" mandatory the use of the CSM is. Clause 2.3 on TCP/TLS indicates that the CSM message must be sent at the start of the connection. Clause 3.1 on Websockets makes no mention of CSM messages. Clause 4.3 indicates that the CSM MUST be sent at the start of the connection also and makes no distinction between TCP/TLS and Websockets. Clause 2.3 also indicates that the connection must be aborted if the CSM is missing or invalid as the first message on the connection. Given the discussion about address changes, does more information need to be provided about what 'start of connection' means?\nIn the , the intent of this statement is unclear: There is no way to retransmit a request without creating a new one. Re-registering interest in a resource is permitted, but entirely unnecessary.\nResponse from Klaus Hartke: In general, CoAP clients are not allowed to use a token while it is still in use. RFC 7641 defines an exception: a client is allowed to confirm its interest in receiving notifications for a resource by sending a request identical to the original registration request, including the token that is already in use. The above paragraph just says that this is also the case for CoAP over TCP/TLS/WebSockets, but points out that sending a Ping (which checks the health of the connection and hence of all active observations) is better than confirming the interest for all active observations individually. Response from Carsten: Yes, the text is much easier to understand in that context — in the old draft, it was clear that this talks about reregistering interest. So in the new text, we probably need to prefix a sentence like For use with UDP, RFC 7641 discusses ways to re-register interest in a resource to deal with the case that the server providing this resource might have lost the information about this interest. In CoAP over TCP, the framework of the current connection provides fate-sharing between the health of the connection and the health of the expression of interest.\nAfter further review, I think that this note belongs in Appendix A with the other updates/guidance for Observe on reliable transports. Connection Health should limit its discussion to WebSocket specific changes - in this case, the use of the CoAP Ping Signaling message instead of WebSocket Ping/Pong."} {"_id":"q-en-coap-tcp-tls-3d7794cd646cd3fb57d222f9b1e3287b332f124116886281b812b5346736b242","text":"URL - WGLC review by Esko Dijk Page 20: Alternative-Address Option - the semantics of including multiple of these options is not defined. Can the receiver pick any of the addresses? Or does it need to try the first address first? 
And then keep trying all alternatives until one succeeds, or not?\nURL Unless Carsten or Hannes (authors of draft-bormann-core-coap-sig-02) have a preference, I would suggest that the addresses are unordered (pick any)."} {"_id":"q-en-coap-tcp-tls-efc28484cba428eea15f6bceaea633cbe03a57ff3caa7c33069ebb4f5aaf5351","text":"Do we need to include the RFC7252 note for emphasis?\nProbably not, as we are not changing anything about 5.10.1, are we? (But then 7.5 also points out a place where we maintain a property of 5.10.1.) ((But then, again, where would we put this?))\nWe need to identify the ABNF fragments as ABNF as in so they become easier to check in BAP.\nWe could add the note to Section 7, right after the new reference to \"fragment\":\nMerged in abnf annotations from\nis relevant to Section 7.\nGood point. So I assume applying the same fix (add »[ \"#\" fragment ]«) four times to the ABNF in 7.1 to 7.4 would fix that? (We probably want to line-break before path-abempty in all four cases.)\nIt would be helpful to have more context for how the fragment identifier is used in CoAP. I don't see much detail in RFC7252 besides: Note: Fragments ([RFC3986], Section 3.5) are not part of the request URI and thus will not be transmitted in a CoAP request.\nFragment identifiers are not used in a transfer protocol; their use is a local matter on the client side. The mistake we made in RFC 7252 is providing ABNF that doesn't show fragment identifiers at all in the URI syntax. While CoAP as a protocol does not care about them, they are intended to be allowed in CoAP URIs so they can indeed be used on the client side. The note mentions this, but the ABNF does not reflect that. Same for coap-tcp-tls. Does it matter? The meaning of fragment identifiers is defined by the media type. So far, there has been little use of fragment identifiers in the media types being used with CoAP. Now, SenML actually does go ahead and defines a meaning for them, and other media types in the future might as well. So it would be good if the ABNF in the CoAP standards did not give the impression fragment identifiers cannot be used at all with CoAP URIs.\nNote from Carsten: We may have to reopen that issue based on the about-face that the HTTP WG made here:\nFrom URL which references: Absolute URI Some protocol elements allow only the absolute form of a URI without a fragment identifier. For example, defining a base URI for later use by relative references calls for an absolute-URI syntax rule that does not allow a fragment. absolute-URI = scheme \":\" hier-part [ \"?\" query ] URI scheme specifications must define their own syntax so that all strings matching their scheme-specific syntax will also match the grammar. Scheme specifications will not define fragment identifier syntax or usage, regardless of its applicability to resources identifiable via that scheme, as fragment identification is orthogonal to scheme definition. However, scheme specifications are encouraged to include a wide range of examples, including examples that show use of the scheme's URIs with fragment identifiers when such usage is appropriate. and URL further states: Requirements for Permanent Scheme Definitions This section gives considerations for new schemes. Meeting these guidelines is REQUIRED for 'permanent' scheme registration. 'Permanent' status is appropriate for, but not limited to, use in standards. For URI schemes defined or normatively referenced by IETF Standards Track documents, 'permanent' registration status is REQUIRED. 
[RFC3986] defines the overall syntax for URIs as: URI = scheme \":\" hier-part [ \"?\" query ] [ \"#\" fragment ] A scheme definition cannot override the overall syntax for URIs. For example, this means that fragment identifiers cannot be reused outside the generic syntax restrictions and that fragment identifiers are not scheme specific. A scheme definition must specify the scheme name and the syntax of the scheme-specific part, which is clarified as follows: URI = scheme \":\" scheme-specific-part [ \"#\" fragment ] scheme-specific-part = hier-part [ \"?\" query ] On Jan 28, 2015, at 1:56 PM, Julian Reschke wrote: The answer is \"no\". URL I don't know how I got beaten into submission on that for RFC7230. ....Roy"} {"_id":"q-en-coap-tcp-tls-97f9cde8b397d8a6a6dd25eb153f8c6ae8a8e438cdc7ba01e117c519adfe7ade","text":"sig-02 did not define the length for options with uint values. Can you confirm whether 0-4 is correct? I note that there are variants in the RFC7252 options with uint values.\nGood catch. 0-4 definitely is right for Max-Message-Size. 0-4 is a bit over the top for Hold-Off, but arbitrarily limiting the time at, say, 65535 seconds (0-2) or 16777215 seconds (0-3) doesn't sound right either -- well, maybe half a year max (0-3) is about right. Option numbers are 65535 max, so Bad-CSM-Option needs to be 0-2."} {"_id":"q-en-coap-tcp-tls-fdd4bc0cd9c398b70eb7c6be576ae10b557f1cc7edc2c8247f9aaa211866b028","text":"This text does not address who might be allowed to wait for the other guy to make their statement (if both wait, there is a deadlock).\nThe intention is that neither side needs to wait. The only open issue is whether we want to allow the client to immediately start sending messages after it sends its CSM without waiting for the server CSM similar to the HTTP/2 case. I planned to discuss that on-list when I \"advertised\" this set of pull requests.\nFrom your message I understand that in HTTP/2 the client can send messages immediately without waiting for the server to respond either. To me it feels like we should be doing what has been done in the HTTP/2 case even though there is a risk that some problems occur when the client sends data in a way that the server will subsequently not handle well. If that is the case, the client will have to re-send the data, I guess. For the currently defined options I don't see problems.\nWhen I submitted the original pull request, I also posed a question on the but did not receive a response. One open question is whether we want to allow a client to immediately send messages after sending its CSM without waiting for the server CSM. This would be similar to HTTP/2: URL To avoid unnecessary latency, clients are permitted to send additional messages to the server immediately after sending the client connection preface, without waiting to receive the server connection preface. It is important to note, however, that the server connection preface SETTINGS frame might include parameters that necessarily alter how a client is expected to communicate with the server. Upon receiving the SETTINGS frame, the client is expected to honor any parameters established. In some configurations, it is possible for the server to transmit SETTINGS before the client sends additional frames, providing an opportunity to avoid this issue.\nI support not blocking the device until the server CSM. The server can choose to not process messages that are invalid (e.g. too large) and immediately follow up with the server CSM. 
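(For concreteness on the "option-less CSM" idea that comes up just below: the empty CSM in CoAP over TCP is a constant two bytes — a Len/TKL byte of 0x00 followed by the 7.01 CSM code 0xE1. A minimal sketch follows, assuming the default CoAP port; real deployments, and TLS use in particular, will differ.)

~~~python
# Sending the constant empty CSM as the first message on a
# CoAP-over-TCP connection. Host/port are placeholders.
import socket

EMPTY_CSM = bytes([0x00, 0xE1])  # Len=0/TKL=0, then Code 7.01 (CSM)

def open_coap_tcp(host, port=5683):
    s = socket.create_connection((host, port))
    s.sendall(EMPTY_CSM)  # CSM must be the first message sent
    return s
~~~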
From the perspective of a cloud and gateway software provider, the CoAP (using TCP/TLS) server will usually be in the cloud or gateway and therefore usually not power constrained. I may be missing other scenarios where a battery-powered device is acting as a CoAP/TCP server. Does anyone have plans for any of those scenarios?\nNAME added a related comment to (which is closed) From your message I understand that in HTTP/2 the client can send messages immediately without waiting for the server to respond either. To me it feels like we should be doing what has been done in the HTTP/2 case even though there is a risk that some problems occur when the client sends data in a way that the server will subsequently not handle well. If that is the case, the client will have to re-send the data, I guess. For the currently defined options I don't see problems.\nMichel Veillette points out that a peer that wants to use a new connection is uncertain whether a CSM message with capability indications will come or not, so it is hard to decide when to send the first request (either based on default capabilities or waiting for the CSM to come in). One way out of having to do arbitrary waits would be to require the exchange of a CSM message (even an option-less one) after connection setup. The burden is very low, as a minimal implementation would just need to send a constant message of 0x00 0xe1 at setup. This requirement could be stated without giving a sequence, effectively allowing both sides to send the initial CSM in parallel, or it could give the listener the opportunity to not send its CSM until the connection opener has sent theirs, trying to avoid the need to answer right away to a connection setup -- this makes it too easy to scan for ports (OK, we also have a default port, and making scanning harder does not by itself provide any security, but it sure makes life a little harder for the attacker). (To avoid deadlock, the connection opener would not have the opportunity to wait before sending the CSM.)\n+1 ... especially for Maximum-Message-Size. HTTP/2 has a similar flow for SETTINGS during connection including potentially empty settings: URL URL\nNo objections. I will create a pull request for review by the WG."} {"_id":"q-en-core-problem-details-4e804c23fc93511543f758947efafd2daf1a130013d0cae4444a172115c3d7df","text":"Start collecting changes from AD review\nComment by NAME : /example value 4711 not actually registered like this:/ ~~~ I don't get the sentence above - if this is to highlight that this is an example and that the value 4711 is not actually registered, I think this is redundant (and you could also add that in the Figure caption rather than in the example's text).\nSee : The danger that people are just blindly copying an example is just too high. So we would prefer to keep this redundant comment in. We could improve the wording of the comment.\nComment by NAME : Please explicitly state that the example (or examples if you add one in ) uses CBOR diagnostic notation.\nFixed in .\nClosed by 7463ce1"} {"_id":"q-en-cose-spec-d5730063d99c8bc80a04ed2f6d8968bcca80eb57e53ecfb5ddd8256e35c1f309","text":"It makes sense that we do not talk about creation time of the content. However the concept of having a time in the header is also very reasonable for things like countersignatures where one wants to make a timestamp as there is no content that the time marker can be placed in. 
We therefore change the text to make the time field be the time that the cryptographic operation was performed at."} {"_id":"q-en-cose-spec-2b2c8a076bcee466f4a64122038ec7f806163087642f2861943432c6adf8c2c4","text":"The document is updated to use the CDDL grammar from the latest version (-05) Fixed a couple of small errors where my thinking and my text did not match."} {"_id":"q-en-cose-spec-454e0ad2563909c7e11f00ecb5856a08b470b70a4ed832a55842d7f5104bfd65","text":"Assign integer tags to the CCM modes that we are keeping. Remove the 192-bit key CCM modes from the table. Update example using CCM"} {"_id":"q-en-data-plane-drafts-a24ebf5660ba617c1b06ad1225da371c684f4c546e1fa79f2354dd6843d122ca","text":"Sorry, now it is uploaded to the right folder. Simply accept the pull request. Thx Bala'zs"} {"_id":"q-en-deprecation-header-c30733c920c1c9077a9cf21cb42f2c98767c94d9a840a69d75fe4a656a679a95","text":"Looks good to me.\n... are considered a 'bad smell'. Instead of a date vs. a fixed string, could we just say that a date in the past is equal to 'true'?\ni guess we could. and we already say that a date in the past means that the resource is deprecated. the motivation to allow a flag was to not force services to make up a date when they don't have one and simply want to flag something as deprecated. but we could also \"force\" them to make up a date by not allowing the \"true\" flag. i am fine either way and i do see the point that the syntax may cause some implementations to not parse the field correctly. maybe others can weigh in as well?\nFWIW, I see it like Expires -- if it's already stale, you just assign a date in the past.\ni could definitely live with such a design.\nOne use case was to align the header's usage with the API contract as defined using OpenAPI. Having boolean helps align this header with OpenAPI's flag which does not require a date. Forcing API developers to add a random date in the past in case it is not available due to various reasons would not be desirable for various other reasons (SLA?). Although, I wish that OpenAPI adds date for . >If the deprecation date is not known, the header field can carry the simple string \"true\", indicating that the resource context is deprecated, without indicating when that happened: On Jul 31, 2021, at 06:46, Mark Nottingham NAME wrote: FWIW, I see it like Expires -- if it's already stale, you just assign a date in the past.\nFor Expires, January 1 1970 (epoch 0) is widely used.\nNAME I wish OpenAPI supported deprecated date also. But when we do, it won't be with a property that can be boolean or date.\nOk NAME thanks for the clarification on the OpenAPI side of things. I agree. We can just use date. There are some implementations out there, I would not know if any of those use boolean but since we are not an RFC yet, such a change should not be unexpected.\nit seems like we can safely remove the alternative syntax. 
i have created a PR asking for review by NAME\nLGTM"} {"_id":"q-en-dnssec-chain-extension-3b7570121589a526bf5d1838602659d65bd8bf32f68881d603ee160538d28465","text":"Small tweak to text to make clearer that validation RRsets can be in any order."} {"_id":"q-en-draft-ietf-add-ddr-beb1d7fdfcb2dd388f1ea523f1e0bf704b40a1f093218e6a97413a7da4719ae3","text":"changed equivalence instances for designated and added the possibility of multiple entities so long as they are cooperating This is to address issue\nI am not sure we need to introduce \"Equivalence\" nor to develop who is operating the resolvers - especially as many entities may be involved in the operation of a resolver. I am not sure we need to introduce Equivalence. If so I would propose the following text: OLD: \"Equivalence\" in this context means that the resolvers are operated by the same entity; for example, the resolvers are accessible on the same IP address, or there is a certificate that claims ownership over both resolvers. NEW: \"Equivalence\" in this context means that Encrypted and Unencrypted resolvers are either accessible on the same IP address, or there is a certificate that claims ownership over both resolvers. If Equivalence is not introduced - which I prefer - I would propose the following text: NEW: In this context the discovery process ensures that Encrypted and Unencrypted resolvers are either accessible on the same IP address, or there is a certificate that claims ownership over both resolvers.\nI agree that we do not need to introduce \"equivalence\" in this document. It is a remnant of the DEER draft DDR is based on. Per WG discussion, we want to use the term \"designation\" to describe the relationship between the unencrypted resolver and the encrypted resolver to which the client is transferring. I'm editing accordingly. I have also added references to the possibility of multiple \"entities\" as that is definitely possible. (PR coming later tonight)\ngood."} {"_id":"q-en-draft-ietf-add-ddr-0725a524283b98d66db922170a236dc52c5e3f13d33276ed4a133f2a7541b6f6","text":"Right now, we only allow authenticated and matching-IP opportunistic use of the DDR record. While the record isn't safe to use automatically in a case where authentication fails, we should clarify that implementations MAY choose to use the information anyhow by policy, and possibly with user control.\nFor authentication to matter, it generally has to operate without fallback. Currently the text is not very clear on this point, but it sure seems like IP-authenticated mode is expected to fall back to cleartext if no upgrade is possible. I think we need to clarify the use case for an authenticated mode, or abandon the distinction between opportunistic and authenticated. In my view, authenticated mode only makes sense if (1) the resolver is identified by its IP address AND (2) the system knows a priori that this resolver supports encrypted DNS, so that it can refuse to fall back to cleartext if the upgrade information is stripped by a network attacker. This is a sufficiently narrow use case that I lean toward dropping it entirely and recommending Discovery Using Resolver Names (Section 5) or opportunistic instead.\nCurrently, this is what we say (which I now see a grammatical error in, but there it is): We don't say anything else because that would be out of WG scope. The point of the document is to help a stub resolver configured with a recursive resolver known only by IP address to bootstrap into encrypted DNS. 
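(A sketch of what that bootstrap step looks like in practice, using the dnspython library; the resolver IP is a placeholder, and the attribute names follow dnspython's SVCB rdata as I understand it — treat this as illustrative, not normative.)

~~~python
# DDR-style discovery: ask the configured Do53 resolver for the SVCB
# record of the special-use name, which designates encrypted resolvers.
# Requires dnspython 2.1+ (pip install dnspython).
import dns.resolver

res = dns.resolver.Resolver()
res.nameservers = ["203.0.113.53"]  # resolver learned from DHCP (example)

for rr in res.resolve("_dns.resolver.arpa", "SVCB"):
    # Each answer designates an encrypted resolver (e.g. a DoH/DoT target).
    print(rr.priority, rr.target, rr.params)
~~~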
The stub resolver's posture on whether to fall back to unencrypted DNS or simply refuse to use the resolver is its own business. Look at this through a common OS scenario: the OS stub resolver learns an IP address from DHCP, and it's a public IP address (my old ISP did this). It knows of no other resolvers because the user doesn't know what DNS is and that's the default configuration (use network recommendation). If I try to use DDR and I get no answer back, then I'm in the same position as I was before DDR: I know of no other resolver, so I either have to use this one or have no DNS resolutions. DDR did not introduce that conundrum; it offers clients a way out of it when DNR is not present and there is no plain-text DNS attack present. The authenticated DDR mode allows such a client to be confident in the authenticity of the bootstrap even though it started over plain text. I do not agree this scenario is sufficiently narrow to drop. Is your concern over the usefulness of the scenario, or over naming it \"authenticated\"? I am open to suggestions on the latter; I strongly disagree with the former.\nBoth. How is this information useful to the client? I have not heard of any proposal for any client to behave differently on the basis of whether its encrypted connection is authenticated, when the resolver is identified by IP address. As for the name, RFC 7435 is the reference. I find it a bit confusing, but here's some of the relevant text: The \"authenticated\" section does not seem to describe an \"authenticated\" protocol by this definition, because it does not provide a downgrade-resistant method to determine that authentication is expected to work.\nPlease suggest a more accurate name, then. I do agree behavior naming should be consistent with other documents. Your bar for useful is apparently different from mine. Before DDR, a client configured only with an IP address had only unencrypted DNS as an option. With DDR, the client can possibly encrypt that connection. A transient attacker would have to have been present for the initial SVCB discovery to prevent that; an attacker arriving after DDR has taken place will be kept out of the loop. In the worst case, a client is returned to square one. I do not agree that just because a mechanism can be prevented means the mechanism provides no value. It doesn't have to. In the common case, the client is forced by lack of knowledge to use unencrypted DNS. If it can from that default configuration divine encrypted DNS configuration, that's a net positive. Resolver selection is out of scope and has nothing to do with it.\nI've updated to address this question by reformulating the concept of authentication, and being much more explicit about the threat model. Yes, that's a good step, but it doesn't rely on authentication. This is a very helpful observation. I've incorporated it in .\nI believe this particular issue is largely overtaken by events. Particularly, I think we've added some clarity and escape hatches with , and protections with .\nSame comment for line 337/360 about referring to the resolver as Designated versus Encrypted (it's unmodified in this PR but based on your agreement with the other in-line comment it may need to be modified as well) I think we will need to add some text to the Security Considerations section to mention the security promise difference between following the MAY or not (for optional use of the unverifiable designation). 
However, we can close this PR without it and handle that in a new PR if that's preferable."} {"_id":"q-en-draft-ietf-add-split-horizon-authority-627f45ed47c6ddbae3f149d0d326a7a51e4adb0e7562996b0cc4b5ba7888c17b","text":"Includes: Text adjustments for grammar and clarity Replacing \"public\" resolver with \"external\" Changes to references and blockquotes Made diagrams narrower\nNAME NAME Please review\nChanges look good to me."} {"_id":"q-en-draft-ietf-add-svcb-dns-ac79bccb838b23737634d4dbdcd2596729064f28710ad964c19052725a94b131","text":"This should help with the case in DDR. See URL\nMuch better. Thanks for indulging me."} {"_id":"q-en-draft-ietf-dnsop-terminology-bis-76b3d80eb93bae7e9518d2f4de98caf12c95e105552e7497a87898de9a69accf","text":"We don't seem to have defined \"lame delegation\", which is a term that is used all the time. It's in RFC 1912. Should we add it?\nYes."} {"_id":"q-en-draft-ietf-dnssd-srp-baa9775aad553f782eb5a38c9d83bac3ba055a3432cc8121a1ed9488d712ac85","text":"Update Service Discovery Instruction text to address Esko Dijk's last call comments. This fixes a bogus reference, removes a confusing double negative, and splits the text hierarchically in a way that hopefully makes more sense."} {"_id":"q-en-draft-ietf-doh-dns-over-https-755230319acc535d998474a37b05fc990d18d71249a05b110b2cd13f655ef647","text":"This text does not read very well in its current form. Since it is two ideas, two sentences is best."} {"_id":"q-en-draft-ietf-doh-dns-over-https-057e4d999fe11e53975b8319ec271085dc19959abdf3c9f3d06fdbf5b65bc650","text":"added authenticity\nThe potential for misunderstanding here is great. If we consider authenticity to be stated as a goal, we would need to be more explicit about the properties that this provides. Specifically, this does not provide authenticity with respect to responses, because that is the purpose of DNSSEC.\nYes, that is true. Thinking about it, I think a web application developer (one of the targets) reading this draft might not fully understand the differences between authenticity at the HTTP layer and authenticity at the DNS layer. One of the stated features of Secure Contexts is authenticated channels that guarantee data integrity. If possible, I think a few more sentences elaborating on this topic would not hurt.\nI'm going to close this because we have no idea what kind of authentication a user would expect, whereas we know pretty well what kind of privacy they expect.\nI think there might be room for something different along these lines. DNS hijack from your configured DNS server is currently a major problem, and DoH does allow you to authenticate whether or not you are talking to your configured recursive (while not saying anything about e2e). I'm going to reopen to track."} {"_id":"q-en-draft-ietf-doh-dns-over-https-4a09a63dead37ee60ad6ebf9f7a43cd5c25a57b3d744d236492ab7682d58354a","text":"Hello, I am a non-native English speaker but I noticed the following sentence: \"Earlier versions of HTTP are capable of conveying the semantic requirements of DOH but may result in very poor performance for many uses cases.\" Shouldn't \"use cases\" be there at the end? Wiktionary ( URL ) suggests that plural of \"use case\" is \"use cases\", not \"uses cases\"."} {"_id":"q-en-draft-ietf-doh-dns-over-https-dada2751b2b4d329b70fbaefc2d4edea4737d9edc49358926e5a9d1ffcfec11e","text":"Fixes issue\nThe message/ tree requires that the sub-types contain other MIME messages, which DOH does not. 
Maybe use application/dns-message instead."} {"_id":"q-en-draft-ietf-doh-dns-over-https-2562d316148667b3e0135e65d6d1f8a48573d0c8b5be138e4f215d4482bf5ac4","text":"The \"using https:// URIs\" part is just odd. Easier to simply state that this uses HTTPS at this level. Also, if the concept of a DNS service evolves to encompass other types of exchange than simple request-response, then \"run DNS service\" won't be accurate.\nI added \"DOH\" to the title because it was now being removed from the Abstract."} {"_id":"q-en-draft-ietf-doh-dns-over-https-3ba1e6ed3ed092e6655003d08c7a9f46ea73b5eb188cb9e0466d2e6398262e26","text":"This is OK, but I like the changes in better because it brings the \"trust\" discussion further forwards. Preferences?\nI prefer this PR over - changing from trust to authorization is more on point\nI agree authorize is closer to the correct concept here but I'd like to see a definition of it (in a section of its own early in the document). But - I don't believe there is an existing concept in DNS or DNSSEC that this is re-using? Does 'authorize' here mean that: (a) the client will use all responses from that server for resolution, and (b) the client will cache all responses received from that server? Also, if the trusted resolver itself isn't doing DNSSEC validation then the client's cache can still be poisoned. The only 100% sure method to stop cache poisoning is for the caching entity to perform DNSSEC validation."} {"_id":"q-en-draft-ietf-doh-dns-over-https-95f7e3440953b4d2cc40199225483ea28a72c37b01c5b52fc58a38d9c50ec164","text":"NAME I'm ok with you merging this and closing 11 if you're ok with it.\nI think that it is fine - or more precisely, invaluable - to explain what the semantics of server push are. But you don't actually require HTTP/2 for this protocol to work. The only thing that requiring HTTP/2 gets you is tighter constraints on TLS usage. You can get those same constraints with some deployment recommendations. I would support strong recommendations that said HTTP/2 is useful for its multiplexing as well as similarly strong advice about following the advice in RFC 7525. Those are both good practice.\nfairly clear feedback from singapore to change this to an endorsement (SHOULD) with better explanatory text focusing on performance and scale.\nWhat are the drawbacks of HTTP/2? Less readily available libraries? more complex protocol? Potential difficulties in integrating with current fronting stacks that may not be HTTP/2 ready? On one hand, I feel like it may be easier to just enforce 1 stack for now rather than having to support multiple flavors of client/server protocols. A suboptimal HTTP/2 implementation may just be similar to HTTP/1 but at least we can try to move forward toward multiplexing. Making HTTP/2 a should may just leave us in limbo in HTTP/1 land.\nNAME the singapore meeting is available here URL .. go to the 12 minute mark to review the discussion (wrt your question about drawbacks).\nText is in URL\nLast PR accepted. I suspect there will be a bit more wordsmithing, but this should cover the main issue well.\nthanks, I was in the room, but my notes were sparse and some of the cons I remembered were more inconvenience to me than blockers. What I am not getting from the current spec, and which I believe NAME touched on in URL, is that it is unclear whether or not a minimum HTTP version must be supported. Based on URL , do we consider that by supporting HTTP/1.1 we get HTTP/1.0 for free and consider 0.9 not an option? 
Wouldn't failing to mandate a required version prevent the client/server from having a known version to use? Can we make this clearer maybe?\nThe decision was to rely only on HTTP semantics and not versions. So any HTTP implementation that is acceptable to its peer can be used."} {"_id":"q-en-draft-ietf-doh-dns-over-https-d459369d833135bb016aebd0ea7f2ecd61637901689d02831c1b42ef69f06d0b","text":"I think that we should specify behaviour for truncated responses with the TC bit more clearly. Right now the text is a little vague. There seem to be four options: mandate the generation of an HTTP error mandate the forwarding of the truncated response explicitly allow either forbid the use of truncated responses and recommend fallback to TCP if the DNS API server is just making DNS requests to another server I think that the draft currently vacillates in the direction of 3, but I tend to like 1 or 4. Choosing 4 wouldn't be consistent with the dnsop DNS-over-HTTP draft using DOH though.\nis the right answer, so the text should just be stronger about it. If DOH requires or , that means that the DNS server code will need to be heavily reworked to capture these from going out, which seems pointless. DOH clients can act like normal stubs: either they do or don't know about truncation and TC.\nI agree - I don't think doh needs to behave differently here than any other datagram-based client - i.e. it's not a transport error. NAME can you do a pr to clarify?\nWill do."} {"_id":"q-en-draft-ietf-doh-dns-over-https-a1dd61316298ea88fb6622f6e9b9d57ce6107fa3a465616b05516d1642b87af1","text":"Describe how GET (not POST) is used with server push. This will ensure that you answer the bigger question: what do you do with a server push from an arbitrary server? I assume that this won't be privileged in any way, and only the configured DNS API server will have its answers routed to the DNS stack, but that needs to be explicit.\nServer push in a request-response protocol like DNS would take a lot of designing. It isn't really needed in DNS because the server can always slap in additional answers in the Additional section of a response. I propose we close this by saying \"HTTP server push is not defined for this protocol, but might be defined in a future version\". If someone wants to define an EDNS0 or session-signal extension for it, that should be just fine.\nHTTP already has server push, so I would prefer something closer to \"this protocol does not alter the semantics of HTTP server push\".\nI think NAME is closer to what I'm thinking.. (though I don't really think that needs to be stated beyond other clarifications that this document just uses http in a particular manner, it does not define it.) additionally RR's don't quite address the same use case, by the way, as push can operate in more of a pub-sub fashion, if you like, whereas an additional RR is definitely a poll model.\nOf course this protocol does not alter the semantics of server push, but what does it mean for a DNS client? In the DNS protocol, the only thing even vaguely like server-initiated information is stuff in the Additional section of an answer, and even then, there are pretty strict rules about ignoring it if it is at all smelly.\nan oftentimes misunderstood part of push is that a server pushes both a response and the request that it is responding to. (like dns, http is always a request/response pair, so if the client doesn't provide the request the server needs to).
So that's what it means - it's an answer to a particular question and you can cache it or not depending on whether that's a request you would have asked/trusted the peer to handle. That's pretty much the same rule for dns and traditional http clients. A client that would rather not have things pushed at it has existing protocol mechanisms to disable that (SETTINGS) which doh doesn't need to specify. I think all of this gets at what NAME was originally commenting on - you need to make a decision on whether or not you trust the server for the query/response being pushed (i.e. is this your dns api server, otherwise don't accept) before processing it.. this is already true for anything pushed via http but a sentence making it explicit is fine."} {"_id":"q-en-draft-ietf-emu-eap-tls13-8a00221ffe39e503ac5f718a9edca091c96e5f8c3b630ee51de096e8dd129e10","text":"Proposal to change the derivation of the EMSK and MSK individually and directly from the TLS exporter. Defines new tags for this purpose.\nThanks!"} {"_id":"q-en-draft-ietf-emu-eap-tls13-b278c25d295a4bf56658e20d0f2b0dc56ca460d084fdbfd403d05722395d490a","text":"According to suggestion in issue\nAlan has suggested that more profiling / guidance regarding NewSessionTicket would be good. TLS 1.3 allows any number of NewSessionTicket messages to be sent in any of the server flights. All ways are secure. TLS 1.3 leaves how and when to the TLS implementation.\nThis is a new request for a technical change not related to the IESG DISCUSS. Based on earlier discussions, changes to the TLS layer are likely a bit controversial. John: I don't see what is special with EAP-TLS here. I also don't know what a good recommendation for the TLS layer would be or what can be enforced in TLS implementations. This would likely need interaction with TLS people.\nNote also that draft-ietf-tls-ticketrequests is in the RFC Editor's queue and goes into a bit more detail about when different numbers of tickets might be desired (as well as a mechanism to indicate the client's desires, though it's not entirely clear to me whether that will be very helpful for the EAP-TLS usage).\nBased on your earlier comment on draft-ietf-tls-ticketrequests, I already added a reference to ietf-tls-ticketrequests in version -14 \"A mechanism by which clients can specify the desired number of tickets needed for future connections is defined in [I-D.ietf-tls-ticketrequests].\"\nOops, shame on me for not checking. (Thanks for adding the reference the last time I mentioned it!)\nI think it would be fine to include a SHOULD send only one NewSessionTicket since EAP clients will typically only need one at a time.\nI am not comfortable adding such guidance without a thorough survey of libraries. As draft-ietf-tls-ticketrequests says: \"servers vend some (often hard-coded) number of tickets per connection. Some server implementations return a different default number of tickets for session resumption than for the initial connection that created the session. No static choice, whether fixed, or resumption-dependent is ideal for all situations.\" I know that OpenSSL allows controlling the number of tickets with URL, but I don't think we can be sure about the behavior of all libraries out there. Also some noteworthy text from draft-ietf-tls-ticketrequests: \"Especially when the client was initially authenticated with a client certificate, that session may need to be refreshed from time to time. Consequently, a server may periodically force a new connection even when the client presents a valid ticket.
When that happens, it is possible that any other tickets derived from the same original session are equally invalid. A client avoids a full handshake on subsequent connections if it replaces all stored tickets with new ones obtained from the just-performed full handshake. The number of tickets the server should vend for a new connection may therefore need to be larger than the number for routine resumption.\"\nHow about removing the normative SHOULD, something like: desired number of tickets needed for future connections is defined in [I-D.ietf-tls-ticketrequests].\nThis non-normative suggestion sounds good to me.\nJohn's suggested text looks good. By the way, I checked boringssl: URL It seems to issue 2 tickets by default (static const int kNumTickets = 2). As far as I can tell, the only way to limit it to 1 ticket is by changing the code manually and re-compiling the library.\nThis is resolved in PR\nMerged PR"} {"_id":"q-en-draft-ietf-emu-eap-tls13-e70c384dea70a96bbf96d25517486c46ac9762eecb7c2f7f6c92c84896bcdbe9","text":"URL\nI added \"604800 seconds\" as suggested by Oleg. Joe suggested \"Perhaps both of these should be left to the TLS spec.\" I would agree with this.... Thanks Joe, one comment regarding item is inline below. Joseph Salowey wrote: [Oleg] Got it. Then maybe it is worth specifying 604800 seconds in the section \"5.7. Resumption\" also, so it will be clear that the requirement in this section is as strict as the one in \"2.1.2. Ticket Establishment\"."} {"_id":"q-en-draft-ietf-jsonpath-base-42cd38e83865dc3b5f9c3fc353fa782067308c615c4057e1557c6a29ec444ff4","text":"Fixes URL\nFrom it was stated that the current ABNF doesn't require parentheses around the expression in a filter expression. The spec thus contains examples without them. Previous discussions had occurred around making them optional, but no decision came out of them that I can recall. The closest documented conversation I could find was by NAME and the following comments, which indicate that the parentheses are currently expected. To be clear, we should currently require , and disallow . The original blog post consistently uses them and never shows a filter expression selector without them.\nI don't have a strong opinion on this simplification, but one data point: The simplified grammar has been in the document since we added ABNF grammar, i.e., in draft-ietf-jsonpath-base-01 (2021-07-08). The discussion was started in , 2021-04-30. For me, the current ABNF is the status quo. If we need to have a discussion here, let's have it, but I'm not interested in the \"we haven't discussed this enough\" angle -- this has been out in the document for 9 months. So if there are any technical arguments for requiring the parentheses, please add them here.\nI am strongly voting for the simplified, backwards-compatible notation with optional parentheses.\nThe PR that put in the ABNF (which has never changed in substance with regard to requiring parentheses): URL There is even a comment from you (Greg) on that ABNF snippet in that PR conversation, so you apparently reviewed it. (Of course, you might have missed the simplification and there may be technical arguments against it -- let's hear them.)\nYes, looking back, I see point 4.1 in raised the topic, and the fact that it wasn't discussed at all (even though the original post said it should be discussed) could have been interpreted as acceptance and agreement. Also, that PR you reference is massive. It's easy to miss in the multitude of changes contained therein.
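Returning briefly to the EAP-TLS ticket-count discussion above: for concreteness, here is a minimal sketch (mine, not from the thread) of a server vending exactly one NewSessionTicket on an OpenSSL-backed stack. It assumes Python 3.8+, whose ssl.SSLContext.num_tickets attribute wraps OpenSSL's SSL_CTX_set_num_tickets; the certificate file names are illustrative, and other stacks (for example BoringSSL's hard-coded kNumTickets) expose no such knob.

```python
# Hedged sketch: limit TLS 1.3 session tickets to one, as discussed above.
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("server.pem", "server.key")  # illustrative file names
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

print(ctx.num_tickets)  # library default (2 for OpenSSL 1.1.1+)
ctx.num_tickets = 1     # an EAP-TLS-style peer typically resumes once
```

This is exactly the kind of per-library default variation the survey concern above is about: the knob exists in OpenSSL but not everywhere, which argues against a normative SHOULD.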
I suspect this was something that was snuck in rather than agreed upon. Agreement requires discussion, and I see none. Personally, when I see , the looks as though it's part of the expression. Having the parentheses, , better isolates the expression visually. This is not a problem of parsing, but one of the human element.\nFor the record, I was originally uncomfortable with the parentheses being optional, but I remember this being discussed at a meeting quite a while ago and I was persuaded at that time that the current approach is reasonable. Perhaps that discussion wasn't highlighted in the minutes. I think we should stay as we are.\nNAME I know you already commented to the contrary, but your original post has this to say: I think this is pretty indicative that parentheses should be included as part of the required syntax. Taking this stance also aligns with our preference to accommodate existing implementations. While we say that this is a backward-compatible change, it may not be. For instance, my parser won't recognize a filter selector without the parentheses. I expect many other existing implementations will be the same.\nAt today's meeting I took an action to write a comparison project test for a filter without parentheses. It turns out there is one already -- URL -- and the consensus among implementations is \"Not supported\".\nHardly indicative. Original Goessner JsonPath used JavaScript for evaluating filter expressions; I imagine that requiring the outer parentheses was just to provide a pragmatic way to delimit the JavaScript part, since original Goessner JsonPath was not responsible for parsing that part. That consideration no longer applies when the syntax of filter expressions is defined in the JSONPath specification; mandatory outer parentheses no longer serve any purpose. Be real :-) The spec in its current form is incompatible to a lesser or greater extent with all existing JSONPath implementations.\nI still vote for the lean syntax with optional parentheses in filter expressions. Old queries (having parentheses) can be handled by 'old' and 'new' implementations without problems (backward compatibility). We don't break old 'query code'. New queries (no parentheses) can be treated by 'new' implementations only. This is the same behavior compared with regular expressions in filters, not being understood by old implementations. Thanks to Daniel for also making that point. On 26.04.2022 at 19:26, Daniel Parker wrote:\nMy issue isn't that existing paths can be handled by new implementations (what we're calling \"backward compatibility\"), but rather the inverse: new paths won't be handled by existing implementations (\"forward compatibility\"). Users will complain that their paths, which don't have parentheses, don't work. Implementors will respond with, \"You need parentheses,\" and then users will be trained to use parentheses anyway. As it stands, most users are already trained to use parentheses because existing implementations require them.\nRight. For new features we can't expect forward compatibility anyway. The discussion here was whether it is worth reducing forward compatibility by admitting new syntax for existing, widely implemented features. And I grudgingly have to acknowledge that that is a different question. Actually, implementers will quickly fix that. Between competing implementations, there is a race to accept the largest amount of inputs (within reason, i.e., balancing it with the cost of implementing them).
But, yes, query creators that care about widest compatibility will be trained to use the parentheses. (For me, actually, having a feature that only new implementations can understand is a feature. So the expression will break on implementations that haven't been updated. Darwin. And a nice canary.) Yes, which is part of the reason why I still think we would better give up on forward compatibility here in favor of cleaning up the query language. But that is now a matter of taste, and I have to accept that there is a valid argument to the contrary.\nI just don't think it makes sense to go down this road. Users will understand that IETF JsonPath is just another take on the idea of JSONPath, an idea that took major forks from very early on. Any Stack Overflow question on JSONPath quickly establishes that the answers are generally implementation specific.\nI'm going to stop pushing this, but I would like to add this last comment. JSON Schema has had several forward-incompatible changes over the years as well that I've been fine with. But the difference between this and those is that JSON Schema is already published and versioned. It's easy to tell people, \"As of version X, this feature works differently.\" We're not technically versioned as yet because we don't really have a proper publication from which a full implementation can be built (because we're not done). It feels different to make this change for a 1.0. But I'll follow the team's consensus.\nHave you submitted an issue?\nNo. I would add that the fact that they're different doesn't mean that they're wrong, because languages like Python, JavaScript and C use different precedence orderings, particularly Python and JavaScript, see e.g. and , , and . My comment to NAME was only about them being different.\nNAME please do; I wasn't aware they are different. (Thanks for the pointers, very useful! -- the C one is a copy of the XPath one, though, and I can't resolve the Python one.)\nAll right, .\nI'm not sure what the big fuss about the precedences being different is. For the operations we have defined, all of these are largely equivalent. Some have variation in that comparisons are higher than equality, but I'm not even sure under what circumstances that'd be useful. Contrivingly, I could say as a shorthand for but that's stretching things, IMO. I can't see that such a thing would be so useful that they'd add it into the language. NAME do you know why that split in precedence is defined?\nNAME My own, personal view is that it's unrewarding to think about precedences, and uninteresting to spend any time at all on thinking about the implications of trying something different. That's why my personal preference is to blindly follow what somebody else has already done, and not attempt anything new. I'd like to assume that the authors of C and Python have thought about this; I don't want to. So, when I implemented this, I followed C, except where C didn't have , I followed Perl, and ended up with . If, like NAME, I had a strong preference that the comparison and equality operators belonged at the same level, I would have chosen another language to follow that did that, Python qualifies, and used that language as a model for all operators, except , which Python doesn't have, and gone with Perl again for that. What I wouldn't do is help myself to Python's comparison and equality operators, but C's logical not operator. As much as possible, I would want one model to base my implementation on. Daniel\nvery helpful indeed. IIRC ...
I took my first proposal of the precedence table from JavaScript (MDN), which should follow C. So I cannot explain now why has the same precedence as . Maybe we also should follow C ... and also include an associativity column. oh, and insert following Perl. thanks\nWhy is there a difference? I don't see how the precedence chain we've declared is any different from any of these languages. They're semantically identical! groups not comparison/equality (what benefit is there to separating these?) logical and logical or All of the listed languages have this ordering. What is being discussed here?\nThe current spec is silent about how is handled in many situations: What is the result of applying the JSONPath to the JSON document ? What is the result of applying the JSONPath to the JSON document ? The implies a node set with the single value . What is the result of applying the JSONPath to the JSON document ? The implies a node set with the single value . What is the result of applying the JSONPath to the JSON document ? Presumably a node set with the single value . What is the result of applying the JSONPath to the JSON document ? Presumably a node set with the values and . What is the result of applying the JSONPath to the JSON document ? What is the result of applying the JSONPath to the JSON document ? What is the result of applying the JSONPath to the JSON document ? The implies a node set with the single value .\nTo 1.: Similar is URL To 6.: URL To 7.: URL\nI have historically handled as a valid value (contrary to most of .NET) separate from \"undefined\" or \"missing.\" As a result, I agree with these points. Specifically for [7], this is an existence test. Since exists as a property in the object, that object would be returned. I believe we decided that existence testing does not consider falsiness."} {"_id":"q-en-draft-ietf-jsonpath-base-c15d2df8fdccc0f612d49f101367239cfaa6952ce8d89c159853ef5d2b55ae32","text":"The index selector (but not the index wildcard selector) selects no elements from non-arrays. See URL for background.\nThis seems implicit since JSON objects cannot be numerically accessed.\nYes, but making it explicit is clearer.\nThis looks okay, and it aligns the text with the previous paragraph."} {"_id":"q-en-draft-ietf-jsonpath-base-6a45c603fc80d10d2087f41fce2471d6e793966c7c46c2f8f9d21df701bf7bb9","text":"And: allow normal string comparison. use some phrases consistently (editorial change only). (Reviewers may like to view .) The options under consideration in issue were: . . . The NaB proposal, but with left-to-right evaluation and short-circuiting. The option of forcing a total order among all values, like Erlang does. The option of not selecting the current item if any type-mismatched comparisons occur anywhere in the expression. This is option 2 with string comparisons. Fixes URL\n(Do you mean beyond-BMP characters?) You do not need to encode beyond-BMP characters, but if you do, you indeed go through UTF-16 surrogate pairs. No, and that is true of other backslash-encoded characters, too. You can sort by Unicode Scalar Values. You can also sort by UTF-32 code units (obviously) and by UTF-8 code units (a.k.a. bytes) — UTF-8 was careful to preserve sorting order. Grüße, Carsten\nMerging. We can fine-tune in a follow-on PR.\nWhat is the expected behavior for expressions which are not semantically valid? For example, is only defined for numbers. So what if someone does Is this a parse error? Does it just evaluate to for all potential values?
This kind of thing will be important to define if/when we decide to support more complex expressions (e.g. with mathematical operators) or inline JSON values such as objects or arrays.\nHere you can see the current behaviour of the various implementations for a similar case: URL\nThanks NAME But for this issue, I'd like to explore what we want for the specification. Also, for some context, I'm updating my library to operate in a \"specification compliant\" mode by default. Optionally, users will be able to configure for extended support like math operations and inline JSON (as I mentioned above). But until that stuff is actually included in the spec, I'd like it turned off by default.\nThe spec says: and the grammar in the spec includes as syntactically valid. (We could tighten up the grammar to exclude cases where non-numeric literals appear in ordered comparisons, but I don't think there's a great benefit in doing so.) The spec also says: So, evaluates to \"false\".\nNAME You're quite right. I noticed that the other day and fixed it, but I quoted the previous spec before the fix was merged. Apologies. Please see the revised wording in the filter .\nI don't understand the numeric-only semantic constraint. Is there a reason behind it? JSON has very few value types, and lexical ordering of strings is well defined in Unicode. Standards like ISO 8601, commonly used for representing date-time values in JSON, give consideration to lexical ordering. So a useful filter for me could be to return JSON objects that have a date greater than (or equal to) a date taken from somewhere else in the same instance. Having the comparison evaluate to false could also be confusing. For , what should yield for ? Should it be true? If so, such an expression would return , which is a non-numeric result even though a numeric comparison was performed. Or should breaking the semantic rule about numerics cause the entire predicate to yield false, regardless?\nI'd like to be sure that we keep to the topic here and not get distracted by my choice of example. Yes, comparisons with strings are widely well-defined, but this issue is about comparisons that don't align with what's in the spec currently. Currently, as defined by the spec, strings are not comparable. Maybe my question could be better served with the boolean example mentioned by NAME What happens in this case?\nIt's hard to tell from . But consider something that is stated in the draft, \"Note that comparisons between structured values, even if the values are equal, produce a \"false\" comparison result.\" So, the draft considers comparisons of structured values to be well formed, but always evaluates the comparison to false. Consequently, given element-for-element equal arrays and , and both evaluate to , and so presumably and both evaluate to . This suggests to me that the draft's authors intend comparisons that may be regarded as \"invalid\" to evaluate to . Would the draft also have gives and gives ? Regardless, the draft's rules for comparing structured values appear to be incompatible with all existing JSONPath implementations; no existing implementation follows these rules, as evidenced in the Comparisons by and . None appear to evaluate both and to . These include both implementations in which the expression is well defined, and implementations in which the expression is a type error. For example, the Javascript implementations evaluate as , and as , because Javascript is comparing references to arrays (rather than values.)
Implementations in which the expression is a type error either report an error or return an empty result list. I don't think you'd find precedent for these rules in any comparable query language, whether JSONPath, JMESPath, JSONata, or XPath 3.1. There is prior experience for defining comparisons for all JSON values (see e.g. Java Jayway), just as there is prior experience for treating some comparisons as type errors and taking those terms out of the evaluation (see e.g. JMESPath.) But the draft's approach to comparisons is different. It does not seem to follow any prior experience.\nWhat the says about comes down to the semantics of the comparison . The Filter Selector says: Since is non-numeric, the comparison produces a \"false\" result. Therefore, always results in an empty nodelist.\nBut the spec doesn't say that the result is , and it's incorrect to assume this behavior. It's equally valid to assume that such a comparison is a syntax error. This is the crux of this issue.\nThe spec describes comparisons as boolean expressions which therefore have the value true or false. The difficulty of using the terms and in that context would be possible confusion with the JSON values and . The spec is currently inconsistent in describing the results of boolean expressions as true/false or \"true\"/\"false\", but the fix there is to be consistent. As for the syntax side, the spec's grammar implies that is syntactically valid.\nI feel it would be better to be explicit.\nAnd always returns a non-empty result list. As does . If the idea is that expressions involving unsupported constructions should return an empty result list, then none of the above expressions should return anything. There is prior experience for matching nothing in all cases, e.g. in JMESPath, both and , where the greater-than comparison isn't supported, match nothing. There is also prior experience in the JSONPath Comparisons where the presence of unsupported comparisons always results in an empty list. But the approach taken in the draft appears to be a minority of one. There appears to be no prior experience for it.\nI don't think we can be any more explicit than the following productions: What did you have in mind?\nI had a look and couldn't find these tests. Please could you provide a link or links.\nIt depends on what we want. If we say is syntactically valid, then we need to explicitly define that it (and NAME variants, etc.) evaluate to (boolean, not JSON) because they are nonsensical (semantically invalid). But if we say that these expressions are syntactically invalid, then we need to update the ABNF to reflect that by splitting and so that we can properly restrict the syntax.\nEven if we made syntactically invalid, the same expression could arise from, for example, where is . So, since the spec needs to be clear about the semantics of such expressions, I don't really see the benefit of ruling some of them out syntactically if that would make the grammar messier.\nThen we need to be explicitly clear that such cases result in a evaluation. We don't currently have that.\nSee, for example, , , and . Bash (URL), Elixir (jaxon), and Ruby (jsonpath) all return empty result lists () for $[?(NAME (NAME == true))] $[?((NAME (NAME == true))] and $[?(!(NAME (NAME == true))] On the other hand, I don't think you could find any cases where evaluated to while evaluated to . Not in JSONPath, nor in any of the other good query languages like , , or .
That is unique to the draft.\nOne solution would be to introduce \"not a boolean\" () analogous to \"not a number\" (). Semantically invalid boolean expressions, such as would yield . would obey the following laws, where is any boolean or : (The logical operator laws in the spec continue to hold, but only when none of , , or are .) The rule in the spec for filters would still apply: In other words, a semantically invalid boolean expression anywhere inside a filter's boolean expression causes the filter not to select the current node. (Edited based on feedback from NAME What do you make of this solution? /cc NAME NAME\nThat would mean an implementation can no longer short-circuit on b && c where b is false. I don't think there is much point in getting an \"exception\" semantics for unsupported comparisons. If we do want to do them, they should be actual exception semantics. But the loss of short-circuiting is more important to me than the weirdness of returning false (true) in certain cases. Grüße, Carsten\nYea, good point. I wonder if there is a consistent set of laws which allow both true and to be annihilators for OR and allow both false and to be annihilators for AND? That would at least preserve short-circuiting, but the overall effect would be more difficult to describe as wouldn't then trump everything else.\nI'm attracted by the \"consistency with JMESPath\" argument. I.e. the strings and can replace each other with no change in effect. I thought that's what the draft said. I'd be sympathetic with making the literal string non-well-formed, but as Glyn points out, that wouldn't make the need for clarity on the issue go away.\nRight. We might even point out that constructs like this could lead to warnings where the API provides a way to do this. Grüße, Carsten\nYes, the behaviour of the current draft for semantically invalid expressions seems to be the same as that of JMESPath after all. The says invalid comparisons (such as ordering operators applied to non-numbers) yield a value which is then treated equivalently to . I thought I'd verify this to be on the safe side. A with the expression evaluated against the JSON: gives the result: whereas gives the result: The evaluator with the expression applied to the same JSON gives the result: whereas the expression gives the result .\nEDIT: I think you may be right about this after all. Just not with your example, I believe you are testing with an implementation that does string comparisons differently from the spec (if you're using the online evaluator on the JMESPath site.) Yes Actually, the JMESPath implementation you're using looks like the Python version accessed through the . As the JMESPath author, James Saryerwinnie noted in , \"What's going on here is that the spec only defines comparisons as valid on numbers (URL). In 0.9.1, validation was added to enforce this requirement, so things that implicitly worked before now no longer work ... Given this is affecting multiple people I'll add back the support for strings now.\" James Saryerwinnie went on to suggest that he was going to update the spec as well to allow more general comparisons, but that didn't happen. James actually took a break from JMESPath around that time until quite recently. Yes. Here it's actually comparing \"char\" < \"char\", and returning , not . and gives However, I did the following test with JMESPath.Net with gives gives which I think is consistent with the current draft.\nLet's bring this back home. I think we've decided that is not a syntax error.
So now we are deciding how it evaluates. The leading proposal is that all of these evaluate to false: - More holistically, any expression which contains an invalid operation evaluates to false. This would imply that an invalid comparison itself does not result in a boolean directly. Rather (as suggested by NAME an implementation would need an internal placeholder (e.g. or ) that is persistent through all subsequent operations and ultimately is coerced to false so that the node is not selected. This would mean that the processing of each of the above expressions is as follows: - observes and evaluates to is coerced to false - observes and evaluates to observes the and evaluates to is coerced to false - observes and evaluates to observes the and evaluates to is coerced to false At best, the use of something like is guidance for an implementation. The specification need only declare that expressions which contain invalid operations (at all) evaluate to false. How a developer accomplishes that is up to them.\nAbort evaluation and report an error (the vast majority of existing JSONPath implementations) Abort evaluation and return an empty list (a few existing JSONPath implementations) Continue evaluation while excluding the element affected by the dynamic error (JMESPath specification) All of that sounds complicated and unnecessary. Personally, I don't think any of these constructions are necessary. It is enough to say that in these situations a dynamic error has been detected, and the implementation must react in this way (whatever is decided upon.) As you say \"How a developer accomplishes that is up to them.\"\nUmmmm… I was under the impression that was identical to and thus your third item would be true. This seems violently counter-intuitive. We're not going to treat as a syntax error, but we are going to cause its presence to have a nonlocal effect on the entire containing expression. OK, it's back to a careful read of the draft for me.\nIf you want a nonlocal effect, it is probably best to think in terms of a throw/catch model. (And, yes, I think one needs catch as an explicit mechanism, not just implicitly at the outside of a filter expression.)\nIf that were the case, then would be true, which isn't right. The expression is still unevaluateable. Similarly with the expression. True in general, but for this specific case, it's the combined with that gives it away, and this can be detected at parse time. Regardless, we need to also consider the case where is true.\nI think the crucial thing is that these expressions have well defined semantics so that we can test the behaviour and ensure interoperation. I really don't think we should be encouraging the use of these expressions. My suffers from the lack of short-cutting. If we preserved short-cutting with a similar approach, we'd have to give up commutativity of and and probably some of the other boolean algebra laws. Overall, I think such approaches would make the spec larger and consequently increase its cognitive load for users and implementers alike. Producing a non-local effect is another solution, but I think that too is overly complex for the benefits. If we made the non-local effect that the filter selected no nodes, I think that would be semantically equivalent to my proposal. Also, I don't think we should introduce a try/catch mechanism as the spec complexity would far outweigh the benefits. 
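To make the NaB placeholder walkthrough above concrete, here is a toy sketch (all names are mine; nothing here is normative text): a type-mismatched comparison produces a sentinel that absorbs through the logical operators and is coerced to "do not select" at the filter boundary. The last line shows the short-circuiting conflict raised above: a left-to-right evaluator that stops at true for || would select the node, while full NaB absorption would not.

```python
# Toy model of the "NaB" (not-a-boolean) proposal; NAB is a hypothetical
# sentinel, not part of any draft.
NAB = object()

def lt(a, b):
    # Ordered comparison defined only for numbers under this proposal.
    ok = all(isinstance(x, (int, float)) and not isinstance(x, bool)
             for x in (a, b))
    return a < b if ok else NAB

def logical_and(a, b):
    return NAB if NAB in (a, b) else (a and b)  # NAB trumps everything

def logical_or(a, b):
    return NAB if NAB in (a, b) else (a or b)

def selects(expr_result):
    return expr_result is True  # both False and NAB drop the node

# true || (1 < "x"): short-circuiting yields True (node selected),
# but NAB absorption yields NAB (node dropped) -- the clash Carsten notes.
print(selects(logical_or(True, lt(1, "x"))))  # False under NAB absorption
```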
The current spec: is quite compact in the way it handles the current issue preserves the boolean algebra laws has a precedent in the JMESPath spec. So I propose that we stick with the current spec.\nBut what's written in the spec doesn't resolve the vs. disparity. Both of these expressions should return an empty node list because both are semantically invalid. As proof of this, is logically equivalent to , which also returns the empty nodelist. If simply and immediately returns , then this equivalent no longer holds.\nI wish we could stop using the term \"semantically invalid\", because that doesn't say what the construct means. A lightweight variant of the catch/throw approach I would prefer would be to say that an expression fails. Constructs that have failing subexpressions also fail. A failing filter expression does not select the item for which the filter expression failed. Disadvantage: Either no short circuiting, or ambiguity with short circuiting. Without short circuiting, one cannot test whether a subsequent subexpression will fail and still get a deterministic result. (I'm not sure we have all we need to make such tests.)\nURL makes the current spec clearer: yields false and so yields true and yield false. Where this is a \"disparity\" or not depends on what semantics we are aiming at. \"should\" only if we adopt the NaB (or equivalent) or non-local effect semantics. (I agree with NAME that the term \"semantically invalid expression\" isn't very useful when we are trying to define the semantics of such expressions.) URL defines in terms of to preserve this very equivalence.\nI think these disadvantages should steer us away from catch/throw, or equivalent, approaches.\nI didn't follow any of this comment. I can't tell what you're saying the spec says vs what you think it should say. Certainly, you're not saying that should be evaluated to true, which would return ALL of the nodes, right? Because that's what it sounds like you're arguing for. Comparisons with booleans don't make sense. Ever. My proof above shows the absurdity of defining one expression to evaluate to false because it, by necessity, means that the converse evaluates to true when neither make sense at all. Call it what you want. I call it \"semantically invalid\" because that phrase accurately describes that there are no (common) operative semantics that make this expression evaluatable. Sure, if we define it to mean something, then that technically gives it semantic meaning, but in all ordinary cases of how works, this operation doesn't make sense and therefore is, by definition, semantically invalid. There should be one of two outcomes from this issue: It's a parse error. It's a runtime evaluation error. In both cases it's an error; that's not in question here. What kind of error and how it's handled are in question. We've already seen that it doesn't make sense to be a parse error because we need to account for the boolean value coming from the data (i.e. from ). That leaves us with a runtime evaluation error. Whenever a runtime evaluation error occurs, because we've decided that a syntactically valid path always returns, it MUST return an empty nodelist. Therefore, ANY expression that triggers an evaluation error MUST return an empty nodelist. This includes both and . I don't know how to stress this any more. There are no other logical options to resolve this issue.\nThis reasoning technique is also called proof by lack of imagination (related to the technique \"I don't get out much\"). 
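For contrast, here is a sketch (again mine, not draft text) of the semantics just defended: ordered comparisons are defined for numbers (and, once string comparison is allowed, strings), and every type-mismatched comparison simply yields false. The final line demonstrates the surprise discussed in this thread: negation and operator duality come apart for mismatched types.

```python
# "Type mismatch yields false" semantics, per the current-draft position.
def comparable(a, b):
    num = lambda v: isinstance(v, (int, float)) and not isinstance(v, bool)
    return (num(a) and num(b)) or (isinstance(a, str) and isinstance(b, str))

def cmp_lt(a, b):
    return comparable(a, b) and a < b   # false for any mismatch

def cmp_ge(a, b):
    return comparable(a, b) and a >= b  # likewise false for any mismatch

a, b = 1, "hello"
print(cmp_lt(a, b), cmp_ge(a, b))          # False False
print((not cmp_lt(a, b)) == cmp_ge(a, b))  # False: not(a < b) != (a >= b)
```

The upside, as argued above, is that every comparison is a plain boolean, so the boolean algebra laws and short-circuit evaluation survive; the downside is exactly the broken negation law the sketch prints.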
I hear that you are arguing for some exception semantics (similar to the catch/throw I brought up). I can sympathize with that, because I also argued for that. However, we are really free to define the expression language as we want, and what we have now is not that much worse than what I was (incompletely) proposing.\n(Found the rationale why Erlang does things this way: URL Unfortunately, we can't ask Joe for additional details any more. But apparently these titans in language design for highly reliable systems saw a benefit in a complete total ordering of the type system. JSON does not have a type system, so anything we do will be invention.)\nMy comment was about the current spec, especially now that PR had landed. The current spec says that yields true. Yes (unless that comparison was part of a large boolean expression which sometimes yielded false) that would return all the nodes. That's precisely what I am arguing for as I think it's the best option. I suspect absurdity, like beauty, is in the eye of the beholder. I can see where you're coming from. I don't follow the chain of reasoning there. The current spec returns, but doesn't always return an empty nodelist. At the risk of repeating myself, the current spec is a logical option in the sense that it is well defined and consistent. I think we are all agreed that none of the options is perfect. All the options under consideration are logical, otherwise we'd rule them out. Some would take more words in the spec to explain and that puts a cognitive load on the user for very little benefit.\nNAME It seems your preferred approach is: Let's try to flesh this out a bit. What about shortcutting? For example, what should the behaviour of be? There would appear to be three options: Prescribe an order of evaluation such as \"left to right\" and allow shortcuts. Result: all nodes are selected. Do not prescribe an order of evaluation and allow shortcuts. Result: non-determinism - all or no nodes are selected, depending on the implementation. (That's bad for testing and interop.) Disallow shortcuts (in which case the order of evaluation doesn't matter). Result: no nodes are selected.\nThis seems useful to me. The spec is free to specify that inequalities (, >=) are only for numbers. Short-circuit evaluation can be used to avoid errors, e.g.: will return the first array element.\nYou still have to parse this. And you have to know the value of before you evaluate it. That means you're able to know that this expression is nonsensical (to avoid \"invalid\") before you evaluate it. Thus shortcutting is moot.\nLet's use \"undesirable\" for expressions we don't like. The example that Glyn provided is just a stand-in for a more complex example that gets true from the JSON value. So the fact that you can parse this and find the undesirable part immediately is not so relevant. With shortcut semantics, the fact that the ignored part of the disjunction is undesirable also is irrelevant. So I do think that Glyn's question is valid.\nIn JMESPath, evaluates to, not , but \"absent\", represented in JMESPath by . When \"absent\" () is evaluated as a boolean according to the JMESPath rules of truth, it gives . It seems consistent that \"not absent\" is . What would you have if you resolved to the draft's notion of \"absent\", an empty node list?\nOk, let's try a modified example to avoid the syntactic issue. What should the behaviour of be (where results in a nodelist with a single node with value )? 
With NAME preferred approach, there would appear to be three options: Prescribe an order of evaluation such as \"left to right\" and allow short-circuits. Result: all nodes are selected. Do not prescribe an order of evaluation and allow short-circuits. Result: non-determinism - all or no nodes are selected, depending on the implementation. (That's bad for testing and interop.) Disallow short-circuits (in which case the order of evaluation doesn't matter). Result: no nodes are selected.\nI'd like to go with the simplest possible thing unless are strong reasons not to. I think the simplest possible thing is: Type-incompatible comparisons yield false and there are no \"exceptions\" or nonlocal effects. So in this example you unambiguously get and select all the nodes and if you want to optimize by short-circuiting you can do that without fear of confusion or breakage. Type-incompatibility yielding false has the advantage that it's easy to explain in English and, I would argue, usually produces the desired effect. If I am filtering something on and turns out not to be a number - almost certainly an unexpected-data-shape problem - I do not want to match. But if I that with something else then I'm explicitly OK with not getting a match here. Yes, there are still corner-case surprises, for example: is true if is non-numeric, but is not true. I claim that this is at least easy to explain and understand in brief clear language.\nA better choice for your argument would have been JS as that was one of the original implementations. Similiar behavior works there (as I discovered testing it in my browser's console). ! and result in false and result in true Interestingly and both result in true for JS. This makes it very apparent that JS is just casting the to and performing the comparison. This kind of implicit casting is something that we have already explicitly decided against. This means that even the JS evaluation isn't a valid argument. I could just as easily use a .Net language or any other strongly typed language to show that doesn't even compile and is therefore nonsensical. Why do we continually compare with this? It's a deviated derivative of JSON Path. Are we trying to align JSON Path with it? To what end? I thought we were trying to define JSON Path, not redefine JMESPath to look like JSON Path. \"... develop a standards-track JSONPath specification that is technically sound and complete, based on the common semantics and other aspects of existing implementations. Where there are differences, the working group will analyze those differences and make choices that rough consensus considers technically best, with an aim toward minimizing disruption among the different JSONPath implementations.\" Yet no one has thought to see what the implementations do in these cases. I would say that the short-cutting is likely preferred, but I don't think my implementation would do it that way. My implementation would evaluate all operands then apply the . This would result in the strange comparison with and so it would select no nodes. C# would shortcut this. and would be considered \"side-effect\" evaluations and would be skipped. That said, strong typing would guarantee that these would evaluate to types that are valid for , so the resolved expression is guaranteed to make sense in the event shortcutting didn't happen. 
(In reality, the C# compiler would optimize to a constant , and ultimately the remaining expression wouldn't even make it into the compiled code, so really there's no short-cutting either.)\nWell, I chose Erlang because their total ordering is the result of a deliberate design process. Any ordering that you can derive in JavaScript is the result of a haphazard, now universally despised system of implicit conversions — not quite as good an example for my argument as Erlang’s careful design. (Some people appear to have a tendency to derive processing rules for JSON from JavaScript. JSON is an interchange format, which derived its syntax from JavaScript. JSON has no processing rules, and so we have to add some. Any inspiration for the ones we decide to use in JSONPath is as good as any other, as long as it meets the needs of its users.) Grüße, Carsten\nFor the record, unlike Erland, comparisons in our current draft do not form a (a.k.a. a linear order). A total order satisfies this law: But, according to our current draft, both and are false. Our current draft does, however, provide a since satisfies these laws: That said, in our draft is not a strict partial order which would have to satisfy these laws: In our draft: is false, so is true, which breaks irreflexivity. both and are false, so and are true, which breaks asymmetry. Our current draft does however preserves the laws of boolean algebra (some of which would be broken by the alternatives being discussed in this issue).\nAnd typed comparisons in C# are the result of a deliberate design process. What's your point? You're invoking a selection bias. Our decision to not have type casting (i.e. ) demonstrates that types are important to us. That line of thinking necessitates that any comparison between different types must either error or return false because such a comparison is meaningless. We're not erroring, so we must return false.\nAre we now agreed then?\nIf by in agreement you mean that must also return false.\nAccording to the current draft, returns false and so returns true and thus returns false.\nOkay... You're twisting my examples. I want any expression that contains comparisons between types to return an empty nodelist.\nI want all items that are not back-ordered or backordered for less than 10 days.\nThe simplest possible thing would be to do what the vast majority of JSONPath implementations do when a comparison happens that is regarded as not supported, which is to abort evaluation and report a diagnostic. That has the additional advantage of being compatible with the charter \"capturing the common semantics of existing implementations\", if that still matters. It is also consistent with XPATH 3.1, see (URL) and (URL). The next simplest thing would be to abort evaluation and return an empty list (a few existing JSONPath implementations do that.)\nSpeaking with my co-chair hat on, I'd like to draw attention to the following text from section 3.1 of the current draft: \"The well-formedness and the validity of JSONPath queries are independent of the JSON value the query is applied to; no further errors can be raised during application of the query to a value.\" I think this has for a long time represented the consensus of the WG. If someone wants to approach the problems raised in this issue by proposing an exception mechanism, that would require a specific proposal covering the details and how best to specify it. 
Absent such a proposal existing and getting consensus, I think approaches that include a run-time error/exception mechanism are out of bounds.\nCo-chair hat off: I could live with that. Among other things, it's easy to describe. It's not my favorite approach but it's sensible.\nOn 2022-07-20, at 22:15, Tim Bray NAME wrote: Please don’t commingle exceptions with erroring out. Erroring out because of data fed to the expression is not consistent with the above invariant. Exceptions may be processed within the query (e.g., in the classical catch/throw form), and need not violate that invariant. “NaB” is an attempt to add to the data types in such a way that an exception can be handed up the evaluation tree as a return value. Exceptions tend to make the outcome of the query dependent of the sequence in which parts of the query expression are processed, so they may be violating other invariants (which are implicit in some of our minds). Grüße, Carsten\nI've been thinking why I keep liking having type-mismatch comparisons be just and made some progress. I think that is a compact way of saying . So if it's not a number, this is unsurprisingly false. If you believe this then it makes perfect sense that if is then is true but is false.\nNAME If is , then the spec's statement: implies that is false. Then the spec's statement: implies that , the negation of , is true. Essentially, and give non-intuitive results for non-numeric comparisons. This is surprising and far from ideal. However, because comparisons always produce a boolean value, they can be hedged around with other predicates to get the desired result, e.g. we can replace with which is equivalent to for numeric and is false for non-numeric .\nThe approach of forcing the filter expression to return an empty nodelist whenever it contains an \"undesirable\" comparison means that: it is not possible to hedge around \"undesirable\" comparisons with other predicates, results tend to depend on the order in which subexpressions are evaluated. Let's take an example. Suppose we want to pick out all objects in an array such that the object has a key which is either at least 9 or equal to . For example, in the JSON: With the current spec, the filter has the desired effect and returns the nodelist: The alternative doesn't allow that kind of hedging around. A solution with the alternative approach is to use a list .\nNext, let's explore the ordering issue with the alternative approach. What should the behaviour of be (where results in a nodelist with a single node with value )? (Apologies this is for the third time of asking.) With the approach of forcing the filter expression to return an empty nodelist whenever it contains an \"undesirable\" comparison, there would appear to be three options: Prescribe an order of evaluation such as \"left to right\" and allow short-circuits. Result: all nodes are selected. Do not prescribe an order of evaluation and allow short-circuits. Result: non-determinism - all or no nodes are selected, depending on the implementation. (That's bad for testing and interop.) Disallow short-circuits (in which case the order of evaluation doesn't matter). Result: no nodes are selected, but the implementation cannot take advantage of the short-circuit as an optimisation. I think option 1 is preferable and probably the most intuitive option. I wonder if there are any other issues (apart from being rather prescriptive) with that option? 
The boolean algebra laws would only apply in general when \"undesirable\" comparisons are not present.\nI suggest changing this to \"…using one of the operators , =. Then… I suggest removing this statement because we've defined , then there's some work to tidy up the definition of == and !=. And in every case, any comparison with a type mismatch is always false.\nActually, with a type mismatch is always true. I was hoping to extend the definition by negation to and , but that seems to create cognitive dissonances.\nIf you say that type mismatch always yields false, there's no problem, this reduces to true false, which is to say true. You don't need to say anything about order of evaluation. I think if the spec needs to specify order of evaluation, that is a very severe code smell. I'd find that very hard to accept.\nNo, the rule I'm proposing is simpler: Any comparison with a type mismatch is always false. Which means is not specified as \"the opposite of ==\" it's specified as meaning \"are values of the same non-structured type, and the values are not equal\". I can't tell wither and are equal or not if is not a boolean. They are not comparable.\nSo is always false. Not intuitive to me.\nHmm, having written that, I might have been going overboard. It would be perfectly OK to define as \"true if the operands are of different types, or of the same type but not equal\" and that would make perfect sense.\nWe also need to design in an escape clause for structured values, which so far we don't compare.\nOK, I think I have probably said enough on this, but I have been doing an unsatisfactory job communicating what seems to me like a reasonably straightforward proposal. Once the spec stabilizes a bit I'd be happy to do a PR. The key language, under \"Comparisons\" in 3.4.8, would be: True if both operands are of the same type, that type is not a structured type, and the values are equal; otherwise false True if either operand is of a structured type, or if the operands are of different types, or if they are of the same type but the values are unequal; otherwise false. True if both operands are numbers and the first operand is strictly greater than the second; otherwise false. True if both operands are numbers and the first operand is strictly less than the second; otherwise false. True if both operands are numbers and the first operand is greater than or equal to the second; otherwise false. True if both operands are numbers and the first operand is less than or equal to the second; otherwise false.\nThat approach, like the current draft, preserves the laws of boolean algebra and therefore the order of evaluation doesn't matter. It gets rid of some nasty surprises (such as ) which are present in the current draft. It seems that is the negation of , in which case it's probably simpler to define it as such. However, the approach is not completely free of surprises because it breaks: the converse relationship between and that if and only if not the converse relationship between and that if and only if not as well as: the reflexivity law (see above, e.g. is false) the strongly connected or total law (see above, e.g. it is not true that either or since both are false). Thus and would no longer be partial orders and and would no longer be strict total orders when these four operators are considered as binary relations over the whole set of values. 
We could rationalise this by thinking of these four operators as orderings of numbers with non-numbers added as \"unordered\" extensions.\nEh, I like writing each one out in clear English rather than having them depend on each other. But, editorial choice. It's still true if a & b are both numbers. If you accept that \"<\" means \"A is a number AND B is a number AND A I am very strongly against applying /// to anything but numbers and strings. All of glyn's proposals (1)-(5) define /// for all kinds of values. In which case you might want a diagnostic, like . Or the vast majority of existing JSONPath implementations.\nOn 2022-07-26, at 14:44, Daniel Parker NAME wrote: OK, so the new consensus is that we do want JSONPath to be able to break? (Not just syntax errors before the query meets data, but also data errors?) Grüße, Carsten\nIn the case of XPath 3.1, errors are raised or not, depending on a mode. Please let's avoid that. I personally don't think there is a convincing case for raising data errors. I would be open to raising optional, implementation-dependendant warnings. But I'm also comfortable with continuing the current design point of raising syntax errors based on the path alone and then absorbing any kind of anomaly in the data without failing the request.\nNAME Not even if, in Tim's words, \"Something is broken\"? Seriously broken? Proposals (1)-(5) all perform a common purpose: to take the comparisons that are less sensible, and resolve them to something, for completeness. Something is returned, and for all the proposals, that may be a non-empty result list, depending on the query. But consider a user playing around with an online query tool. Which result is more helpful? The result or the diagnostic The expression either side of operator \">\" must evaluate to numeric or string values\nI agree that diagnostics are great for attended situations, but for unattended scenarios, I'd prefer the behaviour to be robust.\nNAME For \"unattended scenarios\" we have error logs and alerts.\nYes, I'm comfortable with logging as a side effect of a robust operation (no failure after syntax checking the path).\nCo-chair hat off: While I do not have a strong opinion about diagnostics vs exceptions vs silent-false, I do note that this kind of type-mismatch problem is a corner case and I wonder if it justifies the investment of a whole bunch of WG effort. Co-chair hat back on: As I said, I think the current draft does represent a fairly long-held consensus of the WG. If we want to introduce a diagnostic/exception mechanism, it's not going to happen without a PR to make the option concrete as opposed to abstract. So if you want such a thing, arguing for it in this thread is not going to get us there.\nSo, one imagines something like: \"A JSONPath implementation MUST NOT cause a failure or exception as the consequence of applying a well-formed JSONPath to well-formed JSON data. An implementation SHOULD provide a mechanism whereby software can receive report of run-time errors such as type-incompatible comparisons.\" Hmm, I don't see anything in the draft spec about what happens if you try to apply a JSONPath and what you're trying to apply it to is broken, i.e. not valid JSON. Maybe we're in a world where callers have to live with failed applications anyhow?\nI'm not \"arguing for it\", I don't care what the authors of the draft eventually put in it. I'm simply providing comments or feedback on matters that may be helpful to the authors in thinking about the issues, or not. They can decide. 
There used to be more people doing that, but my impression is that most of them have dropped off.\nI think that deserves its own issue: URL\nI'd like to address this issue, so, unless someone objects strongly, I'll put together a PR for option 2 plus string comparisons. That won't necessarily be the end of the discussion, but at least it will give us a concrete alternative to the current draft.\nPlease do, and thanks in advance. We will do better if we have concrete texts before us to shape our discussion.\nOn 2022-07-19, at 15:55, Glyn Normington NAME wrote: The other reason that would speak against an option is that it lacks expressiveness, i.e., it is impossible to express a query that can be expected to be needed. I am specifically thinking about queries that first look at a data item to see what it is and then make a condition based on what was found. Regards, Carsten\nOn 2022-07-20, at 22:23, Tim Bray NAME wrote: … which reminds us that this cannot currently be explicitly expressed. Regards, Carsten\nI think there may be room for further fine-tuning but at this point I'd like to merge this and get a nice editors' draft that we can read end-to-end."} {"_id":"q-en-draft-ietf-jsonpath-base-446f74b2169db2ee17fdd4c587672d5f22bce43197468cac09a80e3072774c21","text":"This is a draft for us to contemplate and update to see if we can reach a satisfactory clarification.\nThinking about the discussion in and the current definitions ... leads me to the question of why object members can't be nodes. Consider a subtle modification of definitions: Member: A name/value pair in an object. Element: An index/value pair in an array. Node: A pair of a value along with its nodename (member name or element index). Root Node: The unique node with the name \"$\" whose value is the entire argument. Location: The unique path from the root node to a single node, manifested as a normalized path. This is an alternative node representation, i.e., in nodelists. This way confusing formulations like become obsolete. We also get a simpler definition for children and ancestors equivalent to child nodes and ancestor nodes. It seems to be consistent with that . Is there something I am overlooking?\nWe could do this, but that would mean our data model no longer maps to JSON, which doesn't have free-floating members (or elements in the above definition). Also, from a practical point of view, one usually wants to look at a node in the sense we currently have it, not as a combination of the edge leading into it and the vertex itself (in graph theory, node also is usually a synonym for a vertex, not that combination). (I think we already got rid of the confusing wordings.)\nThat definition of node would not uniquely identify a value in the JSON tree. For instance, in the JSON: the above definition of node says there are two nodes with the value 1 and the nodename 0. This isn't a node, according to the above definition, because it does not have a nodename. In the above JSON example, the Normalized Paths and both point at a single node (with value 1 and the nodename 0). Also, following on from NAME comment, the whole point of JSONPath is to select/extract JSON values from a JSON value, so that rules out selecting/extracting object members, since these are not JSON values.\nI am not quite convinced yet. Hmm ... where is it specified that a node definition must be unique? Nodes in an XML DOM tree aren't either. Object members have an explicit nodename.
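For concreteness, a hypothetical argument (our own minimal example) shows why a value-plus-nodename pair cannot serve as a unique node identity, while locations can:

```python
doc = [[1], [1]]
# Under the value-plus-nodename definition, both inner values are "a node
# with value 1 and element index 0" -- one node with two parents, i.e. a
# DAG rather than a tree.  The Normalized Paths stay distinct, because a
# location names the whole path from the root, not just the final index:
#
#   $[0][0]   and   $[1][0]
```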
Array elements and the root node have implicit node names (index and '$'). With our current definition, members/elements and nodes are seemingly totally unrelated. So maybe we can get rid of the term 'node' with its abstract definition completely then.\nThat was a somewhat simplified statement. We do have a use for normalized paths, which identify nodes.\nThe consequences of a node not being unique in a tree include: Outputting a nodelist from a query would not be so useful because it would be missing location information. Converting a nodelist into an array of Normalized Paths would be tricky as the nodelist wouldn't define the order of duplicate entries. A node could have multiple parents, so the tree wouldn't be a tree any more, it would be a directed acyclic graph. Not sure I follow. Object members are not values, do not have nodes, and so don't need a nodename. I don't think that's true. Each member has a value with a node. Each element has a node. Perhaps the (\"JSON Values as Trees of Nodes\") in PR makes things clearer? I don't think that's possible without introducing a synonym for \"node\".\nI think this should be clear now in the draft -- can we close this?\n2023-01-10 Interim: Yes, we can close."} {"_id":"q-en-draft-ietf-jsonpath-base-9a65d1a6def3f300af175053bdf78a734985aed3d5d9eb11171aa9c29a989d5f","text":"Avoid the term indexing, so we don't need to worry about whether it applies only to arrays or to arrays and objects."} {"_id":"q-en-draft-ietf-jsonpath-base-41d62993f8ed991ba32b456fadeff2d76a88466ef52465dab53c961a18b349c7","text":"I like this structure. We might come up with more validity requirements, so we may want to keep this even if =~ goes away (which is pretty much what we decided last week)."} {"_id":"q-en-draft-ietf-jsonpath-base-e1b7da61fe3eef07050d2e352b24ad69dea159b02e8bb021a37218cc58493647","text":"Add pair of surrogate escapes to note on string format.\nLGTM, thanks. Just one editorial suggestion."} {"_id":"q-en-draft-ietf-jsonpath-base-61bbd758f12b22e8f8b28378dd087b0c050ff7cc3b9001e4abb93f3f559c209f","text":"E.g.,\nI propose a slightly more compact naming of the ABNF of -segment and -segment, which should be more consistent between the two. is well known from LaTeX and means or there.\nabove is wrong (and so it is in the current spec). It needs to read ... to not allow triple dot ( :-)\nNAME Wouldn't dotdot-wildcard etc. be less cryptic?\nHere are some quibbles with the current ABNF (prior to PR being merged) which we can sort out after The Great Renaming:\nYes, yes. See. Fixed in I like the way this looks now. Maybe I don't understand; can you elaborate? Fixed this (and following) in Removed in To be fixed in overall overhaul, next Yes! To be fixed in overall overhaul, next To be fixed in overall overhaul, next Fixed in Discuss this in its own issue, I'd say. Fixed in It certainly does not -- it is the subset that is actually needed for normalized paths. To stay normalized, we don't allow random other ones.\nOf course ... I'm quite dispassionate here.\nPlease see my proposal in\nWith merged, we still have the expression cleanup and the separate issue (enhancement) whether should be allowed, to mean\nMy remaining points are covered by PRs , , and .\nShouldn't be named according to all other above ?\nYes, I agree. See URL\nNAME Are we ready to close this issue now that the PRs have been merged, or did you have more expression cleanup in mind?\nThe only thing that comes to my mind is standardizing on the column where the is -- this is currently rather chaotic.
But that can be a new issue.\nIndentation of : number of lines such indented\n(And it doesn't need to be consistent, just a bit less chaotic.)\nLet's make the a new issue. (I haven't got a good feel for the optimal solution.)\nBefore we close this issue, URL slipped through the net in my earlier PRs.\nThen could resolve to either or . We would introduce an ambiguity, as we already found with JSON Pointer () ...\nOK, with the latest blank space realignments that are part of (5e2cc85), I think we are done with this.\nThank you! ... or do we need two selectors here (Ch.2.61) child-longhand = \"[\" S selector 1*(S \",\" S selector) S \"]\" given that '1*' means 'at least one'."} {"_id":"q-en-draft-ietf-jsonpath-base-13989b8055ffbbd9d8d0ccd4876b7ca5445153dc2c4882e0a86f65e9a8ec79be","text":"Thanks to NAME for spotting this.\nNAME I presume you are ok with this change, so I'm going to merge it. Any concern, let me know and we can adjust the result.\nYes. Just didn't have a chance to check this in context, but if I remember this properly, this change is good (and necessary)."} {"_id":"q-en-draft-ietf-jsonpath-base-63e8100b16b86d3d55422b29ac535d17eb8f1217ffdcc221d702560aba796bfd","text":"NAME Deleting the text should have addressed your original editorial issue. This PR also adds the missing cross-reference. If you're happy, please approve."} {"_id":"q-en-draft-ietf-masque-connect-ip-3cc265dcbe1c0969fcaa3382d99f015d06461bf438b2eb48076ca58e29e7cda2","text":"I'll just use a single example here because I think it makes it easier to express my point, but the problems might be wider. The URI template may include a variable, which is later defined in URL as . The value can only be from 0 through 255; I presume this is a decimal value. What is expected to happen if I send some bad values like: I presume that the recipient would reject the request with a bad path, by virtue of applying the requirement in section 4 that :path SHALL NOT be empty and following the expansion requirements in Section 3 and following those requirements on to Section 4.5. That's a lot of hops to get things right. Is there any easier way to present these?\nAgreed, we need better text to specify server handling. For example, we don't currently explicitly state that the server has to percent-decode these variables (but we say they have to be percent-encoded). When we add new text about how to extract the variables and decode them, let's also say that the server has to validate that they match requirements and has to treat the request as malformed if they don't\nNAME can you write up some text?\nAppears to look good."} {"_id":"q-en-draft-ietf-masque-connect-ip-1ac49ff4455263e739b3e5ab597573a7515b88093e2013c973cc29ca382cb3e7","text":"This change adds text that addresses : addresses conveyed by ADDRESS_ASSIGN can be used in the \"source address\" field of an IP packet, while addresses conveyed by ROUTE_ADVERTISEMENT can be used in the \"destination address\" field. While we're here, also add a small clarification that when a prefix is assigned/advertised, any of these addresses can be used in the source/destination field, respectively.
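To illustrate the assign/advertise split described in this change, here is a rough sketch (our own naming and wording, not draft text) of how a capsule receiver could apply the two capsule types when generating packets; a proxy enforcing anti-spoofing would drop packets failing the source check:

```python
import ipaddress

# Prefixes received in ADDRESS_ASSIGN bound the *source* field; prefixes
# received in ROUTE_ADVERTISEMENT bound the *destination* field.
assigned   = [ipaddress.ip_network("192.0.2.0/24")]   # from ADDRESS_ASSIGN
advertised = [ipaddress.ip_network("0.0.0.0/0")]      # from ROUTE_ADVERTISEMENT

def may_generate(src: str, dst: str) -> bool:
    s, d = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    return (any(s in net for net in assigned) and
            any(d in net for net in advertised))

assert may_generate("192.0.2.7", "198.51.100.1")
assert not may_generate("203.0.113.9", "198.51.100.1")  # unassigned source
```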
A future extension could create semantics for these addresses, but that is not called out in the text at this time.\nI know I'm late looking at this, but thanks, as this small clarification is really helpful\nThe current text in this draft needs to be updated to match the changes from URL I'll write up a PR once existing conflicting PRs are merged.\nNAME is this unblocked now?\nWe need to clearly state that ADDRESS_ASSIGN defines a set of addresses the receiver of the capsule can use as a source IP address on packets it generates. We need to clearly state that ROUTE_ADVERTISEMENT defines a set of addresses the receiver of the capsule can use as a destination IP address on packets it generates. Should we rename the capsules to make this relationship more clear in all use cases?\nLet's bike shed the name here NAME NAME NAME\n(to clarify Tommy's comment - if anyone comes up with a new name they like, they should post it here. If not, we'll go with advertisement and withdraw)\nThanks!"} {"_id":"q-en-draft-ietf-masque-connect-ip-b0f18d5df27c90ccdaf3b47fee28013a12e00c7db11edc1b9d8fb2bc3c49c86a","text":"Aligning CONNECT-IP with the changes in HTTP datagrams and CONNECT-UDP.\nNAME when do you plan to merge the other PRs?\nNAME I plan on merging the other PRs as soon as we have WG consensus, which (assuming no objections) should happen at the end of the month (see ) Hi folks, During today's interim call, there was strong support to proceed with the output of the Design Team. This email begins a call to confirm that consensus. As a reminder, this consists of the following changes to draft-ietf-masque-h3-datagram and draft-ietf-masque-connect-udp: URL URL Please review these changes and send your comments to the list. This consensus call will conclude on February 28. Thanks, Chris and Eric"} {"_id":"q-en-draft-ietf-masque-connect-ip-12588bd5348e6d47ed076c8778ed9fb34cc3380ae7c20dcd137df3c1645d3a69","text":"Connect-IP enables the MASQUE client to set the source address for any IP packets it requests the MASQUE proxy to forward to the external network. This enables the potential for source address spoofing by the MASQUE client or an entity prior to it. For proxies that assign addresses to the client and allow general access to the Internet, I think it would be really good to recommend or require the MASQUE server to perform source address validation. However, there clearly exist some use cases where MASQUE will be used to establish a virtual path between two networks and where source address validation is hard, as neither network may be a stub network. However, I think there are several things to address in the document. 1) Include in the security description a discussion of the risks here and when it would be useful to perform source address validation, especially for the assigned address(es) use case where rejecting all but the provided address range would be possible. 2) Consider if the security risks for certain deployments are such that source address validation is RECOMMENDED or REQUIRED\nI don't think this is correct. I thought there was text in the draft that enforced that the source address was in the set of addresses the endpoints agreed were routable. [4.2.1] The ADDRESS_ASSIGN capsule says \"Any of these addresses can be used as the source address on IP packets originated by the receiver of this capsule.\".
I suppose this means we need a clarification that a MASQUE proxy MUST drop any packets whose source address is not in the set of addresses agreed upon in ADDRESS_ASSIGN capsules.\nI think we can write a PR that addresses this. Here's a rough shape of a paragraph: how does that sound?\n+1 to this text."} {"_id":"q-en-draft-ietf-masque-connect-ip-69ffba9c33abb0faa9586c3d1e7e8fc9529b84e474b883ce8382cfbd68229fd8","text":"NAME let's discuss that on . I'd like to merge this PR first, then decide on a plan for before submitting the next revision to the datatracker\nSplitting {ADDRESS_REQUEST, ROUTE_ADVERTISEMENT} into ..._V4 and ..._V6 would result in smaller capsules that have fewer error conditions. It would also keep IPv4 and IPv6 logic distinct.\nI don't think the size of these capsules matters given their rarity. I can see the error condition argument but it's pretty minor and would come at the cost of making the specification longer. I don't feel too strongly here, but I think I have a slight preference for keeping things as they are.\nI agree with David that size is probably not a big concern. I think having one set of capsules is less complex.\nI also have a preference for keeping the capsules as they are with versions included; it also allows the route advertisement capsule to include routes for both v4 and v6 in one capsule, which I think is very important. I wouldn't want to lose that feature.\nI similarly have a preference for keeping v4 and v6 in one capsule and agree size is not a big concern.\nI just chatted with NAME to see what error conditions he was thinking of. Upon discussion (v6 route advertised when only v4 address is available), we concluded that this is not an error case, it's just a route that's not yet usable. We've concluded that this is something that we're comfortable leaving as-is.\nImplementing this on the client, I was wondering if there should be a way for the client to ask the server for an address without specifying what address it wants. This could allow a VPN client to mention that it wants an IPv6 address or not. If we think this could be useful, we could encode it as or .\nI believe asking for the Any address is what we discussed previously, so +1\nYou mean ADDRESS_REQUEST(IPv4-0.0.0.0/32) don't you. I think the format of the request works fine. But we do need to before we implement this into the specification.\nYes — if we have this, one option is to always have an ADDRESS_REQUEST expected from clients, and then there doesn't need to be signaling. Or we do the simple solution to 54 to say \"I expect to send N capsules with my request\".\nYou're right, this could be a potential solution to . Let's discuss them together tomorrow\nRegarding IPv6, I think a standard request that would make sense is asking for any /64. I guess that works fine with this idea too?\nOh that's interesting, if the IP Address is or , the prefix length is a hint for the requested length. I like that.\nIETF 114: Add an editorial sentence to \"remember, what you ask for may not be what you get, especially with regards to address families\". A request should be paired with a response; multiple requests can just say \"you have the same ones you had before\". Clarify that it's cumulative. Also solves unassign/removal of addresses.\nSo there appears to exist some downside with the timing of the interactions when one wants to use the Address-Request capsule.
So the basic problem is simply that the proxy will assign an address before the Address-Request capsule has been received, thus preventing the proxy from processing the request in the right context. So let's assume that the client has already created one connect-IP request with a limited scope and been assigned IP-A. Then it wants to make an additional request for another limited scope proxying and would like to get the same address assigned. Due to the separation between the HTTP request for the Connect-IP and the capsule Address-Request, a server can actually process the whole HTTP request prior to receiving the Address-Request, potentially resulting in another IP address assignment (IP-B) being provided to the client, despite it asking for IP-A to be assigned also to that request. So does the HTTP request need an indicator parameter to tell the proxy to wait for one or more \"request\" capsules from the client before processing the request?\nOut of curiosity, in what scenario would the client \"like to get the same address assigned\"? It makes sense for the proxy to decide to reuse addresses to save its address pool when the scoping allows it, but why would the client care?\nI could imagine this happening if the client was trying to resume some previous session that it lost due to an outage? Seems to be niche, but it's certainly possible.\nSo the most obvious use case for this is to request ipproto=ICMP in combination with, for example, UDP. Then getting ICMP for another IP address than the one you already have UDP for is mostly useless if you intended to use the ICMP messages for hints when things fail for your UDP-based protocol. I would note that for an endpoint looking to do P2P NAT traversal stuff, getting an IP address rather than using Connect-UDP does make sense as it ensures the client has control. It enables the IP address for its UDP traffic to have an Endpoint-Independent Mapping (RFC 4787). These might be niche usages, but I would like to avoid creating artificial barriers that make things impossible or at least hard to realise.\nThanks, this makes sense. Should we suggest that clients bundle the ADDRESS_REQUEST in the same packet as the HTTP headers and suggest that servers check for those before responding?\nI am still worried that this creates issues as your request may not fit in a single packet to begin with. My solution would be to actually move the address request part into the HTTP request itself as a header field. That would ensure an atomic operation and no risk of errors.\nMagnus, is your proposal to require, or to allow as an option, for the address request to be moved into the HTTP request? Why not bundle the capsules over a reliable delivery mechanism instead?\nI first of all want to ensure that you can do a request that ensures that, for those cases when it is necessary to interpret the Connect-IP request in the context of the desired IP address, it can be done, without uncertainty or requiring a wait time to see if there are capsules incoming. So I think primarily allow as an option. I think a question back to your usages: do you have cases where one would need additional capsules to correctly interpret the option? Should we look for another solution than the one I proposed to better cover the necessary functionality?\nI want to be able to continue to do ADDRESS_ASSIGN capsules during the lifetime of the connection to support e.g., IPv6 Privacy extensions but without having to delegate the entire subnet.
So I don't want to move to header-only.\nI think NAME is describing something similar to the / capsules that we had in the original Google . The only difference is that we would add an HTTP header to say \"start off in atomic mode and wait for atomic end before sending addresses and such\". While I think this does sound useful, does it need to be in the core spec or could it be an extension?\nIs there any reason to not bring back the capsules?\nThe only reason I can think of is to keep the core spec as short and simple as possible. But if we think this is an important feature then I'm not opposed to bringing back .\nI agree with the sentiment of keeping the core spec short and simple, but the trade-off is also knowing what the minimum viable featureset we can always rely on is. If we conclude that capsules are in that set, that IMHO weighs towards including them. Given that both Magnus and I have now presented use-cases for atomicity, I think this finally swings towards including?\nI agree with that\nSo why I used atomic here was due to the split between HTTP and Capsules. So I don't see that this issue currently motivates the need for ATOMIC capsules. I really would prefer that the necessary parameters for the CONNECT-IP request are in the HTTP request, either in the URI or in an HTTP field. To be clear, I understand the need for the capsules as they are useful for doing things after the initial request, but creating this binding between the HTTP request and the capsules creates its own set of complexities. Likely bigger than what duplicating the carrying of the necessary information results in, especially when the use cases may be related.\nI agree with Magnus and think we should specify a way to add the address request to the HTTP request. If there is a timing dependency it's better to put all needed information into the same request than specifying extra and complex logic to achieve the same in hindsight. That doesn't mean that we don't need the address request capsule (for later use in the connection). I think we need both.\nI dislike having two ways of conveying the same thing. How about a simple HTTP header that tells the receiver that there are 4 capsules coming right after the HTTP headers?\nI'd prefer to avoid \"atomic capsule\" starts and ends. If we need a way to guarantee inclusion of the local address in the first request, having it be in the HTTP request makes sense, either as another URI parameter, a header to say what address you want, or a header to say you will have capsules that need to be parsed before responding. That latter option is flexible, certainly, but could be over-engineered. Another option is to tightly scope a header to the use case we think is relevant. If the use case is \"please use the same address as this other request stream I already have for connect-ip or connect-UDP\", we could just have a header to say for any connect- request that it is entangled with another and should share some properties. I could imagine this even without CONNECT-IP. Today with private relay, we have to ask the proxies to prefer to use the same local IP for everything in a connection from the client, but letting the client choose what gets grouped might be even nicer. This option also lets you issue multiple requests, say they should share, without needing to know exactly which IP any of them will get assigned first.\nUnfortunately requests and headers are end-to-end but streams are hop-by-hop, so there is no easy HTTP way to have a request refer to another request.
I'd be supportive of adding the header but I'd push back on adding a header that duplicates a capsule. This is starting to feel like something best-served by an extension\nYou could solve the correlation by having a \"group\" ID that you attach to all related requests. No need to refer by stream ID.\nWhat is the scope of a group ID? Connection? If so it's the same as stream IDs.\nI think this should be some extension, but it could be a UUID too, doesn't need to be connection specific.\nAh, thanks. Yeah UUID would work. Back to the issue at hand, I propose this feature request get moved to an extension.\nAll of these alternatives strike me as much more complex than the semantics of . I'd rather we have a single mechanism, even if it's slightly more powerful, and restrain its use than try to invent new ways to solve each problem that arises.\nAtomic doesn't help coordinate between requests on different streams, etc, so it's not really solving the full problem here. Either way, I think this is something to defer to an extension.\nHeaders don't solve that problem either, so I'm not sure that argument really makes sense in this context. Cross stream coordination seems out of scope to me.\nSo you are taking this in directions I did not intend. I am just asking how to ensure that the server does not answer the client's request prior to having received any capsules if they are coming. And the real issue is that the server can't know unless the client either includes the corresponding data in the HTTP request or includes an indication that there will be capsules that are relevant to the requests. So, my main worry is whether waiting for a capsule to answer will actually work if there is an HTTP/1.x intermediary connection on the path between the client and the server. I guess it is sending the capsule after the request and hoping it makes it. Because to my understanding, in HTTP datagram the peer does not expect any data to arrive so it must be upgraded. Thus, one can end up in cases where the capsule protocol does not make it, where the HTTP request does arrive.\nWhile we might want to have a more generic extension to relate information to multiple requests, I think the problem Magnus describes needs a solution now. The problem is that there is a dependency between replying/reacting to a connect request and potentially information sent in capsules. The best solution is to send this information together with the connection request instead. I don't see any concern with having an HTTP request header or URI parameter and the capsule. What's the problem with that?\nMagnus, what is \"the client's request\" here? Is it the extended-connect itself, or a response to a capsule? If it's the former, the server can't know what capsules, if any, are coming at this time, as we haven't yet established we are using the capsule protocol. If it's the latter, capsules trivially solve this, as you cannot process any of the requests until the capsule is received. Mirja, I'd like to avoid having 2 ways to do things, especially since putting it on the HTTP request is a less-generic mechanism and cannot be reused during the connection, whereas capsules can. If we believe this feature is important, I'd like us to solve it generically, rather than special-casing one use case we are aware of right now.\nSo the client's request is the HTTP request that is only complete in semantics when combined with the request-address capsule.\nSorry Magnus, I'm not sure I understand what you're trying to say.
I don't think we can enforce a semantic that a higher-level HTTP request is only complete after some bytes on the then-negotiated protocol have been exchanged.\nBut this is really the issue: there is currently no way for a client to ask the server to only provide a targeted connect-IP forwarding if the IP address it can provide is the one the client asks for, in a consistent way. My interpretation of the specification is that either: 1) the server will never take the request-addr into account initially for the request, and interpret the request-addr after it has created a Connect-IP forwarding; or 2) the server will sometimes take the request-addr into account by having a short delay awaiting the request-addr capsule, which may or may not make it in time for the implementation-specific timer. This inconsistency, or failure to address a use case that I think several people were interested in resolving, is a real problem. A problem I think we will have to take with us to the meeting.\nThis ultimately stems from us not having an explicit capsule/message to indicate that forwarding should actually begin. Right now it's implicit as soon as the capsule is assembled with the IP addressing information. This is leading to your proposal to stick more things onto the initial request, whereas I believe the correct solution is to add more capabilities to the capsule protocol to express these scenarios.\nI will consider what you are saying. However, I will note that we might have different goals. I do have a need for identifiable streams that can be referred to in extension signalling. Using capsules does not achieve that with the current design, as the stream identifier comes from the HTTP request, not the subsequent capsule signalling.\nI hope so, constructive discussion relies on all of us considering what the others are saying. Given that we're all trying to solve slightly different problems here, I see two outcomes: 1) we have one simple generic solution that happens to work for everyone; 2) we punt this to more specific extensions since these problems are not generic\nSo I think the issue here is what we consider core functionality. My observation here is that Alex is promoting functionality that will make it easy to make quite rich things for the network tunnelling use case that you are promoting, while I am bringing up something that to me appears core for the single endpoint tunnelling. So I think we actually need to understand the issues as well as what usage and how usable this is in common, and how to handle cases where the different use cases actually benefit from having the functionality in different places.\nI'll reiterate my suggestion from the meeting: it would be better to have IP-version-specific capsule types for each capsule. It saves a byte and also avoids problems with version mismatches.\nThanks! I left 2 minor editorial questions, but feel free to merge as-is."} {"_id":"q-en-draft-ietf-masque-connect-udp-81338c4a59b02bed63936b28e67927d5c2f19167186e281c386166ae633c6b9","text":"The technicalities here are fine but I think someone will pick up on the grammar sooner or later. How about something like below (take or leave it) \"Clients issue requests containing a \"connect-udp\" upgrade token to initiate a UDP tunnel associated with a single HTTP stream.
The client indicates, to the proxy, the target of the tunnel using the \"target_host\" and \"target_port\" variables of the URI template (see )."} {"_id":"q-en-draft-ietf-masque-connect-udp-86d1f7de01269cab6f7d1628f2c36cffa708dd01341febafbb4863fc3b67997d","text":"Given that this draft is couched in terms of , the use of in HTTP/1.1 is surprising. At the least, a reason should be mentioned.\nClosing for now, as I have more serious concerns that may make this OBE.\nReopening."} {"_id":"q-en-draft-ietf-masque-connect-udp-d06a4f062b4a8d72483773d53d5819b19aa783b01b41febafbb4863fc3b67997d","text":"The client configuration section says: ... but the examples contain: Which is it?\nThis was an accident, I suspect it dates back to when the document referred to which contains both path and query. Fixing to mention both path and query via ."} {"_id":"q-en-draft-ietf-masque-connect-udp-5f934a07bf124316cc080790a9de811293464ba040132e506851de7e8982cf5f","text":"Is this strictly necessary?\nYou're right, it's unnecessary and doesn't add anything. Removing via ."} {"_id":"q-en-draft-ietf-masque-connect-udp-49e7758c5e5bad0d054f2c3dee6eb9db65352d6524cfd493778bcbcc0992e622","text":"PR in response to Al's OPSDIR review -- Reviewer: Al Morton Review result: Has Nits Hi David, this is the OPS-DIR review. I tried to stay in my lane. I found this paragraph confusing (which doesn't mean it is incorrect, just that someone with limited background read the section 2 requirements list, all MUSTs, and then the paragraph below, which seems to have a conflicting MUST with the SHOULD and MAY that follows). Help readers like me understand the options you are allowing here: If the client detects that any of the requirements above are not met by a URI Template, the client MUST reject its configuration and fail the request without sending it to the UDP proxy. While clients SHOULD validate the requirements above, some clients MAY use a general-purpose URI Template implementation that lacks this specific validation. I see that Christer's GEN-ART review picked up on this too (I looked after I was done...). In Section 5, there seems to be a possible operations issue for proxy operators: Clients MAY optimistically start sending UDP packets in HTTP Datagrams before receiving the response to its UDP proxying request. However, implementors should note that such proxied packets may not be processed by the UDP proxy if it responds to the request with a failure, or if the proxied packets are received by the UDP proxy before the request. This seems like a good place to limit the amount of optimistic traffic, given that the request is not yet accepted. (also, would the optimistic traffic use Context ID zero?) In Performance Considerations UDP proxies SHOULD strive to avoid increasing burstiness of UDP traffic: they SHOULD NOT queue packets in order to increase batching. This requirement is written qualitatively, so users might \"know a violation if they see it\", but not hold the proxy system/operator to any specific performance without providing more details here. Inter-packet delay variation measurements on proxy ingress and egress would characterize increased burstiness well. Another solution is s/SHOULD/should/ (I doubt some increased burstiness will be avoidable, or enforceable.)
Thanks in advance for considering these comments, Al"} {"_id":"q-en-draft-ietf-masque-connect-udp-64daf0b4dbcb06bbcfd470f01312d58dbde2d10e167aa00de2e79c67fc12f25a","text":"This text was a mistake I made when converting from Datagram-Flow-Id to capsules.\nSection 4 states that the proxy responds to a CONNECT-UDP request with 2xx and a corresponding registration capsule to indicate end-to-end support of datagrams. If the client sends a REGISTER_DATAGRAM_CONTEXT capsule, the corresponding capsule from the proxy will be of the same kind, and it will register a new server-generated context ID. So now we would have two context IDs for the same stream, which seems a bit unnecessary. Could we simply omit the step where the proxy sends a registration capsule and instead state that a 2xx response to the CONNECT-UDP request is sufficient indication of end-to-end datagram support?\nMy reading of the spec is different, URL says: So for example, if the client sends a request on stream 4 and then REGISTER_DATAGRAM_CONTEXT { Context ID (0)}, the server will respond with REGISTER_DATAGRAM_CONTEXT { Context ID (0)} to confirm that the context is accepted.\nThat makes sense, I was assuming that the \"used in both directions\" referred to datagrams. I think my confusion in part came from the same section that says: But sending a registration capsule with an already registered context ID is not a new registration, I suppose. Maybe it's good to clarify this a bit in the datagram spec. I'll open a separate issue there.\nI agree the draft is a bit ambiguous here.\nIndeed, the spec is unclear here, my apologies. I got this wrong when I rewrote the old Datagram-Flow-Id text. The server doesn't need to echo the registration capsule here since the client's registration is bidirectional. NAME we should add text to HTTP Datagrams to clarify what we want to do about echoing registrations: is it (1) required, (2) allowed, or (3) forbidden? I'm leaning towards (3) unless we have a use case for communicating the fact that a context is \"accepted\", in which case I'd say (2).\nAha! In that case I apologise too. This seems important for interop! Now I remember, I think some people felt registration ack wasn't needed as long as there was a way to reject. And we have that with CLOSE_DATAGRAM_CONTEXT. So I think (3), modelled as implicit accept and explicit close, sounds good. That cuts down the chatter (and probably simplifies some code in my implementation)"} {"_id":"q-en-draft-ietf-masque-connect-udp-3d4b453fefbe7481379e20797fdcb7d57bb450a3f26c389c2abede4ee0a3e7fe","text":"A proxy can receive a UDP packet from an origin to a client that cannot fit within a DATAGRAM frame, given either the limit set by the maximum datagram frame size, or based on the current path MTU. In this case, one solution is for the proxy to generate ICMP errors back to the sender. If that sender implements PMTU discovery, it should adjust its size.\nI think adding advice here is good, though we can't mandate sending ICMP because most OSes don't let user space applications send ICMP.\nYes, we can't mandate it, but we should recommend it I think. For a full-blown production server, it should be possible.\nRecommendation seems sensible. Although this does get me thinking about what the expectations of fragmentation are, we can't expect all UDP protocols to set DF.\nThe DATAGRAM capsule type offers one way to address this. Capsules can be arbitrarily large, because they'll be reassembled by QUIC.
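The fallback being discussed reduces to something like the following sketch (ours; the helper names and the capsule-type constant are placeholders, not API from any draft):

```python
DATAGRAM_CAPSULE = 0x00  # illustrative constant only

def forward_udp_payload(payload: bytes, quic_max_datagram: int, conn):
    if len(payload) <= quic_max_datagram:
        # Fits in a QUIC DATAGRAM frame: keep the unreliable, unordered path.
        conn.send_quic_datagram(payload)
    else:
        # Too big: fall back to a DATAGRAM capsule on the request stream,
        # where QUIC reassembles it -- at the cost of the stream's
        # reliability/ordering semantics and of hiding the true path MTU.
        conn.send_capsule(DATAGRAM_CAPSULE, payload)
```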
If it won't fit in a QUIC DATAGRAM and you don't want to drop it, it goes in a DATAGRAM capsule. Now, that might not always be desirable, because capsules don't have all the behavior that causes you to choose datagrams in the first place; a proxy that behaves that way will mess with PMTUD, and in the degenerate case cause the server to select an MTU back to the client that always gets encoded in capsules. Perhaps the client should be able to inform the server whether it prefers oversize messages to be tunneled or dropped?"} {"_id":"q-en-draft-ietf-masque-connect-udp-1bf222cae3320f87e17b3b727f8a81411546e88d6c667de45f02ae355dc344cc","text":"recovery (e.g., [QUIC]), and the underlying HTTP connection runs over TCP, the proxied traffic will incur at least two nested loss recovery mechanisms. This can reduce performance as both can sometimes independently retransmit the same data. To avoid this, HTTP/3 datagrams SHOULD be used. This text could maybe be clearer. Is it trying to say \"You SHOULD use H3 + Datagram\" or \"You SHOULD use H3 and if you do you should use datagram\"? I.e., is it possible to use H3 but not datagram?\nYou're right, this is unclear. That text predates the refactor of HTTP/3 Datagram to HTTP Datagrams, so we should tweak it to say \"To avoid this, UDP proxying SHOULD be performed over HTTP/3 to allow leveraging the QUIC DATAGRAM frame.\""} {"_id":"q-en-draft-ietf-masque-h3-datagram-6b45e69b7790e6accd185440f629bf075033ad27ff3013c16370fd9769355a77","text":"This may be answered and I just missed it. After reading I think that for HTTP/1 and HTTP/2, Datagrams (via capsules) can only be sent for new methods/upgrade tokens. But it wasn't clear to me if that same property was true for HTTP/3. For example, would it be valid to send HTTP/3 datagrams along with a GET request? If so, I think this means that for HTTP/3 a GET request could have both a body AND datagrams?\nThat's a very good point. If anything, we should be prescriptive about what endpoints do if they receive a datagram associated with an existing method like GET. I suspect drop or error are the most likely candidates here.\nI vote for closing the connection with an error. The datagram has to relate to something in the request. If the request doesn't or cannot articulate a datagram use, the sender of the datagram is probably broken or stupid.\nSince datagrams are strongly associated with a stream, I think it'd be more consistent to make this a stream error. Regardless, folks may want an extension to GET that allows sending datagrams with it, but it makes sense to require that this extension be negotiated via headers or settings first, so the semantics of such datagrams are agreed upon beforehand. So, I'm supportive of making this a stream error, with the understanding that any extension can lift this requirement between consenting parties.\nI'd prefer to make people suffer but I could live with a stream error. Like you suggest, by letting people agree on a change of requirements, we support extension. But the default of erroring helps datagram avoid the mess that GET with body is.\nAlso, I reserve the right to count the number of these failures and close the connection if it reaches my annoyance threshold
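A sketch of the receiver behavior being converged on (stream error rather than connection error); all names here, including the error code, are hypothetical:

```python
H3_DATAGRAM_REJECTED = 0x0  # hypothetical error code, for illustration

def on_h3_datagram(quarter_stream_id: int, payload: bytes, requests):
    stream_id = 4 * quarter_stream_id
    req = requests.get(stream_id)
    if req is None:
        return                      # no such request yet: drop or buffer
    if not req.datagrams_negotiated:
        # e.g. a plain GET with no datagram semantics: error the stream,
        # leaving the rest of the connection (and other clients behind a
        # method-agnostic gateway) untouched.
        req.reset_stream(H3_DATAGRAM_REJECTED)
    else:
        req.deliver_datagram(payload)
```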
(A connection error might be nice because it prevents a confused peer from sending a possibly incessant stream of datagrams. On the other hand, confused peers gonna be confused and David's point about datagrams being strongly associated with a request is a good point)\nI think it should be a stream error to enable method-agnostic gateways and proxies, which may be multiplexing different clients' traffic onto a single next-hop connection.\nIn , NAME points out that Content-Type doesn't make sense with the capsule protocol. Should we prohibit it?\nIn , NAME points out that status codes 204, 205, and 206 don't make sense with the capsule protocol. Should we prohibit them?\nThe spec currently says: In HTTP/1.x, the message content is generally delineated by the Content-Length header field or the Transfer-Encoding; another request can follow on the same connection. One of the (many) ways in which CONNECT and Upgrade are special is that in H1 they take over the connection and make the request extend to the end of the connection, preventing any further requests from being made. The description attempts to be generic about method, but implicitly assumes an indefinite-length stream of bytes, which (in H1) is only true of CONNECT or Upgrade. This concept is correct for what we have in mind, but the boundaries aren't crisply described right now. NAME may have opinions.\nThe sentence right after the bit you pasted is: That constrains this to CONNECT/Upgrade/new methods. Is that not enough?\nI think \"new methods\" is probably the most questionable. It's not clear to me that a new method can obtain the special behaviors of CONNECT. If it were restricted to upgrade tokens (whether with HTTP/1.1 Upgrade or RFC 8441 Extended CONNECT), I think that would address the concern.\nI believe what he means is this definition of a data stream only holds for CONNECT requests. There is no mechanism to extend that to new request methods since all other methods are required to have length-delimited data (or no data).\nLikewise, I don't know of any reason we would want to support such extension methods in the future, since that is a known failure point for CONNECT. It is far simpler for a new method to require chunked or data frames.\nRestricting this to extended CONNECT and Upgrade sounds reasonable to me\nFor Upgrade, the new protocol takes over after the current response is complete. Hence, the description above doesn't really work for Upgrade either.\nNAME the intent of that text was to mean \"after the current response is complete\" - do you have a proposal for phrasing that would work better?\nSemantics says: URL I think is consistent with the definition of \"data stream\" in this document. (And in fact uses the term \"data stream\" without defining it.) That doesn't seem to match deferring the new protocol until after the response.\nI don't think there is any technical distinction between the two descriptions; it's nomenclature to say that what follows the CONNECT is a tunnel ended by connection closure, whereas what follows Upgrade is a different protocol that might very well come back to this one (as determined by the other protocol).\nAnd point to URL This is an important part of the security analysis.\nOnly if you believe that the Sec- prefix is useful for the Capsule-Protocol header field. I don't.\nSorry, I don't mean that it's the only thing that makes it work, merely that if we're going to talk about how to analyze it, this is something we should say.\nNAME can you clarify your intent?
Your issue title starts with \"Add to JS\" but I suspect you didn't mean that we should modify the JavaScript specification. Would a note next to where the \"Capsule-Protocol\" header is defined resolve this?\nAaargh. I meant \"add to the security considerations\".\nThanks, that makes more sense :) Adding something to security considerations sounds good to me, I'll add text to the . Note that this was filed on the wrong repository, is the droid you're looking for.\nSo this assumption is useful in that it helps us avoid an analysis. Though my sense is that that analysis isn't especially difficult. The header field exists to enable datagram forwarding behaviour in an intermediary. We only have to show that there is no harm inflicted if a client engages that behaviour for requests, under the constraint that the client is otherwise authorized to make them from a web context. The risk comes if this forwarding behaviour somehow generates different semantics as a result. As the same client also provides the stream of data that will be forwarded, it is trivial to see that they are already in a position to engage whatever semantics they choose. The only scenario that might benefit an attacker is where the elision of datagrams (or things that look like them) might be used to evade security protections that scan the stream of data. For an attack to be effective, those same protections would have to be ignorant of datagram forwarding behaviour at the same intermediary. This binding to CONNECT is a useful shortcut that means we don't need to engage with all that stuff, which wouldn't be that easy to write down. (This aversion to engagement is, I believe, a large part of why people want to slap \"Sec-\" on all new headers.)\nMT's framing of this is helpful. Adding some security consideration about what might go wrong for an intermediary seems potentially useful. Spending text on why a browser is special in this regard seems less compelling to me.\nI was about to write a PR for this, but I think we should first . If we decide that capsules are only allowed for HTTP upgrade tokens (and not new methods) then we've restricted ourselves to CONNECT and Upgrade, which are both inaccessible from JavaScript.\nLGTM thanks"} {"_id":"q-en-draft-ietf-masque-h3-datagram-5f2171df0ab9f415d5eada9d0120c4f52c8070a767045312d378961e2f37b8c4","text":"In Section 2 we start with the statement So let's just use that clearer description in the abstract and introduction too."} {"_id":"q-en-draft-ietf-masque-h3-datagram-896c9afe8b4ccf3abbd83a25ea7bfae08beb9a4622a3187d03f534f577fb8e1d","text":"This is a good step but I think it overlooks one of the points that Magnus articulated. If an HTTP extension does what we say here, it might fall foul of intermediaries and protocol version conversion. That's not something we have to solve, but I suspect if we can concisely describe those considerations we would address the issue.\nGood point, added text\nThanks, this is much clearer and is a decent explanation of the issues and motivation behind the SHOULD. I guess a more complete treatment will require much more text.\nThe first SHOULD should be a MUST because there is a clear exception specified but in all other cases this is a MUST. The second SHOULD should really be a MUST. Or why is that a SHOULD? If there is a reason for a SHOULD this needs to be explained, but I think it would break interoperability.\nI personally preferred MUST, however the consensus of the design team was SHOULD, and that was confirmed by the WG.
Let's not reopen that unless we have new information.\nOn , NAME said:\nI think the main motivation was that the MUST was unenforceable, and that there will be intermediaries that explicitly want to block unknown extensions for security reasons.\nWhatever we might prefer, isn't this a case where whatever the specification says, intermediaries will do what they want?\nI agree. That's why in the interest of making progress I'd rather move forward with the design team consensus instead of reopening this topic.\ndesign team outputs don't have any special status here; perhaps the answer is not to say \"SHOULD this\" or \"MUST that\" but to instead articulate the intent behind the design and the potential consequences of not complying with that intent. Perhaps the key reason for mandating forwarding is that the meaning of various protocol elements is only valid in the context of all of the messages that an endpoint sends. That is, you might send a bunch of datagram capsules that have meaning derived from other capsules. Remove the other capsules and you change the meaning of those datagram capsules. An intermediary that adds, modifies, or removes any capsules needs to understand the totality of the effect of their interventions. This is why I remain skeptical about the value of the generic capsule protocol. It only has value if it can be acted upon generically, but it is not obvious that it can be. Put differently, the existence of a generic capsule protocol implies a number of constraints on the operation of extensions that are not obvious and not clearly articulated. NAME might be asking about similar effects in .\nSo let's start by explaining the interoperability issue that is at the core of this: the draft currently has several statements that interact with each other to cause this interoperability issue. Section 3.5 says: 'Note that use of the Capsule Protocol is not required to use HTTP Datagrams. If an HTTP extension that uses HTTP Datagrams is only defined over transports that support QUIC DATAGRAM frames, it might not need a stream encoding.' Section 3.5 also says: 'An intermediary can reencode HTTP Datagrams as it forwards them. In other words, an intermediary MAY send a DATAGRAM capsule to forward an HTTP Datagram which was received in a QUIC DATAGRAM frame, and vice versa.' Thus the following may occur: Endpoint 1 sends an HTTP datagram as an HTTP/3 Datagram over a QUIC datagram. The HTTP intermediary re-encodes this, blind to its context, into a Datagram capsule and forwards it to another HTTP intermediary that forwards it to an HTTP/3 endpoint using the Datagram capsule. As this HTTP datagram usage is one that does not require support of the Datagram capsule, this endpoint is not set up to handle this Datagram capsule and discards it. So this can be fixed in various ways, not only what this issue text suggests: 1) requiring Datagram capsule support for all HTTP datagram handling nodes, in other words changing the first quote; 2) requiring the HTTP intermediary to re-encode any Datagram capsules to HTTP/3 datagrams if the next hop is known to support it, i.e. making additional requirements on how HTTP intermediaries act when forwarding; 3) forbidding blind forwarding of HTTP datagrams in HTTP intermediaries. I think we should settle this part before going back to see how the text Mirja suggested changing matters. I think 1 and 2 are both working solutions, and would like to avoid 3.
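A toy model of the failure scenario laid out above, with invented names, may make the hazard easier to see:

```python
def intermediary_forward(dgram: bytes, next_hop_supports_quic_dgram: bool):
    # The intermediary is allowed to re-encode blindly in either direction.
    if next_hop_supports_quic_dgram:
        return ("quic_datagram", dgram)
    return ("datagram_capsule", dgram)   # re-encoded onto the stream

def endpoint_receive(encoding: str, dgram: bytes, parses_capsules: bool):
    if encoding == "quic_datagram":
        return dgram
    # An endpoint whose extension never needed a stream encoding does not
    # parse the stream as capsules, so the datagram is effectively lost.
    return dgram if parses_capsules else None

enc, d = intermediary_forward(b"payload", next_hop_supports_quic_dgram=False)
assert endpoint_receive(enc, d, parses_capsules=False) is None   # discarded
```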
I think there is also a question on the implication of what it means to support HTTP datagram when one is an HTTP/3 endpoint or intermediary. Who is expected to support the Datagram capsule if the next node is not capable? So I think at the bottom this issue is really the question about the possibility to convert between HTTP/3 Datagram and Datagram capsule. I think we need to settle that question first. It is a question of what requirements one has on endpoints vs intermediaries. Let's start at that end.\nI actually think I prefer your option 3, caveated by saying \"if you're an intermediary and you don't know the upgrade token, don't insert datagram capsules\". What concern do you have about option 3?\nOkay, I am skeptical of 3) given how it may affect deployment of future extensions that require HTTP Datagrams but need nothing more than intermediary forwarding. In that case deployment could be hampered if not all HTTP intermediaries are also HTTP/3 capable. It appears that using the capsule protocol would be easier to enable over a legacy system. That might be a misconception on my part.\nNAME Thank you for clarifying, now I understand where the disconnect is. The draft currently says and you interpreted that as but that's not what we intended, we meant . I've written up to remedy this\nNAME great that we managed to sort out what was the issue. As Lucas noted in the PR, unless all HTTP Datagram implementations support and accept receiving the Datagram capsule, even where not intended by the HTTP extension, transformation by intermediaries unaware of the HTTP extension can fail if it is allowed. I think we need a bit more feedback on where people stand in this aspect.\nThanks NAME If the HTTP extension doesn't use capsules, then the implementation won't parse the data stream as capsules so there is no risk of accidentally receiving capsules. We added text to to clarify the risks of not using the capsule protocol.\nI think this was closed prematurely because the issue originally was about MUST or SHOULD and I don't think we concluded on this. If the argument is that this is not enforceable, I think we should not use normative language at all (and say something like \"intermediaries are expected ...\"). However, using SHOULD where we actually mean MUST weakens the requirement and risks that this is misunderstood to be optional (while I think this is the core of the whole capsule protocol).\nReopening to discuss SHOULD vs MUST.\nI'd consider capsules to be similar to header fields (headers specifically, not trailers). Here's what HTTP says about them in URL >A proxy MUST forward unrecognized header fields unless the field name is listed in the header field () or the proxy is specifically configured to block, or otherwise transform, such fields. Other recipients SHOULD ignore unrecognized header and trailer fields. Adhering to these requirements allows HTTP's functionality to be extended without updating or removing deployed intermediaries. Yeah, in actuality an intermediary can do lots of stuff, and in practice they do. But if a MUST is good enough for HTTP headers, why isn't it here?\nFrom memory it was either NAME or NAME who cared strongly about this not being a MUST.
I'm fine either way as long as we don't drag this one out needlessly.\nTo summarize the discussion on this issue up to this point: we're debating whether to tell intermediaries that they MUST or SHOULD forward unknown capsules unmodified; implementors of intermediaries will not react to either one of these words differently; endpoints have no way of enforcing this MUST; some folks felt very strongly against MUST; and the working group landed on this design and no new information has surfaced since. Based on the points above, there is no reason to hold up document progress based on a single word that will not have any impact. Therefore I'm closing this issue with no action. Hearing no objections, we're declaring consensus. Thanks again to the participants of the Design Team for their hard work in preparing these changes, and to everyone else for offering reviews and feedback! Best, Chris and Eric"} {"_id":"q-en-draft-ietf-masque-h3-datagram-22ea6c58fee4f52524884e083a6fc93c5c0a022c2db30b9d2621e2470dc6e475","text":"As discussed at IETF 110, some MASQUE servers may prefer to avoid sticking out (i.e. they may wish to be indistinguishable from a non-MASQUE-capable HTTP/3 server). The H3_DATAGRAM SETTINGS parameter may stick out. Therefore, we should add a note about this to the Security Considerations section. A simple solution could be to encourage widespread HTTP/3 servers to always send this."} {"_id":"q-en-draft-ietf-masque-h3-datagram-56b6bfbdbcfd784ebc277392e39e6da0aa8596f236e74e8ae81c706455512bbb","text":"This change allows clients to disable contexts for a given stream. This uses a new REGISTER_DATAGRAM_NO_CONTEXT capsule for negotiation. The important property here is that it requires capsule support even when contexts are not required. This is critical to the health of the ecosystem as it will ensure that intermediaries that support datagrams but not capsules cannot be deployed, as that would prevent future extensibility.\nEditorial comments aside, I support the protocol feature of an optional DATAGRAM Context ID. Using a registration capsule, as proposed in this PR, is a very logical way of achieving that. I plan to implement this in my WIP code.\nLooks like there's support for merging this. Some additional editorial work is needed, but we'll do that in followups.\nThis issue assumes that we decide to go with the two-layer design described in . Given that design, some applications might not need the multiplexing provided by context IDs. We have multiple options here: 1) Make context ID mandatory: applications that don't need it waste a byte per datagram. 2) Negotiate the presence of context IDs using an HTTP header: context IDs would still be mandatory to implement on servers because the client might send that header. 3) Have the method determine whether context IDs are present or not: this would prevent extensibility on those methods. 4) Create a REGISTER_NO_CONTEXT message: this assumes we use the Message design from ; we add a register message that means that this stream does not use context IDs; this allows avoiding the byte overhead without sacrificing extensibility.\nMaking the context ID mandatory is simple, although slightly wasteful. For the variations where it is optional, I prefer the one that uses a header. It could work with a variant of where we do Header + Message, and the Header determines whether or not you have contexts, and whether or not they can be dynamic.\nWhat are the benefits of message over header to make it optional?
To me it seems like header is sufficient, because the context IDs are end-to-end, and should not affect intermediaries, making the header a natural place to put it. I think having the context ID be the default behavior and the header is to opt-out is the right decision if we were to allow removing it.\nThis is a bit of a bike-shed, but why would it default to having context-IDs instead of not having them?\nI think the second option \"Negotiate the presence of context IDs using an HTTP header\" is the best personally. In terms of: \"Context IDs would still be mandatory to implement on servers because the client might send that header\", I assume server in this context is the final application server, not a proxy? Even so, I'd argue it's applications which decide if they need this functionality, so if a server only implemented one application and it didn't use Context-IDs, then I don't see why it would need to implement it.\nIan, in your comment, are you talking about an intermediary when you say \"proxy\"? If so, then an intermediary need not implement anything based on the payload of than H3 DATAGRAM, it only has to pass the data along. That may be worth a clarification.\nHaving context ID optional has a nice future-proof property - other uses of datagrams in HTTP/3 can decide if they want the second layer or not. I wouldn't even state it as application needing multiplexing or not. Other applications might find it very useful to include a second ((third, fourth, fifth) layer of frame header. I think its fine for us to state as much in this in the spec, while also defining an explicit second layer for multiplexing.\nYes, by proxy I meant intermediary. And I agree about what intermediaries need to do, I just wanted to ensure I understood what server meant when David used it above. Yup, whenever all this gets written up, we can make it clear that intermediaries can ignore much of this.\nThe method defines the semantics of datagrams for a particular request, thus I think the third option makes most sense. I don't see a point in using explicit negotiation, since in the two cases I can think of the top of the head, the answer is always clear from the method (CONNECT-IP seems to always want context IDs, WebTransport does not ever want context IDs). If some method starts out without context, but wants to add context later, it can add its own negotiation.\nOne set of use cases for which the method approach(3) is best are those where there are multiple context IDs but they're predefined by the method and don't need negotiation. My strawman is WebRTC.\nLooks good overall."} {"_id":"q-en-draft-ietf-mops-streaming-opcons-84ab13ebd4a52d98b59a83b508dcdfbace60b7170c50d9246e182f974d08c2a7","text":"This tries to address both of:\nDiscussion from editor's conference call: Can we provide a reference for useful properties? Network analytics, application analytics, or both? Currently application analytics, which is OK. Is that clear enough? Who is supposed to notice changes from an expected baseline? Is mentioning QLOG helpful? Have we said clearly enough that baselines themselves can fluctuate, before the kinds of path characteristic changes we're talking about kick in?\nI think just \"baseline\" is clear enough to me, in context with this section listing some relevant stats. I guess \"baseline pattern of some relevant statistics\" is maybe more precise, but I'm not sure \"baseline pattern\" really gets there.\nfrom NAME 's review (URL) Section 5.5.3. 
Wide and Rapid Variation in Path Capacity The statement "As many end devices have moved to wireless connectivity for the final hop (Wi-Fi, 5G, or LTE), new problems in bandwidth detection have emerged from radio interference and signal strength effects." could be moved to earlier parts of the document. Quite a few new computers apparently only have Wifi connectivity and no Ethernet port, i.e., Wifi may just be their default. Also, wireless may not only be used on the last hop, for instance in meshed setups. This may make the problem even harder.\nThis comment helpfully points out the changing network architecture (a LOT of wireless), and we should be clearer about what we're talking about in Section 3.2 (our mental model was mostly wired) and in 5.5.3 (where we focused on wireless). This material should go in Section 3.2, and should be referenced in 5.5.3.\nRelated to\nFrom NAME (URL) Section 3.2.1. Recognizing Changes from an Expected Baseline This section apparently assumes relatively static path properties, e.g., fixed network connectivity. It would be more useful to analyze the impact of Wifi and 4G/5G networks in one place.
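As an aside, the "expected baseline" idea discussed above can be sketched in a few lines. This is a toy, not proposed draft text: it tracks a smoothed mean and variance of one application statistic (say, segment download time) and flags sharp deviations; the smoothing factor, threshold, and lack of a warm-up period are arbitrary simplifications.

```python
class BaselineTracker:
    """Toy EWMA baseline for one streaming statistic."""

    def __init__(self, alpha: float = 0.1):
        self.alpha = alpha
        self.mean = None
        self.var = 0.0

    def update(self, sample: float) -> bool:
        """Feed one sample; return True if it deviates sharply from baseline."""
        if self.mean is None:
            self.mean = sample          # first sample seeds the baseline
            return False
        dev = sample - self.mean
        self.mean += self.alpha * dev   # the baseline itself drifts over time
        self.var = (1 - self.alpha) * (self.var + self.alpha * dev * dev)
        # 3-sigma-style rule; single outliers should not page anyone.
        return abs(dev) > 3 * (self.var ** 0.5 + 1e-9)
```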
Some of this follows later in sections 3.2.1 and Yet, the split between these three sections is unclear. Having a discussion about today's Internet path characteristics and the dynamics at one place could be more useful. The text could also better distinguish between what matters to endpoints and considerations relevant only for middleboxes (such as passive monitors). Section 3.3 Path Requirements The statement "to find the bandwidth requirements for a router on the delivery path" may assume that the bottleneck on the path will be routers. Yet, on a path the bottlenecks could also be a link layer device (e.g., an Ethernet Switch, a Wifi access point, etc.). RFC 6077 (specifically Section 3.1.3) also explains that issue. A better wording may be "to find the bandwidth requirements for a delivery path". Section 3.6. Unpredictable Usage Profiles / Section 3.7. Extremely Unpredictable Usage Profiles I am not fully convinced by the distinction between "unpredictable" and "extremely unpredictable". To me, these two sections could be merged and maybe also be shortened. More specifically, Section 3.7 lists a lot of statistics from the past. What is IMHO a bit missing in Section 3.7 are actual operational considerations. For instance, are there any lessons learnt for the future? Note that Section 3.6 has a reasonable conclusion at the end. Section 4. Latency Considerations and Section 4.1. Ultra Low-Latency I am surprised by the definition "ultra low-latency (less than 1 second)", as well as some of the other numbers. For other real-time communication use cases, "ultra low-latency" would probably imply a latency requirement of the order of one millisecond. For instance, isochronous traffic for motion control in industrial networks may require a latency of 1 ms, and such "ultra-low latency" requirements are discussed elsewhere, e.g., in the DetNet WG. The terminology in this section should be better explained to readers dealing with networked systems with much harder latency requirements. And a reference should be added for these definitions in the context of video streaming. Section 5.5.2 Head-of-Line Blocking If Head-of-Line Blocking is indeed a relevant operational problem, it would be useful to add a corresponding reference (e.g., with measurements). Section 5.5.3. Wide and Rapid Variation in Path Capacity The statement "As many end devices have moved to wireless connectivity for the final hop (Wi-Fi, 5G, or LTE), new problems in bandwidth detection have emerged from radio interference and signal strength effects." could be moved to earlier parts of the document. Quite a few new computers apparently only have Wifi connectivity and no Ethernet port, i.e., Wifi may just be their default. Also, wireless may not only be used on the last hop, for instance in meshed setups. This may make the problem even harder. Section 6. Evolution of Transport Protocols and Transport Protocol Behaviors I really wonder whether UDP vs. TCP vs. QUIC is actually the relevant distinction. What may actually matter are the transport services provided by the protocol stack (e.g., RFC 8095). I fully agree with a related comment in the INTDIR review. Section 6.1. UDP and Its Behavior UDP is also used for encrypted tunnels (OpenVPN, Wireguard, etc.). Does encrypted tunneling really have no operational impact on streaming apps other than circuit breakers? (I am thinking about 4K streaming over OpenVPN and the like.) Section 6.2. TCP and Its Behavior This text should IMHO be presented and be discussed in TCPM.
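On the Section 5.5.2 request for evidence: short of measurements, a toy model at least shows the mechanism. The function below (made-up numbers, illustration only) holds every packet behind a single loss until the retransmission lands, which is what one ordered byte stream does and what independent QUIC streams avoid.

```python
def inorder_delivery_times(arrivals, lost, rtt):
    """Toy head-of-line-blocking model: packet `lost` is retransmitted and
    arrives one RTT later; in-order delivery stalls everything behind it."""
    retx_time = arrivals[lost] + rtt
    delivered = []
    for i, t in enumerate(arrivals):
        if i < lost:
            delivered.append(t)
        elif i == lost:
            delivered.append(retx_time)
        else:
            delivered.append(max(t, retx_time))  # blocked behind the hole
    return delivered

# Five packets 10 ms apart, packet 1 lost, 100 ms RTT:
print(inorder_delivery_times([0, 10, 20, 30, 40], lost=1, rtt=100))
# -> [0, 110, 110, 110, 110]: everything after the loss waits for it.
```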
Personally, I am not convinced that this document is a good place for discussing the long history of TCP congestion control. If the objective of the document is to provide a tutorial, IMHO it would be more useful to briefly explain the state-of-the-art in year 2022. Some further specific comments on TCP congestion control: It is surprising that RFC 5681 is not referenced. 793bis has normative statements regarding congestion control, and most stacks are compatible with what is written in 793bis. Several operating systems use CUBIC by default and support standard Reno. Another general good reference for TCP standards is RFC 7414. Does ECN, DCTCP, etc. really not matter at all, for instance, if a data center hosts video playout servers? Section 6.3. QUIC and Its Behavior As already noted, the discussion about head-of-line-blocking would really benefit from backing by a reference. Section 7. Streaming Encrypted Media As far as I know, streaming users sometimes use encrypted tunnels such as OpenVPN or WireGuard (or IPsec) to access video content. I may miss something, but it is unclear to me how that fits into the categories presented in Section"} {"_id":"q-en-draft-ietf-mops-streaming-opcons-b082ecc0ce0e2fe0bcfc25a977ff2b521ff38a90dc8a6a166cb6eb2b4e4126e1","text":"This is based on the post-IETF 107 MOPS meeting where we talked about this. I chased some more references.\nWe should try to improve this reference - it's a long page, additions are at the top, so we have to either scroll for a long time or search for the date to get the relevant part of the page. Maybe reach out to ATT to get fragment tags that work?"} {"_id":"q-en-draft-ietf-mops-streaming-opcons-56998967af0798ec0d9556c5e297bdc196ea219d6922432d1c594ac7e30b868b","text":"Ali, this PR looks nice. Two meta-issues (which we can talk about whenever). I changed my references in the draft to a more complete format (using ins: can result in odd formatting in the References section). We don't have to do that for all the references in -04, but that would be a good thing to do in an editorial pass. I see that you're making some terminology changes (from "delivery" to "streaming", stuff like that). I agree with the changes you made in this PR, but we may want to make sure that they are made globally, for consistency.\nI was not sure about the reference format, normally I just use the casual format: Authors, "Title," Journal/Conference, Date. I saw the missing author names and realized "ins" would do the trick. By no means am I tied to that format - I don't even know what "ins" stands for here :)\nFrom URL : "as in the author list, ins is an abbreviation for "initials/surname";" I tend to think an editorial pass on references would be better as a separate commit, where they need cleaning. Anything that somehow made it in and breaks the "make" needs fixing, but others maybe later?\nI suggest a definition along these lines: Streaming is transmission of continuous content from a server to a client and its simultaneous consumption by the client. ("simultaneous" is the key here) This has two implications: 1) The server transmission rate (loosely or tightly) matches the client consumption rate. That is, buffer overrun or underrun is neither desirable nor acceptable. 2) The client consumption rate is also limited by real-time constraints as opposed to just bandwidth availability. That is, the client cannot fetch content that is not yet available.\n+1, this sgtm.
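The two implications in this proposed definition can be sketched with a toy playout model (rates and durations are invented for illustration): the client can never fetch past the live edge, and a download rate below the playback rate underruns the buffer.

```python
def playout_sim(download_rate, seconds, produce_rate=1.0, play_rate=1.0):
    """Toy model: media is produced, fetched, and consumed simultaneously.
    All rates are media-seconds per wall-clock second."""
    produced = downloaded = played = 0.0
    for t in range(seconds):
        produced += produce_rate
        # Cannot fetch content that is not available yet (live edge):
        downloaded = min(produced, downloaded + download_rate)
        want = played + play_rate
        if downloaded >= want:
            played = want                       # smooth playback
        else:
            print(f"t={t}: buffer underrun")    # consumption stalls
    return downloaded - played                  # buffer occupancy

playout_sim(download_rate=0.9, seconds=5)       # fetch rate < play rate
```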
But this probably implies a refactor of the intro section and the abstract, to better focus on the relevant use case.\nI can take a crack at this.\nAli has text for this, will create a PR for the issue."} {"_id":"q-en-draft-ietf-mops-streaming-opcons-ba041c9484371dbad8ddb2942d5cad49bb5710c8783a678252aba545b4543550","text":"We've been getting help from people, but the ACKs section hasn't changed much since -03 was submitted in November 2020.\nI found like two names that we had missed, and added them to the ACKs section. I think we can merge this any time."} {"_id":"q-en-draft-ietf-ppm-dap-ca2f45ac35d14ad725035c7650b5d91ded1b88e28e2177ad509322b7d33eb746","text":"Section 3.1 says to include the PPM task ID in a field on the problem details JSON object, but the task ID may be any 32 byte value, and may not be representable as a Unicode string. I propose that we base64-encode the task ID here to resolve this."} {"_id":"q-en-draft-ietf-ppm-dap-8f4aeea0cf523e4f6d9b4ae29d482793a6496b2e4ffb60bda97a9e7e99f4c4e8","text":"Would be helpful to readers if this reference was fixed in the Editor's Copy: [I-D.draft-cfrg-patton-vdaf] \" BROKEN REFERENCE \".\nMaybe change each reference to ? That way it'll render like in the text, which reads a bit nicer."} {"_id":"q-en-draft-ietf-ppm-dap-f95573723f24e74e04e03115fc42113f2c9bce52c70474109fb422b118a66ea6","text":"The only language I can find in the document that prescribes serialization is in Section 1.2 (please point out if I missed anything): We intend this statement to be prescriptive, i.e., encoding/decoding of messages is as defined in RFC8446. Another valid interpretation might be \"we define the fields of each message following the conventions of RFC8446, but leaving encoding/decoding unspecified\". I think it would help to refine this statement a bit. WDYTA:\nSGTM FWIW (this was our interpretation, and I think it makes sense to clarify)\nYeah, this seems like a fine clarification. I don't think it was under specified before, but more information for those unfamiliar with TLS-style encoding doesn't hurt."} {"_id":"q-en-draft-ietf-ppm-dap-e5c5d5d558d17abf7e36b3429c3ade2942f14578d77fba590d2bc2bc435f818b","text":"This change is primarily editorial and includes the following changes: Replace references to \"Prio\" with references to a generic VDAF. (This section was written long ago when we had Prio in mind.) Elaborate on known issues for collect requests. Discuss Sybil attacks, including enumerating the different types ().\nFrom dkg in Vienna:\nIt seems like there are a bunch of assumptions about Sybil attacks generally that are not well articulated yet. This would certainly help.\nThe list of attacks is really a list of things we ought to fix in the protocol, so I would not even bother including them in the security considerations. If we want to use this as an opportunity to identify open issues that need to be addressed before we consider the protocol secure, I would simply list the open issues and keep discussion in GitHub. (That is, the text here seems redundant with the issue )"} {"_id":"q-en-draft-ietf-ppm-dap-dbd68199431972862df22187188bb885ebd6ad2218d66e60a00d5e20717c5160","text":"Specifically: Specify use of the \"one-shot\" HPKE APIs defined in RFC9180, replacing equivalent use of the \"multi-shot\" APIs. (This change is intended to be a clarification, not a functional change.) Move the task_id parameter from the \"info\" parameter to the \"aad\" parameter. This is intended to protect against key-commitment-related attacks[1]. 
(This is a functional change.) Clarify that the URL field is an encapsulated key, rather than an encryption context. (This is a clarification, not a functional change.) [1] See discussion in URL\nPer URL section 8.1: PPM effectively uses the single-shot APIs; we should consider refactoring PPM's usage of HPKE to use only one of application info & AAD. Following the advice of the HPKE RFC, we would want to use application info. This would permit slightly simpler implementations. I am not sure how much it would simplify the HPKE computations.\nShould we then formally use the single-shot HPKE APIs from ?\nThat would be my recommendation (though, somewhat confusingly given the advice quoted above, those APIs still allow specifying both application info & AAD). Implementations can & do already use the single-shot APIs, so I strongly suspect specifying the single-shot APIs will be feasible. & .]\n+1 to using single-shot APIs, though I'm not sure I agree that using either info or aad (but not both) is simpler. What's the reasoning?\nThe reasoning is: a) it follows the advice of the HPKE spec (quoted above) b) AFAICT, in a single-shot setting, application info & AAD serve exactly the same purpose, so using both is arbitrary/confusing/a potential source of mistakes. (in a multi-shot setting, application info is bound to all encryptions/decryptions done by the same context; AAD is bound to a single encryption/decryption)\nI definitely agree that application info should include information allowing domain separation, for the reasons you mention. (Indeed, the current PPM spec includes the strings "ppm input share" or "ppm aggregate share" in its application infos, presumably for this purpose.) But I think that's orthogonal to the question of whether we should also use AAD, since the application info could also include information other than the "domain separation parameter". Picking one example to make things concrete, a client upload request sets its application info to and its AAD to . But it could just as easily set the application info to (i.e. the concatenation of the two previous strings) and not use AAD. (Or alternatively, it could place that string in the AAD and not use application info.) My argument is that it's simpler to use only one of the parameters, since in the single-shot setting that PPM is using HPKE, the two parameters serve exactly the same purpose (i.e. including additionally-authenticated data).
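To make the "same purpose" claim inspectable, here is a toy stand-in, emphatically not real HPKE (a real implementation would call RFC 9180's single-shot Seal). It mimics how a one-shot seal binds bytes placed in info via key derivation and bytes placed in aad via the AEAD tag; either way, decryption fails unless both sides agree on the context bytes. All names and field layouts are invented.

```python
import hashlib, hmac

def toy_seal(shared_secret, info, aad, pt):
    """Toy one-shot seal: NOT secure, NOT HPKE; shows binding behavior only.
    The 'AEAD key' depends on (secret, info); the 'tag' also covers aad."""
    aead_key = hashlib.sha256(shared_secret + info).digest()
    tag = hmac.new(aead_key, aad + pt, hashlib.sha256).digest()
    ct = bytes(p ^ k for p, k in zip(pt, aead_key))   # toy, single-use only
    return ct, tag

context = b"ppm input share" + b"\x01" * 32   # illustrative context fields
secret = b"stand-in for the KEM shared secret"

# Option A: context bound via the key schedule ("info"):
ct_a, tag_a = toy_seal(secret, info=context, aad=b"", pt=b"report share")
# Option B: the same bytes bound via the AEAD ("aad"):
ct_b, tag_b = toy_seal(secret, info=b"", aad=context, pt=b"report share")
```

The difference is not what is authenticated but how it is bound (key derivation versus AEAD tag), which is exactly where the key-commitment concern raised below comes in.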
\nAs I see it, our goal is to make sure that decryption only succeeds if the sender and receiver agree on the "context", i.e., the task ID, sender/receiver role, nonce, extensions, and so on. This way a MITM cannot force the aggregator to interpret an input share incorrectly. Does this sound reasonable to you? From the point of view of this threat model, I don't think it's immediately clear that sticking the context in or is equivalent. For example, if we stuff the context in , then it influences the derivation of the AEAD key; but depending on the AEAD, it might be feasible for an attacker to find two keys that decrypt the same ciphertext. (See URL) More analysis will be needed to say for sure whether this weakness in the AEAD amounts to an attack against our protocol. In the meantime, sticking the "context" in seems like a more conservative choice to me.\nGiven your concern, your approach SGTM. It sounds like this may be a concern worth raising with the folks working on the HPKE spec, given the advice to "use the Setup info parameter for specifying auxiliary authenticated information", but I'm not going to chase this down currently.\nSo I think there are two tasks for this issue: (1) move the task ID from to ; (2) use single-shot APIs for HPKE. Anything else?\nYep, I think that's it.\n(reopening for now as the fix PR has been reverted temporarily)\nPerfect, great work!"} {"_id":"q-en-draft-ietf-ppm-dap-bf3d867901f989e47d01ed511f428c6184a75ba025340fb86bcc207970f4a759","text":"AggregateShareReq should have an aggregation parameter field instead of aggregation job ID.\nThanks NAME for spotting this!"} {"_id":"q-en-draft-ietf-ppm-dap-83bd53a7e711cf013bb0fed7293d4b3b02a1673d48c04143724dae511a663d61","text":"Align with the latest VDAF draft. This requires two minor protocol changes: 02 added a value called the "public_share" to the VDAF syntax. This is to support Poplar1 (and likely other schemes that need a similar extractability property). To accommodate this, the public_share field has been added to the Report and ReportShare structures and appended to the AAD for input share encryption. (This may end up being useful for privacy.) 02 added the report count to the aggregate result computation. This is already passed to the Collector in the CollectResp, so we just need to update the aggregate result computation in the draft to match.\nRebased and pushed a minor update: The public share in Report and ReportShare now both have the same length prefix.\nRebased.\nThis will be VDAF-02 or 03, depending on when we need to cut DAP-02.\nVDAF-03 is out, I will send a PR by end of week. URL"} {"_id":"q-en-draft-ietf-ppm-dap-a2a31b7e86e24a541077eeda329646d39df92450907730252caf10a5c7985ad2","text":"And while we're at it, update an obsolete reference to Oblivious HTTP, which has long since been adopted by its WG. VDAF-05 isn't expected until Monday 3/13, but we can get this change ready to go.\nDAP-04 will need some routine changes: [x] Update version tags for HPKE to () [x] Bump VDAF-03 to 05 () [x] Update change log () [x] Run spell check ()"} {"_id":"q-en-draft-ietf-ppm-dap-7a94ceb5c0c454ed38aa194c6d705b425048c2c54e30437ce13eed2443445547","text":"Partially addresses . (We also need to decide if/how to allow a particular PA protocol to use multiple helpers for privacy.) Right now, multiple helpers are used for redundancy in case one of the helpers drops out. This removes this mechanism from the core protocol.\nSquashed and rebased.\nThat's a good idea,\nHmm...
Do we need to decide. Per our discussion on Wednesday, if the clients learn this out of band then we won't need this field at all because clients will get it separately and helpers don't need to know what the other helpers are, right?\nMy takeaway is that the parameters must contain all input needed to run the protocol, including the helpers. And all of this is configured out of band. You could punt everything except for the cryptographic goop out of the parameters structure, forcing clients to know the leader and helper and get their public keys out of band as well, which would be fine. It's just different. I think both work just fine, but I lean towards having params specify everything. It just seems simpler to me. That said... making public key discovery part of the client upload protocol here is more complex, I think. So, .\nSquashed.\nLGTM. Multiple helpers \"for privacy\" seems like it's inherently a per-protocol thing (Prio supports it, Hits does not). If and when we decide to address that, should we move this to the protocol-specific slot in PAParam?"} {"_id":"q-en-draft-ietf-rats-reference-interaction-models-0ff26db6348df57abf3959a40daad47ffa987b048e6f1d3a5d0f989c157bf2f3","text":"Based on URL In Section 1, 3 Disambiguation About “these types of co-located environments”, The previous sentences are about the Verifier and Attester don’t need to be remote. So “these” is weird here, because here is about the attesting environment and target environment.\nSo, can this issue then be closed? Hi Henk, I've reviewed the reference interaction models I-D recently, I hope the comments below can help improve the draft before the IETF110 draft submission cut-off. Section 3 Disambiguation Comment > This section is talking about the disambiguation of terminology, so I suggest making it a sub-section of Section 2 Terminology. Examples of these types of co-located environments include: a Trusted Execution Environment (TEE), Baseboard Management Controllers (BMCs), as well as other physical or logical protected/isolated/shielded Computing Environments (e.g. embedded Secure Elements (eSE) or Trusted Platform Modules (TPM)). Comment > About \"these types of co-located environments\", The previous sentences are about the Verifier and Attester don't need to be remote. So \"these\" is weird here, because here is about the attesting environment and target environment. Section 5 Direct Anonymous Attestation Comment > I think it's better to move this section to the bottom of the draft. DAA doesn't introduce a new information elements, and only augments the scope/definition of Attester Identity and Authentication Secret IDs, describing it after the introduction of all the 3 basic interaction models would be better. Putting DAA in the middle makes me feel the basic interaction models rely upon DAA, but actually it's not. This document extends the duties of the Endorser role as defined by the RATS architecture with respect to the provision of these Attester Identity documents to Attesters. The existing duties of the Endorser role and the duties of a DAA Issuer are quite similar as illustrated in the following subsections. Comment > Without DAA, I think the Endorser also needs to provision the Attester Identity to Attesters. And as I understand, the DAA Issuer is a supply chain entity before the Attester being shipped, so it is the Endorser when DAA is used, right? If yes, then in the next sentence, the comparison between Endorser and DAA Issuer doesn't make sense to me. 
Section 5.1 Endorsers Comment > Does this section have difference with the definition of Endorser in architecture draft? If not, is it necessary to keep this section? Comment > This section only describes the Layered Attestation, but the scope of Endorsement is more than what's described here. Endorsement indicates the Attester's various capabilities such as Claims collection and Evidence signing. For other situations than Layered Attestation, the Endorser and Endorsement are also needed. So why only mention Layered Attestation here? I don't see any special relationship between DAA and Layered Attestation. Section 5.2 Endorsers for Direct Anonymous Attestation In order to enable the use of DAA, an Endorser role takes on the duties of a DAA Issuer in addition to its already defined duties. DAA Issuers offer zero-knowledge proofs based on public key certificates used for a group of Attesters [DAA]. Effectively, these certificates share the semantics of Endorsements, with the following exceptions: Comment > In the first sentence, I suggest saying that \"a DAA Issuer takes on the role of an Endorser\". o The associated private keys are used by the DAA Issuer to provide an Attester with a credential that it can use to convince the Verifier that its Evidence is valid. To keep their anonymity the Attester randomizes this credential each time that it is used. Comment > How to understand \"Evidence is valid\"? Does it mean the Evidence is sent from an authentic Attester and not tampered during the conveyance? Or does it mean the Evidence comes from a RoT and is trustable (although the Evidence can diverge from the Reference Values)? Comment > When saying \"Attester randomizes this credential\", how many credentials does an Attester have? 1) Can a DAA Issuer have multiple key pairs and use them for one Attester? 2) Can a DAA Issuer use one key pair to generate multiple credentials for one Attester? o A credential is conveyed from an Endorser to an Attester in combination with the conveyance of the public key certificates from Endorser to Verifier. Comment > Is there another way to convey the public key certificate, for example, can the public key certificates be conveyed from Endorser to Attester first and then from Attester to Verifier? The zero-knowledge proofs required cannot be created by an Attester alone - like the Endorsements of RoTs - and have to be created by a trustable third entity - like an Endorser. Due to that semantic overlap, the Endorser role is augmented via the definition of DAA duties as defined below. This augmentation enables the Endorser to convey trustable third party statements both to Verifier roles and Attester roles. Comment > For \"the definition of DAA duties as defined below\", what is the definition of DAA duties? Comment > In the last sentence, the Endorsement with its original definition is the third party statement for Verifier and Attester, isn't it? So, \"augmentation\" isn't the correct expression. Section 6 Normative Prerequisites Attester Identity: The provenance of Evidence with respect to a distinguishable Attesting Environment MUST be correct and unambiguous. Comment > The Attester Identity is to identify which Attester the Evidence comes from. But if the Attester has multiple Attesting Environments, what should be the Attester Identity? Comment > The TPM's AIK certificate is one kind of Attester Identity, right? Attestation Evidence Authenticity: Attestation Evidence MUST be correct and authentic. Comment > Is it appropriate to say \"correct Evidence\"? 
If saying so, I think it means that the Evidence satisfies the Attestation Policy for Evidence. Authentication Secret: An Authentication Secret MUST be available exclusively to an Attester's Attesting Environment. The Attester MUST protect Claims with that Authentication Secret, thereby proving the authenticity of the Claims included in Evidence. The Authentication Secret MUST be established before RATS can take place. Comment > Does the Authentication Secret represent the identity of the Attesting Environment? Comment > How to understand the Authentication Secret, and is it necessary all the time? For example, in our implementation, during the router booting up, the BIOS measures the BootLoader and records the measured hash value into the TPM, and the BootLoader measures the OS Kernel and records the measured hash value into the TPM as well. From the Layered Attestation perspective, I think (BIOS + TPM) is the Attesting Environment and (BootLoader + TPM) is also the Attesting Environment. But the Claims (measured hash values) aren't protected separately, and they finally becomes the Evidence (TPM Quote) and it's only protected by the TPM's AIK. So, what is the Authentication Secret? Section 7 Generic Information Elements Attester Identity ('attesterIdentity'): mandatory A statement about a distinguishable Attester made by an Endorser without accompanying evidence about its validity - used as proof of identity. Comment > Previous section says \"Attester Identity\" is about \"a distinguishable Attesting Environment\", and here says it's about \"a distinguishable Attester\". Comment > How to understand \"without accompanying evidence about its validity\"? Attester Identity ('attesterIdentity'): mandatory In DAA, the Attester's identity is not revealed to the verifier. The Attester is issued with a credential by the Endorser that is randomized and then used to anonymously confirm the validity of their evidence. The evidence is verified using the Endorser's public key. Comment > I think here means the DAA credential represents the Attester Identity. Comment > There is ambiguity of \"that is randomized\", does it mean randomized Endorser or randomized credential? Comment > For \"confirm the validity of their evidence\", what does \"their\" refer to? And what does \"the validity of evidence\" mean? Authentication Secret IDs ('authSecID'): mandatory A statement representing an identifier list that MUST be associated with corresponding Authentication Secrets used to protect Evidence. Comment > Previous section says \"Authentication Secret\" is used to protect Claims, but here says it's used to protect Evidence. Comment > As I understand, if Authentication Secret represents the identity of Attesting Environment, then it's not mandatory, at least in our implementation. Authentication Secret IDs ('authSecID'): mandatory In DAA, Authentication Secret IDs are represented by the Endorser (DAA issuer)'s public key that MUST be used to create DAA credentials for the corresponding Authentication Secrets used to protect Evidence. In DAA, an Authentication Secret ID does not identify a unique Attesting Environment but associated with a group of Attesting Environments. This is because an Attesting Environment should not be distinguishable and the DAA credential which represents the Attesting Environment is randomised each time it used. Comment > In my understanding, here says that the DAA credential identities the Attesting Environment. 
Compared with the description in the \"Attester Identity\" part, what does the DAA credential represent actually? Reference Claims ('refClaims') mandatory Reference Claims are components of Reference Values as defined in [I-D.ietf-rats-architecture]. [Editor's Note: Definition might become obsolete, if replaced by Reference Values. Is there a difference between Claims and Values here? Analogously, why is not named Reference Claims in the RATS arch?] Comment > I suggest using Reference Values to keep consistent with the RATS arch. Reference Claims are used to appraise the Claims received from an Attester via appraisal by direct comparison. Comment > \"Direct comparison\" isn't the correct expression, the comparison can have other rules. Claim Selection ('claimSelection'): optional A statement that represents a (sub-)set of Claims that can be created by an Attester. Claim Selections can act as filters that can specify the exact set of Claims to be included in Evidence. An Attester MAY decide whether or not to provide all Claims as requested via a Claim Selection. Comment > The \"all Claims\" may be ambiguous, I'd like to double check, does it refer to all Claims that the Attester can create or refer to all Claims requested in the Claim Selection? Section 8.1 Challenge/Response Remote Attestation Comment > In the sequence diagram, some elements in the bracket are not defined in the \"Information Elements\" section, such as targetEnvironment, collectedClaims, eventLog. Should these elements be defined? Section 9 Additional Application-Specific Requirements Depending on the use cases covered, there can be additional requirements. An exemplary subset is illustrated in this section. Comment > Here starts to talk about \"additional requirements\", but I wonder there is no other places in this draft talking about requirement, so what are the basic requirements? Regards & Thanks! Wei Pan"} {"_id":"q-en-draft-ietf-rats-reference-interaction-models-04f270d0a723445d17e5fc1b773981256795f3bbef3b718b2c11d358558264ba","text":"Based on URL I suggest using Reference Values to keep consistent with the RATS arch.\n+1 for \"Reference Values\"\non it Hi Henk, I've reviewed the reference interaction models I-D recently, I hope the comments below can help improve the draft before the IETF110 draft submission cut-off. Section 3 Disambiguation Comment > This section is talking about the disambiguation of terminology, so I suggest making it a sub-section of Section 2 Terminology. Examples of these types of co-located environments include: a Trusted Execution Environment (TEE), Baseboard Management Controllers (BMCs), as well as other physical or logical protected/isolated/shielded Computing Environments (e.g. embedded Secure Elements (eSE) or Trusted Platform Modules (TPM)). Comment > About \"these types of co-located environments\", The previous sentences are about the Verifier and Attester don't need to be remote. So \"these\" is weird here, because here is about the attesting environment and target environment. Section 5 Direct Anonymous Attestation Comment > I think it's better to move this section to the bottom of the draft. DAA doesn't introduce a new information elements, and only augments the scope/definition of Attester Identity and Authentication Secret IDs, describing it after the introduction of all the 3 basic interaction models would be better. Putting DAA in the middle makes me feel the basic interaction models rely upon DAA, but actually it's not. 
This document extends the duties of the Endorser role as defined by the RATS architecture with respect to the provision of these Attester Identity documents to Attesters. The existing duties of the Endorser role and the duties of a DAA Issuer are quite similar as illustrated in the following subsections. Comment > Without DAA, I think the Endorser also needs to provision the Attester Identity to Attesters. And as I understand, the DAA Issuer is a supply chain entity before the Attester being shipped, so it is the Endorser when DAA is used, right? If yes, then in the next sentence, the comparison between Endorser and DAA Issuer doesn't make sense to me. Section 5.1 Endorsers Comment > Does this section have difference with the definition of Endorser in architecture draft? If not, is it necessary to keep this section? Comment > This section only describes the Layered Attestation, but the scope of Endorsement is more than what's described here. Endorsement indicates the Attester's various capabilities such as Claims collection and Evidence signing. For other situations than Layered Attestation, the Endorser and Endorsement are also needed. So why only mention Layered Attestation here? I don't see any special relationship between DAA and Layered Attestation. Section 5.2 Endorsers for Direct Anonymous Attestation In order to enable the use of DAA, an Endorser role takes on the duties of a DAA Issuer in addition to its already defined duties. DAA Issuers offer zero-knowledge proofs based on public key certificates used for a group of Attesters [DAA]. Effectively, these certificates share the semantics of Endorsements, with the following exceptions: Comment > In the first sentence, I suggest saying that \"a DAA Issuer takes on the role of an Endorser\". o The associated private keys are used by the DAA Issuer to provide an Attester with a credential that it can use to convince the Verifier that its Evidence is valid. To keep their anonymity the Attester randomizes this credential each time that it is used. Comment > How to understand \"Evidence is valid\"? Does it mean the Evidence is sent from an authentic Attester and not tampered during the conveyance? Or does it mean the Evidence comes from a RoT and is trustable (although the Evidence can diverge from the Reference Values)? Comment > When saying \"Attester randomizes this credential\", how many credentials does an Attester have? 1) Can a DAA Issuer have multiple key pairs and use them for one Attester? 2) Can a DAA Issuer use one key pair to generate multiple credentials for one Attester? o A credential is conveyed from an Endorser to an Attester in combination with the conveyance of the public key certificates from Endorser to Verifier. Comment > Is there another way to convey the public key certificate, for example, can the public key certificates be conveyed from Endorser to Attester first and then from Attester to Verifier? The zero-knowledge proofs required cannot be created by an Attester alone - like the Endorsements of RoTs - and have to be created by a trustable third entity - like an Endorser. Due to that semantic overlap, the Endorser role is augmented via the definition of DAA duties as defined below. This augmentation enables the Endorser to convey trustable third party statements both to Verifier roles and Attester roles. Comment > For \"the definition of DAA duties as defined below\", what is the definition of DAA duties? 
Comment > In the last sentence, the Endorsement with its original definition is the third party statement for Verifier and Attester, isn't it? So, \"augmentation\" isn't the correct expression. Section 6 Normative Prerequisites Attester Identity: The provenance of Evidence with respect to a distinguishable Attesting Environment MUST be correct and unambiguous. Comment > The Attester Identity is to identify which Attester the Evidence comes from. But if the Attester has multiple Attesting Environments, what should be the Attester Identity? Comment > The TPM's AIK certificate is one kind of Attester Identity, right? Attestation Evidence Authenticity: Attestation Evidence MUST be correct and authentic. Comment > Is it appropriate to say \"correct Evidence\"? If saying so, I think it means that the Evidence satisfies the Attestation Policy for Evidence. Authentication Secret: An Authentication Secret MUST be available exclusively to an Attester's Attesting Environment. The Attester MUST protect Claims with that Authentication Secret, thereby proving the authenticity of the Claims included in Evidence. The Authentication Secret MUST be established before RATS can take place. Comment > Does the Authentication Secret represent the identity of the Attesting Environment? Comment > How to understand the Authentication Secret, and is it necessary all the time? For example, in our implementation, during the router booting up, the BIOS measures the BootLoader and records the measured hash value into the TPM, and the BootLoader measures the OS Kernel and records the measured hash value into the TPM as well. From the Layered Attestation perspective, I think (BIOS + TPM) is the Attesting Environment and (BootLoader + TPM) is also the Attesting Environment. But the Claims (measured hash values) aren't protected separately, and they finally becomes the Evidence (TPM Quote) and it's only protected by the TPM's AIK. So, what is the Authentication Secret? Section 7 Generic Information Elements Attester Identity ('attesterIdentity'): mandatory A statement about a distinguishable Attester made by an Endorser without accompanying evidence about its validity - used as proof of identity. Comment > Previous section says \"Attester Identity\" is about \"a distinguishable Attesting Environment\", and here says it's about \"a distinguishable Attester\". Comment > How to understand \"without accompanying evidence about its validity\"? Attester Identity ('attesterIdentity'): mandatory In DAA, the Attester's identity is not revealed to the verifier. The Attester is issued with a credential by the Endorser that is randomized and then used to anonymously confirm the validity of their evidence. The evidence is verified using the Endorser's public key. Comment > I think here means the DAA credential represents the Attester Identity. Comment > There is ambiguity of \"that is randomized\", does it mean randomized Endorser or randomized credential? Comment > For \"confirm the validity of their evidence\", what does \"their\" refer to? And what does \"the validity of evidence\" mean? Authentication Secret IDs ('authSecID'): mandatory A statement representing an identifier list that MUST be associated with corresponding Authentication Secrets used to protect Evidence. Comment > Previous section says \"Authentication Secret\" is used to protect Claims, but here says it's used to protect Evidence. 
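As an aside, the router example in this review can be made concrete with a sketch of the TPM-style extend operation it alludes to; the component names are invented for illustration.

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style PCR extend: the register absorbs measurements in order,
    so the final value commits to the entire boot chain."""
    return hashlib.sha256(pcr + measurement).digest()

pcr = bytes(32)                                  # SHA-256 PCR starts at zeros
for component in (b"bootloader-image", b"os-kernel-image"):
    pcr = extend(pcr, hashlib.sha256(component).digest())

# A Verifier holding Reference Values for each stage recomputes this chain;
# the Quote over `pcr`, signed with the TPM's AIK, ties the Claims together.
print(pcr.hex())
```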
Comment > As I understand, if Authentication Secret represents the identity of Attesting Environment, then it's not mandatory, at least in our implementation. Authentication Secret IDs ('authSecID'): mandatory In DAA, Authentication Secret IDs are represented by the Endorser (DAA issuer)'s public key that MUST be used to create DAA credentials for the corresponding Authentication Secrets used to protect Evidence. In DAA, an Authentication Secret ID does not identify a unique Attesting Environment but associated with a group of Attesting Environments. This is because an Attesting Environment should not be distinguishable and the DAA credential which represents the Attesting Environment is randomised each time it used. Comment > In my understanding, here says that the DAA credential identities the Attesting Environment. Compared with the description in the \"Attester Identity\" part, what does the DAA credential represent actually? Reference Claims ('refClaims') mandatory Reference Claims are components of Reference Values as defined in [I-D.ietf-rats-architecture]. [Editor's Note: Definition might become obsolete, if replaced by Reference Values. Is there a difference between Claims and Values here? Analogously, why is not named Reference Claims in the RATS arch?] Comment > I suggest using Reference Values to keep consistent with the RATS arch. Reference Claims are used to appraise the Claims received from an Attester via appraisal by direct comparison. Comment > \"Direct comparison\" isn't the correct expression, the comparison can have other rules. Claim Selection ('claimSelection'): optional A statement that represents a (sub-)set of Claims that can be created by an Attester. Claim Selections can act as filters that can specify the exact set of Claims to be included in Evidence. An Attester MAY decide whether or not to provide all Claims as requested via a Claim Selection. Comment > The \"all Claims\" may be ambiguous, I'd like to double check, does it refer to all Claims that the Attester can create or refer to all Claims requested in the Claim Selection? Section 8.1 Challenge/Response Remote Attestation Comment > In the sequence diagram, some elements in the bracket are not defined in the \"Information Elements\" section, such as targetEnvironment, collectedClaims, eventLog. Should these elements be defined? Section 9 Additional Application-Specific Requirements Depending on the use cases covered, there can be additional requirements. An exemplary subset is illustrated in this section. Comment > Here starts to talk about \"additional requirements\", but I wonder there is no other places in this draft talking about requirement, so what are the basic requirements? Regards & Thanks! Wei Pan"} {"_id":"q-en-draft-ietf-sedate-datetime-extended-ce3bfecaba71d5f0762e3f4cc4e47298853989406f0c6ceccad95f6f9ac15533","text":"URL supports offset time zones, e.g. . For backwards compatibility, this RFC should support them too. This PR also adds clarifying text to several time-zone-related definitions.\nMakes sense. Should I start one? Yep, we added these in Temporal primarily for compatibility with prior art in URL URL In Java (and currently in Temporal) a time zone ID can either be an IANA name or an offset. The ID is an index into a set of rules ( in Java, in Temporal) for converting between exact time and local time. If the ID is an offset, there's only one trivial rule because the offset is always the same. 
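The offset-versus-name distinction described above can be sketched as follows; the regexes are simplifications of the actual grammar, the optional "!" critical flag is captured but ignored, and conflict checking against the RFC 3339 offset is omitted. (ZoneInfo needs tzdata available.)

```python
import re
from datetime import timedelta, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

SUFFIX = re.compile(r"\[(!?)([^\]]+)\]$")     # [Europe/Paris], [+08:45], [!UTC]
OFFSET = re.compile(r"^[+-]\d{2}:\d{2}$")

def resolve_zone(stamp: str):
    """Toy resolver: an offset suffix yields one trivial fixed-offset rule;
    an IANA name yields the full rule set (DST transitions and all)."""
    m = SUFFIX.search(stamp)
    if not m:
        return None
    name = m.group(2)
    if OFFSET.match(name):
        sign = 1 if name[0] == "+" else -1
        hours, minutes = int(name[1:3]), int(name[4:6])
        return timezone(sign * timedelta(hours=hours, minutes=minutes))
    return ZoneInfo(name)

print(resolve_zone("2022-07-08T00:14:07+08:45[+08:45]"))       # fixed offset
print(resolve_zone("2022-07-08T00:14:07+02:00[Europe/Paris]")) # IANA rules
```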
I don't have a strong opinion about the name of offset time zones.\nNAME - Let me know if you have responses to the questions in my comment above, or how you'd like me to change this PR. Thanks!\nI think we still need the mailing list discussion first. If we get to a certain state of clarity by March 18, we might be able to post yet another update of the draft right before IETF week.\nSounds good. I sent mail to the mailing list. Waiting for responses.\nI plan to merge this within a couple of hours, so we have a merged draft to look at in Monday's WG meeting. Since the discussion hasn't concluded yet, I'll edit the merge so it is useful for those discussions; we can edit some more after Monday's meeting.\nSounds good to me. Based on your comments here and on the mailing list, I think we're in agreement on the important points.\nHi NAME - is the merged draft available? If not, I'm happy to make the edits you and NAME suggested here in this PR, but didn't want to step on changes you're making in parallel. Let me know how I can help. Thanks!\nI merged the PR and made some edits in main. I included you in the contributor list; please advise whether that is OK, whether the information is right, and whether you want to include any more contact/affiliation information.\nGreat! Your edits look good to me. I may follow up after the meeting with minor text clarification PRs but nothing important. Info looks right. I'm a volunteer not working on behalf of my employer so no need for additional affiliation. Thanks!\nThe current ABNF for the time zone extension does not support offset time zones, e.g. . These zones are supported in URL, which is the prior art driving the bracketed time zone extension format. Therefore, I'm assuming that it makes sense for the standard to also allow offsets in brackets. Current ABNF: URL Suggested ABNF for, where is imported from RFC 3339: Example of Java usage:\nPlease excuse my ignorance: What is the meaning of ? We'd need some text explaining the semantics.\nThis format asserts that derived timestamps (e.g. add one day) should also use the offset. It's mostly for backwards compatibility with URL It's not recommended for normal use. Here's the explanatory text I added in : >\nI assume that conflicts between the timestamp offset and a bracketed offset (e.g. ) should be handled the same as other conflicts, per discussion in URL"} {"_id":"q-en-draft-ietf-sedate-datetime-extended-485369b363d9e8c4a6e966d2a62e8431c3e5251fcb20e70e6ff8ccb17e9b8e94","text":"We need a name that people will use until they can say RFC9999 timestamps (and maybe even when they can).\n"Extended Timestamps" or "Extended Internet Timestamps" ?\nRFC 3339 says "Internet Date/Time Format". That is what sedate says at the moment as well, but that can't stay the same. So I'm considering moving to: Internet Date/Time Format Extended The alternative .... Extended Internet Date/Time Format sounds like a date/time format for the Extended Internet (EI is actually in use as "Extensible Internet", which this shouldn't be confused with). I'd rather call it Fritz, but I haven't found a way to retronym this...\nThanks"} {"_id":"q-en-draft-ietf-taps-transport-security-0ea7f8e388599df4ab6fd61190b8c9ecacf5e4c2087f429852fb0d0061717662","text":"We got another Secdir review: Reviewer: Paul Wouters Review result: Has Nits I have reviewed this document as part of the security directorate's ongoing effort to review all IETF documents being processed by the IESG.
These comments were written primarily for the benefit of the security area directors. Document editors and WG chairs should treat these comments just like any other last call comments. The summary of the review is Has Nits Compared to my last review (-09) a lot of text has been removed, reducing the technical details and giving a briefer (but arguably cleaner) overview. I do still have a personal preference for dropping CurveCP and MinimalT as I don't really think these are anything but abandoned research items. I have never seen or heard of these being deployed. I would be more tempted to list openconnect (draft-mavrogiannopoulos-openconnect) which actually does see quite some deployment. If the intent is really to show different kinds of APIs, then there are other esoteric examples that could be included, such as Off-The-Record (OTR) or Signal, which are mostly encrypted chat programs, but with the ability to generate session keys for encrypted bulk transport (e.g. for video/audio or file transfer). Section 5.1 "Session Cache Management (CM):" which does not include IKEv2/IPsec, even though it does have session resumption (RFC 5723). I guess this is because the section limits itself to "the application" that can restart the session. But for IKEv2/IPsec the same could be said. An application that sends a packet after a long idle period triggers a kernel ACQUIRE, which triggers an IKEv2 session resumption. WireGuard does not have this capability as there are no API/hooks for packet-triggered tunnel events (AFAIK). "Pre-Shared Key Import (PSKI):" lists WireGuard, but AFAIK it does not support PSK-based authentication - only public-key-based authentication. While I understand the IKEv2 entry (PSKs are used for authentication of peers), I am not sure the ESP entry should be here. ESP does not "authenticate peers", unless you count "being able to decrypt and authenticate packets" as an instance of "authenticating a peer". Section 5.2 This states "This can call into the application to offload validation.". This "can" is only not supported by WireGuard, as all of this is happening inside the kernel. Maybe a note could be useful here? "Source Address Validation" - It is a little unclear why TCP-based protocols are not listed here. They (implicitly) do source address validation. Perhaps the introduction in this paragraph can state this more explicitly. E.g. "for those protocols that do not use TCP and therefore do not have built-in source address validation ......."\nOn which protocols to include, MinimalT and CurveCP have different feature sets and interfaces exposed, which I think warrants their inclusion. Things like Signal and OTR indeed have seen very wide deployment, though they're asynchronous, application-layer protocols akin to MLS, which I think is out of scope. I'd also not like to add draft-mavrogiannopoulos-openconnect, if for no other reason than to draw a line in the sand. (We run the risk of indefinitely delaying this while adding more protocols.) On Session Cache Management, this is an explicit interface, and the example Paul mentions, while correct, does not exercise such an explicit interface. On WireGuard and PSK import, WireGuard uses the IKpsk2 Noise pattern (see URL, and URL), in which a responder uses its PSK to additionally authenticate a connection. On ESP and PSK import, we note that importing a PSK can be used for "encrypting (and authenticating) communication with a peer," which remains true for IPsec.
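On the source-address-validation nit: for protocols that lack TCP's implicit validation, the usual tool is a stateless cookie in the spirit of DTLS HelloVerifyRequest or QUIC Retry. A hedged sketch follows; secret rotation, encoding, and the expiry window are all illustrative choices.

```python
import hashlib, hmac, os, time

SECRET = os.urandom(32)                  # server-side secret, rotated in practice

def make_cookie(addr: str) -> bytes:
    """Echoing this cookie proves the peer can receive at `addr`,
    which throttles spoofed-source amplification."""
    epoch = int(time.time()) // 60       # coarse expiry window
    msg = addr.encode() + epoch.to_bytes(8, "big")
    return hmac.new(SECRET, msg, hashlib.sha256).digest()

def check_cookie(addr: str, cookie: bytes) -> bool:
    now = int(time.time()) // 60
    for epoch in (now, now - 1):         # tolerate one window of skew
        msg = addr.encode() + epoch.to_bytes(8, "big")
        if hmac.compare_digest(cookie, hmac.new(SECRET, msg, hashlib.sha256).digest()):
            return True
    return False
```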
(Right NAME I'll address the issues in 5.2 in a PR!"} {"_id":"q-en-draft-ietf-taps-transport-security-edb604cbbde350b25887a1f93423ad48d4525c248774cb5df0930fd160521ab6","text":"WIP addressing . I don't want to pull this as-is: I just want comments on the format and usefulness of this. This closes the circle created by the list of protocol features in section 3, and indicates where we probably want more consistency and completeness in that list so the derivation of common optional features is more obvious.\nStill a WIP. I have a bunch of comments as a result of doing this exercise: Feature availability for some protocol/feature combinations is not clear from the descriptions in the doc. Wireguard is IP encapsulation within UDP, but this is not stated anywhere in the doc. The set of features we list here should be called out or evident from the protocol features subsections under each protocol description. I think there's still some confusion in sections around symmetric authentication vs. signatures, i.e., AEAD-style authentication (\"packets belong to this session\") vs. PKI endpoint authentication (\"this connection belongs to a peer I trust and is not being MITMed\"). This may be a comment on the unfortunateness of the literature using \"authenticated\" for both.\n\nFeature availability for some protocol/feature combinations is not clear from the descriptions in the doc. Can you give examples? Can you please open an issue to address this? See the other open pull request. :-) Where in particular are these things confused?\nNAME ping!\nSorry, am away for the holiday weekend. The ones I have listed with a \"?\", for instance. I guessed on a few of the others, as well (e.g., configuration extensions for MinimalT). The right answer is for me to go back through this table and enumerate all the areas where the text isn't clear, and fix them. I'll file an issue. Will do. I'm not quite sure what I was thinking here. After reading it through again, I think I'm okay with how this is presented.\nNAME can you please update this PR?\nA few more questions: Is there any existing client puzzle/proof of work standardized for TLS or DTLS, or is URL as far as something's gotten? Same goes for IPsec; I can find no standardized version of any of these things. Should algorithms like TLS, DTLS, etc. that support native authentication but also can export a cryptographic channel binding be listed as supporting authentication delegation? My instinct is yes.\nNAME None that I'm aware of! Yes, I think that's correct.\nNAME Will you be able to complete this PR this week?\nYes, though I could use some guidance on the ?'s in the table, if you have enough context to resolve them. Otherwise, I will do some digging.\nNAME IKEv2+ESP/AD: Yes, via EAP(-TLS). IKEv2+ESP/AFN: Yes, via vendor and configuration payloads. SRTP+DTLS/CM: No. ZRTP: Omit, since it's a variant? WireGuard/CM: No, connections are bound to IP addresses. WireGuard/SC: No, session resumption is not supported. WireGuard/LHP: Yes, for transport packets only. MinimalT/AD: No. MinimalT/AFN: No. MinimalT/LHP: Yes. CurveCP/AD: No.\nRFR.\nShip it.\nIt's probably easier and more consumable to encode the matrix of protocols providing a particular feature as a table.\nI'm assigning this to you, Kyle. 
Will you have time to take a stab at it?\nYep, I'll work on it this afternoon."} {"_id":"q-en-draft-ietf-tls-ctls-baf94badb4711777a705aae2874ed5be318dd50cd20221d6084e0dfdb7edecc9","text":"everything else is self-delimiting and is just a source of confusion (5) Replace PSK TODO with an open issue marker (6) use varints instead of remainder of message. Got a bit carried away"} {"_id":"q-en-draft-ietf-tls-esni-0b41aae9c3e52c10d4130fb5fbae3591bacaaabdea798402c7b5796aba133940","text":"Since plaintext OCSP checks leak the server name, ESNI should probably enforce OCSP stapling (similar to ) or some other form of encrypted OCSP check. Ignoring OCSP status checks entirely makes it hard to revoke a certificate, even if it's only valid for a few months.\nNAME what behavior do you expect here? Would clients fail hard if servers don't send stapled responses? It's good to view ESNI as part of the leaky boat problem. It plugs one hole, but not all of them. Implementations probably should do something about cleartext OCSP if they want all holes plugged.\nThank you for the changes. LGTM."} {"_id":"q-en-draft-ietf-tls-esni-433564eb5fd093a5ea42c2b0210108a85ec53006646801203af261c5bb505b94","text":"I got a little tripped up on this point! Maybe you have a better way to explain it.\nThanks, NAME"} {"_id":"q-en-draft-ietf-tls-esni-cb6f3665c13a427d50195b815ecb45a81055e3de22ef25d4d42ac0257d4e8cb8","text":"It is an error for the client to offer ECH after HRR but not before. Likewise, it is an error for the client to offer ECH before HRR but not after. This change ensures the server aborts in this case.\nSuppose we hit one of the messy HRR cases (URL), where the HRR is valid for the outer ClientHello, but not the inner ClientHello. The client then cannot construct a valid inner ClientHello. How would that square with this text? Are you envisioning the client just generates a random ECH extension, or...?\nI don't think this is a new problem. I see the PR as clarifying existing behavior. Can you please propose text for the HRR nightmare in ? :-)\nAh, you're right. Sorry, I misread this PR as introducing this requirement. In principle, we only need the server to require ECH in CH2 if it decided to accept ECH on CH1. That would allow the client to not send ECH the second time, but maybe we don't want to do that, depending on what we do for ... Do we know what we want to do for ? Simplest seems to be saying clients MUST align them. (Though how does that work with later HRR-sensitive extensions? Are funny ECH-adding proxies a thing?) Or maybe it's just a SHOULD? Or maybe we come up with some other protocol trick so ECH acceptance is visible at HRR (which would probably reopen the don't-stick-out can of worms yet again). Of these, only the MUST avoids collision with this PR's (existing) client requirement.\nYeah, that's my preference. This logic is already complicated enough. The primary pain point I see is enumerating (and prescribing) what is an HRR-sensitive field.\nSame, with that pain point being the big question in my head. Whether an HRR is valid for a ClientHello isn't an implementation decision, so in theory it's enumerable. But more extensions may come in later. And in the other direction, even a very strong SHOULD means we need to pick a client behavior for the weird case and allow for it in server behavior.\nRegardless of where we land on , this PR seems appropriate to me.\nIn a normal ECHO handshake with HelloRetryRequest there are 4 ClientHellos sent: ClientHelloOuter1, ClientHelloInner1, ClientHelloOuter2, ClientHelloInner2.
Encryption of ClientHelloInner2 is bound to ech_hrr_key so the server must have the ECHOConfig used for ClientHelloInner1 in order to process ECHO the second time. And by the time the client receives encrypted handshake traffic it is not useful to signal if ClientHelloInner1 was successfully decrypted; what matters is to know if ECHO is globally successful or not. So one aspect that I find not explained enough is whether the server replying with HRR should use ClientHelloInner1 in the transcript, or instead always take ClientHelloOuter1 (data on the wire) regardless of the first decryption status. The ECHO status accept/reject can be conveyed based only on the inner/outer distinction for ClientHello2. This point impacts the number of transcripts the client may have to try, i.e. whether it can be sure the combination (ClientHelloInner1, ClientHelloOuter2) is never valid.\nHi NAME, the spec has changed quite a bit since this question was first posed. In particular, we've landed , which changes the server behavior so that it provides an explicit signal of ECH acceptance in its SH. Hence, no more trial decryption. I'm wondering if your question still applies? I believe what to do in case of HRR is well-specified at this point. There is no such ECH acceptance signal provided in the HRR, so to determine if ECH was accepted, the client must wait until the SH. Until that point, the client needs to compute two transcripts in parallel: one assuming the ClientHelloOuter was used, and another assuming the ClientHelloInner was used.\nI don't think resolves this. If the client has different HRR-relevant preferences (key share and cipher suite) between CHInner and CHOuter, I believe HRR is still a mess because the HRR message doesn't contain a signal for whether ECH was accepted. Though I suppose it is a little less complex now that you don't need to go as far as trial decrypting. Different HRR-relevant preferences mean the HRR may be good for one ClientHello but not the other, yet the client needs to manage that state and defer the actual error-handling to when ECH acceptance is known. An option where the HRR message included an ECH acceptance signal would close this, but then we have sticking-out woes. The draft touches on this, but somewhat vaguely. URL We've been intending that our implementation would always match inner and outer preferences for HRR-sensitive fields, to avoid this case. But having this odd complexity cliff hidden in the spec is poor. I'd advocate we either require or at least strongly recommend clients do this.\nsuggests adding the following text to the HRR section: It is an error for the client to offer ECH before the HelloRetryRequest but not after. Likewise, it is an error for the client to offer ECH after the HelloRetryRequest but not before. If the client-facing server accepts ECH for the first ClientHello but not the second, or it accepts ECH for the second ClientHello but not the first, then it MUST abort the handshake with an \"illegal_parameter\" alert. I think this solves the problem, at least partially. It ensures that the HRR path aligns with ECH acceptance/rejection.\nThat doesn't address the problem. This is about complexity for the client, not the server. Yes, the server needs to enforce consistency between the two modes, but it is easy for it to do this.
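To make \"easy for it to do this\" concrete, here is a minimal sketch of the server-side bookkeeping (my illustration with hypothetical names, not the draft's normative algorithm; the abort is assumed to map to an \"illegal_parameter\" alert):

```python
class AlertError(Exception):
    # Stand-in for sending a TLS alert and aborting the handshake.
    pass

class EchHrrServerState:
    # Minimal sketch: remember whether ECH was offered in the first
    # ClientHello and whether we sent an HRR, then require the second
    # ClientHello to match.
    def __init__(self):
        self.sent_hrr = False
        self.ech_in_first_ch = None  # unset until the first CH arrives

    def on_client_hello(self, offers_ech):
        if self.ech_in_first_ch is None:
            self.ech_in_first_ch = offers_ech
        elif self.sent_hrr and offers_ech != self.ech_in_first_ch:
            # ECH offered on one side of the HRR but not the other.
            raise AlertError('illegal_parameter')

    def on_send_hello_retry_request(self):
        self.sent_hrr = True
```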
Imagine you're the client and your CHOuter has: key_shares = {X25519}, supported_groups = {X25519, P-256, P-384}, cipher_suites = {TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384}. And your CHInner has: key_shares = {P-256}, supported_groups = {X25519, P-256, P-384}, cipher_suites = {TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256}. Now consider how you have to respond to each of these HRRs. Remember that at the time you process the HRR, you don't know if ECH was accepted. 1. cipher_suite = TLS_AES_256_GCM_SHA384; key_share = P-384. 2. cipher_suite = TLS_AES_128_CCM_SHA256; key_share = P-521. 3. cipher_suite = TLS_CHACHA20_POLY1305_SHA256; key_share = P-256. 4. cipher_suite = TLS_AES_128_GCM_SHA256; key_share = P-256. 5. cipher_suite = TLS_CHACHA20_POLY1305_SHA256; key_share = X25519. 1 is valid for both ClientHellos. This is the easy case and you can compute new CHInner and CHOuter values. 2 is valid for neither ClientHello. If you can detect this, you can error immediately. 3 is valid for neither ClientHello, but for messy reasons. The key share is valid for CHOuter but the cipher suite is not. The cipher suite is valid for CHInner but the key share is not. (Recall that a key share in HRR is allowed if it is in CH.supported_groups and not CH.key_shares. If it were in CH.key_shares, you shouldn't have sent HRR.) 4 is valid for only CHOuter, so you can't error. It is not possible to compute a CHInner, so I guess you drop the extension? But you still need to make a note to raise an error later if you see a SH which claims it did accept ECH. 5 is valid for only CHInner, so you can't error. It is not possible to compute a CHOuter, but you have to, so I guess you send some garbage? But you still need to make a note to raise an error if you see a SH which claims it rejected ECH. This is a huge mess. The client can avoid this mess by always matching key_shares, supported_groups, and cipher_suites between CHInner and CHOuter. That means it can process HRR without knowing which CH to use. It is not obvious in the spec that you should do this, and the spec allows you to not do this.\nI see, I wasn't thinking about client complexity. Thanks for laying out the various edge cases. I would favor being stricter about how the key_shares, supported_groups, and cipher_suites are chosen. At the very least, we should guide implementations towards ensuring they are the same in the CHInner and CHOuter. An alternative would be to add an ECH signal to HRR. We can't do the same trick we did with the URL, as this would stick out. But maybe there's another way? I thought of using HRR.session_id, but that could get messy.\nI think there is still ambiguity in the design about what CH is used by the client-facing server in its own transcript when forwarding CHInner to the backend server. Or else the combination (ECH accepted before HRR, ECH rejected after HRR) can be explicitly forbidden, but I don't see why/how.\nIncidentally, this is addressed by URL This adds the following to client-facing server behavior: It is an error for the client to offer ECH before the HelloRetryRequest but not after. Likewise, it is an error for the client to offer ECH after the HelloRetryRequest but not before. If either of these conditions occurs, then the client-facing server MUST abort the handshake with an \"illegal_parameter\" alert. It's not hard for the client-facing server to enforce this. It just needs to remember if an HRR was triggered and whether ECH was offered in the first CH.\nOk that will do it.
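To see the five cases enumerated earlier mechanically: the per-ClientHello validity test from the parenthetical is small, so a client with mismatched inner/outer preferences ends up running it twice and may get different answers. A rough sketch (my illustration under the RFC 8446 rules quoted above, ignoring cookies and other HRR contents):

```python
def hrr_valid_for(ch, hrr_cipher_suite, hrr_key_share_group):
    # ch is a dict with 'key_shares', 'supported_groups', 'cipher_suites'.
    # The selected cipher suite must have been offered, and a requested key
    # share group must be in supported_groups but not already in key_shares.
    cipher_ok = hrr_cipher_suite in ch['cipher_suites']
    group_ok = (hrr_key_share_group in ch['supported_groups']
                and hrr_key_share_group not in ch['key_shares'])
    return cipher_ok and group_ok

outer = {'key_shares': {'X25519'},
         'supported_groups': {'X25519', 'P-256', 'P-384'},
         'cipher_suites': {'TLS_AES_128_GCM_SHA256', 'TLS_AES_256_GCM_SHA384'}}
inner = {'key_shares': {'P-256'},
         'supported_groups': {'X25519', 'P-256', 'P-384'},
         'cipher_suites': {'TLS_AES_256_GCM_SHA384', 'TLS_CHACHA20_POLY1305_SHA256'}}

# Case 1 is valid for both; case 4 is valid only for the outer ClientHello.
assert hrr_valid_for(outer, 'TLS_AES_256_GCM_SHA384', 'P-384')
assert hrr_valid_for(inner, 'TLS_AES_256_GCM_SHA384', 'P-384')
assert hrr_valid_for(outer, 'TLS_AES_128_GCM_SHA256', 'P-256')
assert not hrr_valid_for(inner, 'TLS_AES_128_GCM_SHA256', 'P-256')
```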
Assuming \"ECH not offered\" is same as \"ECH not accepted\", due to greasing.\nWith merged, I wonder if the ESNIKeys struct should just be renamed to something not specific to ESNI, but something more like \"generic TLS config in DNS\" so that it can be more cleanly reused by unrelated features. \"ServerConfiguration\" comes to mind from the early TLS 1.3 days.\nIf we were to re-use these keys for other purposes, that seems fine. However, I'm not sure we want to do that. Semi-static keys, for example, should ideally be separate.\nThe keys yes, but I imagine the extensions field can hold \"arbitrary\" data, so while the keys would be ESNI-specific, you could also include extensions that have nothing to do with ESNI.\nThinking about this some more, this would also require changing the \"_esni.\" prefix to something else. In any case I can see how this might be out of scope for this spec. In the end it's a matter of deciding whether future TLS extensions that need similar DNS records, should be able to reuse the same structure (so no additional TXT record with its own prefix would be required), or if they should define their own structure (and have a separate TXT record). I'm happy to close this if it goes too much out of scope.\nRight, I think it's verging on being out of scope. What do others think? NAME NAME\nMy preference goes to keeping the name as-is, considering the fact that the role of ESNIKeys is (at least for the moment) to negotiate properties between the client and the fronting server (not the hidden server), and that the only property we need to negotiate between the two is the information necessary for ESNI protection. It is true that the DNS record can covey properties related to the hidden server (as you know, I've argued for using it to carry the server certificate chain). But IMO that's a change of concept, and I prefer keeping the concept simple for the time being.\nOk."} {"_id":"q-en-draft-ietf-tls-esni-79cbf1589463581d3c421eaf77480a48703da5b86509f985299adbb7c1a811af","text":"The intent was that the server always sends the retry keys (it cannot distinguish stale keys from GREASE when the client is actually trying to connect to the public name), while the client just ignores them, but I did a poor job of describing this, so add a key sentence. This addresses .\n(EDITED.) To recap the discussion on , your point is that the client-facing server needn't attempt to distinguish GREASE from non-GREASE based on outer SNI, because if the outer SNI != public name, then the client knows to ignore the retry config and continue as usual. I think this is weird because if rejection was the intent, then the client is supposed to abort with \"ech_required\". So the server only knows that non-GREASE was intended once it receives this signal.\nI mean, yeah, that's how this was designed and is what we merged to the spec. :-) This formulation, in particular, does not break if the client tries to connect to the public name (it's a perfectly valid URL) without knowing the ECH config (maybe it the HTTPS record didn't get through Do53) but still sends GREASE (because the intent is that every client can GREASE). It seems to me breaks this. Do you have an alternate formulation which keeps this working? Connecting to the public name is especially important if the public name is a non-throwaway domain. The only requirement on the public name is that it's some name which knows about and serves the client-facing server's config. (I.e. 
this particular backend server is the same entity as the client-facing server.) doesn't relax this requirement, since the public name continues to be the entity trusted to replace, and possibly disable, ECH on config mismatch.\nI don't see this as an issue. If the client sends GREASE, it doesn't expect to use ECH, so it won't abort. The text here only requires that the client check the extension if present, and ignore it if absent.\nI don't have an alternative formulation. I was envisioning the public name only being verified on the ECH rejection path, or when neither real nor dummy ECH was offered to the client-facing server. I'm fine with rejection not being known until \"ech_required\". I support this PR as the resolution to .\nNAME and I were sitting down and thinking and we realized the following scenario can happen: URL is on URL, which supports ECH; URL has an HTTPS record, an A record, and an AAAA record pointing at the IP for URL; a user of a browser's latest release that supports ECH and greases the extension is using a resolver that doesn't resolve HTTPS records. This results in an ECH request being made with URL, and a rejection of the ECH. So now the question is what the client should do and what servers should expect? Either try again, with the outer SNI matching the one in the new config, or use the connection authoritatively for URL since the connection has already revealed the issue.\nThe intention was the latter, but the text could probably be clearer. I don't think there's much point in trying again since the name has already been revealed. See URL\nNAME is this still an issue now that is merged?\nI think the client behavior remains equally unspecified. In particular the lengthy argument NAME and I had about server enforcement of the client behavior I think isn't changed.\nCare to file a PR, presuming the latter behavior is intended?\nA proposed resolution is here: .\nNAME a proposed fix has been merged. See .\nI think this text works."} {"_id":"q-en-draft-ietf-tls-esni-0ba7625b962dc6393ecda3646532580fd0dd09c4d578c44f4a01100d0ecd4115","text":"We use a mix of sentence case and title case in the document. The RFC style guide says to use title case. This PR aligns with that. URL Additionally, the ESNI criteria document has changed slightly since the comparison against criteria section was written. I've aligned the titles with the final wording in RFC8744. RFC8744 has also merged \"Proper security context\" and \"Split server spoofing\", so I've done the same here. \"Support Multiple Protocols\" also now talks about transport protocols and ALPN, so I've added a few words about that."} {"_id":"q-en-draft-ietf-tls-esni-d96ea19296b687edc4602ff88ed856d6c729434e5ecfe072569ea1b69f83788e","text":"The spec describes an optional trial decryption mode, but most of the text contradicts it. This PR fixes the text up and tries to unify their processing models. In doing so, it flips the error handling from decrypt_error to fallback to be consistent. This, combined with the ClientHelloOuterAAD change, plugs an active sticking-out attack for free. It also makes easier because GREASE config_id collisions with real config_ids are now harmless. Along the way, NAME and NAME observed that we're not authenticating the ECHConfig at all. (It used to be authenticated by way of the config_id, which was a digest.) Instead, just pass the ECHConfig into the info field of the HPKE context. They also noticed the server rule about PSKs on ClientHelloOuter didn't work with GREASE.
Instead, we can simply remove it.\nThis matches the acceptance signal length, and we've already accepted that probability of collision."} {"_id":"q-en-draft-ietf-tls-esni-8824ed4577342af9ee46b6aa8efcf4649bf095eb199dada318802f133a0db4af","text":"[Edit: deleted \"Fixes\" line for ]\nFrom , we also talked about IP addresses and such. Do we still want to do that? Whether the client validation is SHOULD or MUST (TBH I kinda prefer MUST, given the implications on private services... if it's a SHOULD, we should talk about why so implementations can make an informed decision), we do also need to say what values are valid for the server.\nOK, switched to a MUST. Added IP addresses as well. I'm calling out to for IPv4/IPv6 parsing. I'd like to avoid any ambiguities that might creep in if I were allowed to define the grammar, like allowing octal numbers in an IPv4 address, e.g. != . For hostname parsing, I'm borrowing from so we can parse IDNs.\nI think that is a mistake.\nWhy is that?\nI think IP addresses are a mistake because they can introduce parsing ambiguities (as pointed out above), and sometimes those cause problems (URL). Also, SNI is really limited to hostname (as NAME has pointed out before, referencing AGL's post), and having the two mismatch in their capabilities bothers me.\nAgreed that the capability mismatch between SNI and ECHConfig.public_name is odd. However, there is already a capability mismatch between ECHConfig.public_name and names allowable on server certificates. At least in theory, the server's certificate might not have a hostname. That being said, I don't have a strong opinion either way. If we do decide to disallow IP addresses, I think we'd still want the client to look for them so it can reject those ECHConfigs that use IP addresses. This is about stopping wacky values from bubbling up to the application.\nThe client sending the ECH or the server receiving it? Current TLS stacks don't seem to check, on either side. (At least OpenSSL and its major forks)\nI was referring to the client, but yeah, scratch that idea. If public_name can only be a hostname, the client should only validate that it looks like a hostname.\nI don't think the capability mismatch between SNI and ECHConfig.public_name is fatal. We can move ECHConfig.public_name up an abstraction level. I think we should do this whether we allow IPs or not. The public name isn't \"put this in ClientHelloOuter.server_name\" but \"the name you are trying to connect to in the outer handshake\". In particular, this encompasses both ClientHelloOuter.server_name and certificate verification. To fill in ClientHelloOuter.server_name, you use the rules in RFC6066: server_name is the name you are trying to connect to, provided it's a DNS name. Under this model, it is perfectly coherent for \"the name you are trying to connect to\" to be an IP address, if we want to do that. If we wish to do that then, yeah, we'll need to spell out the syntax here. That syntax clearly does already exist (URLs, etc.), but not in TLS. I don't particularly care whether we do it, though. How about this: Right now, IP addresses are implicitly not okay by way of CHOuter.server_name = ECHConfig.public_name. So let's not introduce them in this PR and merely iron out the DNS name validation. This PR adds to, but does not . We then rework the text to redefine public_name as above. This is a prerequisite to allowing IPs, but I think it's worth doing either way. We decide the IP question (interim?) and with the result.
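The octal ambiguity mentioned above is easy to demonstrate. A small illustration with CPython (assuming a recent version; socket.inet_aton follows the permissive legacy BSD rules, while the ipaddress module rejects leading zeros):

```python
import socket
import ipaddress

# Legacy BSD parsing: a leading 0 makes a component octal, so
# '0177.0.0.1' silently means 127.0.0.1.
print(socket.inet_aton('0177.0.0.1').hex())  # -> 7f000001

# Stricter modern parsing (Python >= 3.9.5) refuses the same string.
try:
    ipaddress.IPv4Address('0177.0.0.1')
except ValueError as err:
    print('rejected:', err)
```

Two widely used parsers disagreeing on the same string is precisely the kind of ambiguity the comment warns about.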
(Sorry, I think some of this confusion is my fault. I filed two related, but separate, discussion points in , which was confusing. My comment in URL was mostly in reaction to \"; that we shouldn't close it out without resolving both points.)\nI think I'd mostly be against supporting IP addresses in ECHConfig.public_name. I suspect including an IP address in URL would break some things. The comment below is from OpenSSL's ssl/statem/extensions_srvr.c: /* Although the intent was for server_name to be extensible, RFC 4366 was not clear about it; and so OpenSSL among other implementations, always and only allows a 'host_name' name type. RFC 6066 corrected the mistake but adding new name types is nevertheless no longer feasible, so act as if no other SNI types can exist, to simplify parsing. Also note that the RFC permits only one SNI value per type, i.e., we can only have a single hostname. */ Separately, we could land ourselves with yet more confusion related to the address to which one ought to send a ClientHello (the address in the ECHConfig or in the URL or the A/AAAA of one of the names involved, and which to prefer if some things have DNSSEC etc etc.). Cheers, S.\nNAME Right, I think it's unambiguous, whatever we do with IP addresses, they won't go in ClientHelloOuter.server_name. Any use of this, if we want it, would be for public name verification, not the server_name value. See my comment above with how to think about public_name.\nSGTM, folks. Just dropped the parts about IP addresses.\nLooks like I was a little overzealous removing the IP address text. We still need to check that public_name is not an IPv4 address because dotted-decimal IPv4 addresses would slip through the cracks of hostname validation. This is bad because a malicious/misconfigured DNS would cause the ECH client to break 's rule on the contents of \"server_name\": (Re-added this IPv4 check in the last commit.)\nI would rather make \"validate the host name\" be a SHOULD and remove all these requirements.\nECH implementations not validating that string risks surprising behavior for a lot of applications, and potentially more attack surface to private services than before. Have you looked at the discussion in issue , in particular: Sadly, the thing that makes that work (logic earlier in the application) doesn't apply here. See URL\nCan you explain your concerns in more detail? I think the application should be responsible for validation.\nThe application should never see the value of ECHConfig.public_name, right?\nSee , specifically URL and URL Well, there are several things going on here: First, different systems may divide the \"TLS\" part and the \"application\" part differently. The spec mostly captures behavior of the overall client system, because that's all we can usefully talk about. (For example, ECH specifies a retry flow that involves making a new connection. At the level of many TLS libraries, including OpenSSL + derivatives, the caller makes connections. So instead the TLS library can expose APIs and document the caller's responsibilities as part of enabling ECH. Other interfaces may create their own connections and handle this internally.) So whether it's the application or TLS, at the implementation level, isn't a hard deciding factor. Second, with that said, yes, I expect that the majority of TLS application interfaces will treat the ECHConfigList as an opaque blob.
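For instance, the TLS half of such a stack might walk the opaque blob roughly like this (a sketch only; it assumes the draft's ECHConfig layout of a 16-bit version, a 16-bit length, and opaque contents, with 0xfe0d standing in for whatever version the stack implements):

```python
SUPPORTED_ECH_VERSIONS = {0xfe0d}  # assumption: the draft version we implement

def select_ech_config(config_list):
    # ECHConfigList is a 16-bit length-prefixed vector of ECHConfig values;
    # configs with versions we don't understand are skipped, not errors.
    total = int.from_bytes(config_list[:2], 'big')
    buf, off = config_list[2:2 + total], 0
    while off + 4 <= len(buf):
        version = int.from_bytes(buf[off:off + 2], 'big')
        length = int.from_bytes(buf[off + 2:off + 4], 'big')
        if version in SUPPORTED_ECH_VERSIONS:
            return version, buf[off + 4:off + 4 + length]
        off += 4 + length
    return None  # nothing usable; proceed without ECH (or GREASE)
```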
That is a good division of responsibilities because the TLS half already needs to process individual ECHConfigs in the ECHConfigList and select one based on its internal capabilities. Having to thread through application-specific DNS validation seems like it'd be complex. (But this is just my view as an implementor. The spec merely describes the overall behavior of the client system. How you choose to layer it is an implementation decision.) Finally, there's the question of whether TLS as a spec should have opinions here or if it should punt to some upper-layer spec profile, as we did with 0-RTT. I don't think that's warranted here. RFC6066 already decided what can go in server_name, so TLS already is plenty opinionated on the shape of strings meant to be DNS names. Thus, IMO, the simplest and most practical story is to have TLS handle this, rather than thread it up through all the layers.\nI am willing to accept that I am in the rough here.\nAs noted in URL, I don't think we should be putting IP addresses in .\nIn the interest of dealing with one problem at a time, this PR no longer attempts to spell out how to recognize IP addresses. Roughly: The client MUST validate as an LDH hostname. The client SHOULD exclude IP address literals, which may come in many strange forms, or else it may send a non-compliant \"server_name\" extension. If we care to make excluding IP addresses a MUST, let's work out the details in another PR. Same goes for if we want to explicitly support IP address literals — we still need to validate them to protect server applications from receiving IP addresses in non-standard notations, e.g. octal.\nI think there are a couple of things going on here: ECHConfig.public_name, as currently written, is described as what goes in CHOuter.server_name. That wasn't right and we should have made it the outer reference identity. I believe NAME is going to do a separate PR after this goes through. SNI or reference identity, we need to define the type of the thing that goes in ECHConfig.public_name and what validation the client does. In particular, without validation, it gives the network more control over what goes into CHOuter.server_name than would otherwise be the case. Normally, in a typical HTTPS client, TLS can just assume the hostname has passed whatever checks earlier in the stack (URL parsing, DNS lookup, etc.) and so checks in TLS are not load-bearing. The public name breaks this because the TLS layer is fabricating a hostname on its own. So we should have TLS validate it before running too far with it. This PR addresses (2).\nFollowing up on NAME comment, I was indeed planning on clarifying that ECHConfig.public_name is a reference identity, and planned on renaming this to something other than to match. Basically: may be either a name or IP address, validated according to the rules in this PR, and clients will use that identity when constructing the ClientHello per the normal rules. (If it's a name, put it in \"server_name,\" else don't put it in that name.) I think we can probably land this PR as-is. Is everyone else OK with this plan?\nIt's a nit, but I don't think I've ever seen a use of the term \"identity\" that didn't eventually cause confusion. I think public_name was better TBH. Sorry if I missed the discussion but: Are one or more IPv4/IPv6 addresses allowed and if so with what syntax? If only 1 is allowed, why? (Syntax still needs a reference.) I don't know how an IP address here ought to be handled if it differs from those in the SVCB address hints fields.
(Not to mention the origin's A/AAAA records.) Lastly, I forget why we even want an IP address there at all. Thanks, S.\nI assumed there's at most one identity in this field, be it a name or address, but NAME can clarify. Also, I think the identity here only pertains to how the client-facing server is authenticated. If a server uses an IP-address certificate for the rejection path then it might specify the single IP address of that certificate. (Clients don't use SVCB hints for this, so they're orthogonal.)\nThere can be many SANs even if those are IP addresses. So I think you mean that you want exactly one IP address here and if one is here that MUST match one SAN iPAddress from the cert? Personally I don't think that's useful enough to support but maybe I missed why it was really good. TBH, I doubt implementers will notice that orthogonality. And those deploying even less would be my guess. If one is allowed to put a single IP address in an ECHConfig and others in SVCB hints then someone needs to say what to do when those aren't the same. S.\nTrue, but I'm just commenting on parity with what's there right now. (ECHConfig.public_name is just one name.) We may change that later if people want it? I don't see a reason to, though. Yeah, that's what I'm thinking. Hmm, yeah, we may be able to clarify this, but given that we don't discuss anything about the SVCB record (beyond the config) currently, it may not be needed.\nCurrently, this PR explicitly states that the client SHOULD ignore the ECHConfig if it contains an IP address. Unfortunately, what constitutes a textual representation of an IPv4 address is pretty underspecified, so I didn't manage to come up with a foolproof validation algorithm, otherwise I would make it a MUST. If we want to allow IP addresses, let's figure it out in another PR.\nSo I don't think that we need to worry about IPv6 here. That will never match LDH. The IPv4 thing is unfortunate. I think that the risk is that the public name is passed to a certificate validation library that subsequently treats the name as an IP address (matching against an ipAddress SAN rather than a dNSName SAN) when doing fallback processing. If there is any potential for confusion about this string (or look-alikes if we consider the octal mess) then it might happen that there is variation in what certificates are considered OK. That leads to a reliance on the certification authorities being careful about issuance for names that look like IP addresses. We don't want that. Either way, I don't like \"SHOULD\" here. It retains the potential confusion. If one client component is sloppy and doesn't bother, other components are then forced to contend with that choice. I think that I've come around to NAME viewpoint on this. It's reasonable to require a domain name for fallback in all cases, even where a server is identified by IP address alone. I think that means prohibiting IPv4 addresses, just to avoid the potential confusion. That is, if the name contains exactly 4 labels that are only digits, then the record is invalid. IPv6 addresses will look after themselves as they aren't valid LDH A-labels. Note: I don't know if there are rules that might allow 172.0.2.2045 to be registered, which is clearly not an IP address. I believe that the main protection we have against valid DNS names appearing to be IPv4 addresses is the gTLD registration rules (from URL Section 2.2.1.3.1 clause 1.2.1 this is currently assured as only a-z is permitted from the ASCII range).\nNAME can you drill into this?
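Sketching the two checks just floated — LDH label syntax plus the all-digit-labels heuristic for IPv4 lookalikes — might look like this (my illustration, not normative text; note the thread goes on to reshape these rules):

```python
import re

LDH_LABEL = re.compile(r'^(?!-)[A-Za-z0-9-]{1,63}(?<!-)$')

def public_name_looks_valid(name):
    # Reject empty/oversized names and any non-LDH label.
    if not name or len(name) > 255:
        return False
    labels = name.rstrip('.').split('.')
    if not all(LDH_LABEL.fullmatch(label) for label in labels):
        return False
    # Heuristic from the comment above: a name whose labels are all digits
    # (e.g. 192.0.2.1) looks like an IPv4 literal and is rejected.
    if all(label.isdigit() for label in labels):
        return False
    return True
```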
I must be misunderstanding you (and maybe the problem), but are you suggesting that strings which look like IP addresses might be parsed incorrectly by the thing validating certificates against such look-alike addresses? They also have to be careful about names that aren't actually domain names (\"DROP TABLES\" or whatever). How is this a substantially new requirement? Prohibiting IP addresses here seems overly restrictive. Given that it doesn't seem challenging to support, and does not, to my knowledge, introduce any new requirements for CAs or certificate validation APIs, I'd like to hear more reasons why we ought to definitely rule them out. And as NAME says, we should hash this out in a separate issue and PR. In the interest of making forward progress, I'm going to merge this PR as-is, and we can discuss more in .\nIn most cases, I don't anticipate problems: there will be one certificate validator used during fallback and that will make a single determination. IP address lookalikes in that case probably won't be a problem. However, in cases where there are multiple components involved, there might be IP address lookalikes that pass one validator but not another. That might be used to attack clients.\nGiven how IPv4 literals can get parsed in some applications, I suspect the TLS library classifying and rejecting IPv4 literals in a way that ensures the calling application always interprets it as a DNS name is hopeless. I'm now inclined to say we should: Do and say public_name is exclusively DNS. If anyone adds IP in the future, the expectation will be a separate extension and thus not require any parsing to pick semantics. Say the exclusively-DNS public_name field is validated according to DNS rules, and leave it at that. That at least avoids completely unbounded DNS control on the ClientHello. Concede that we can't reliably exclude IPv4 literals at this layer. Call this out and say that clients MUST NOT validate the string as IP addresses. If your verifier has separate APIs for configuring DNS vs IP names for subjectAltName, feel free to pass the string as a DNS input and move on. If your verifier's API takes a more URL-like hostname, you should use whatever logic you used to distinguish DNS vs. IP and fail the verification if you think the string is an IP. (Effectively that verifier believes \"DNS:1.2.3.4\" is never valid because it has no way to spell it.) This does mean the DNS can cause ClientHelloOuter to violate the RFC6066 prohibition on IPv4 literals, but since RFC6066 failed to specify what syntax it was prohibiting, the meaning of that rule is somewhat unclear.\nIn , there's been a lot of discussion around how and whether or not IP addresses should be allowed as a reference identity of the client-facing server. This issue tracks sorting this particular question out. NAME suggested in the case the reference identity is an IP address. (Whether or not this address is in the ECHConfig is separate.) NAME suggested . (Whether or not this allows the outer SNI to be empty is separate.) So there seem to be two questions we need to sort out, in relation to : (1) Should the ECHConfig be allowed to supply IP addresses? And if so, how do client stacks validate them? (2) Should clients be allowed to validate servers using IP addresses as a reference identity? (If yes, then the ClientHelloOuter.SNI must be allowed to be empty/omitted.) I think (2) should be 'yes', given that it's currently possible today and introduces no new complexity.
I think it should be feasible to spell out the requirements to make (1) work, so I also think that should be 'yes'. What do others think?\nYes and yes. RFC 8738 uses reverse-IP notation, which seems like a halfway-reasonable option. An \"empty\" public name seems a little more flexible (doesn't require servers to carry certificates for each other's IP addresses), and ought to be easy enough to support on clients.\nNAME to clarify this: You're suggesting that if the ECHConfig public name is empty then the client should use the IP address of the client-facing server as the reference identity, right? That seems like a reasonable way around the language we might need to specify what is a valid address in an ECHConfig. (If the reference identity is not empty, then it MUST be a name, otherwise it's the IP address, and clients should connect and validate based on that identity accordingly.)\nNAME Correct. NAME previously raised a concern that this could complicate client implementations: the TLS stack can no longer make use of a connected socket given solely the ECHConfig and inner reference identity; it also needs the remote IP in some cases. This is true, but I think the remote IP is essentially always \"at hand\" so perhaps it can be passed in unconditionally without too much difficulty.\nI don't think that's true when the client is using a proxy, depending on how much of DNS is done on the client and how much is done on the proxy. If the client does the SVCB lookup itself but asks the proxy to look up TargetName, as the SVCB draft , the client won't have the remote IP of the client-facing server available. (Are there weird networks that rewrite addresses at the DNS level? That would also break with this idea.)\nTo try and sharpen NAME point: will the client always have access to the client-facing server IP? I don't see how this won't always be true, since the client connects to something, but maybe I'm misunderstanding the proxy scenario.\nNAME is correct about the behavior when using a SOCKS5 or HTTP CONNECT proxy in domain-oriented mode. The proxy performs A/AAAA resolution and TCP connection establishment, and does not provide this information back to the client, so the client doesn't know which server IP it is using. That does suggest that \"empty\" won't work.\nOh! I see. The proxy is the one establishing the transport connection.\nThere's probably at least three modes here, if not more: (1) Proxy gets the origin DNS name and takes over the whole DNS lookup. Client doesn't do SVCB at all. The only way to salvage SVCB-based features is to invent a header for the proxy to tell the client what SVCB path it followed in the CONNECT response. (2) Client does SVCB lookup and gives TargetName to the proxy. (3) Client does the whole lookup and gives the IP address to the proxy. The one that breaks empty names is (2). Though, given that proxies today are often used to access internal services (i.e. without the proxy, you may not even be able to resolve the name), I expect (1) to be the default for existing schemes. Whereas (3) might make sense for other kinds of proxy use cases. (I'm actually not entirely sure when you'd want (2) despite the spec recommendation... maybe if you're relying on the proxy to rewrite IPs, but the proxy needs a DNS name to do so?)\nThis is presumably not the right forum for this topic, but mode (2) is the one that gives you correct Happy Eyeballs, geo-DNS, etc.\nI seem to recall some DNS IPv6 transition thing that maybe has that property.
S.\nYes, there are transition technologies that rewrite addresses. Clients get a v6 address because they only have v6, but the server address is v4. As for the general proxy problem, there is no value in any design that does not involve the client having the ECHConfig. It can't use the public IP (because it doesn't always know it), but it can still populate server_name according to the value of public_name. I might have missed something there though. From David's , option 1 is a bit silly (it adds unnecessary latency), but it also gives you the best locality if your goal is to use the proxy egress as the client network location (option 2, as Ben says, only works if you care about the network location of the client, at which point you might as well not use a proxy). The only difference between 2 and 3 there is whether the proxy needs to perform its own name resolution, which might differ from the client (and cause problems as a result; but that's mostly just a case of insisting that clients don't choose option 2). I don't understand why it's so hard to require the use of a domain name. It shouldn't be \"\". Adding other forms of reference identity adds complexity to what is already a morass of horrifying complexity, so we should insist on justification for every addition. And I'm not seeing any concrete arguments in favour of the complex option. So my answer is No to both questions.\nI don't agree with Martin's characterization of the proxy behaviors, but I don't think this is the place to discuss that topic anyway. To me, the main advantage of supporting IP reference identities is for origins that are not using a large-scale hosting provider, and thus cannot rely on a widely shared . Domain names are typically more closely linked to an organization than IP addresses (especially with VM hosting), and more expensive to change. An IP address ensures that the client does not populate the outer SNI, and supports a configuration where the default certificate only contains an IP address SAN. This minimizes the information visible to an adversary unless they can also identify the service that runs on this IP address (e.g. by scanning the whole DNS). An alternative to IP addresses in would be a flag in the ECHConfig indicating that the specified reference identity is in the default certificate (which is implicitly true for IP reference identities). This would allow clients to omit the SNI, improving small services' privacy against a passive adversary, but active adversaries could fetch the default certificate to learn the .
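That last step is cheap for an active probe to reproduce; for example, with Python's ssl module, connecting by IP with no SNI so the server presents its default certificate (an illustration of the attack described above, not an endorsement):

```python
import socket
import ssl

def fetch_default_cert(ip, port=443):
    # No server_hostname means no SNI, so the server falls back to its
    # default certificate -- exactly what the active adversary would see.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((ip, port)) as sock:
        with ctx.wrap_socket(sock) as tls:
            return tls.getpeercert(binary_form=True)  # DER-encoded bytes
```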
That side question has turned out to be way more messy than I anticipated. Since there isn't a strong need for IPs as public identities, but also unease about completely closing that door, I propose we tweak the encoding so it's easy enough to add later (reuse the extension mechanism we're already putting in), but not actually bother in this draft.\nIsn't it the case that the extension scheme would allow us to add IP addresses later, even without ?\nThere's lots of different ways we can accomplish this, and it'll come down to personal taste. For example: Make public_name optional in the main ECHConfig (and waste the two-byte length field), and allow an extension to specify the IP address. Move public_name to a mandatory extension (as 426 does), and allow future extensions to specify addresses that override the name. Etc. Given the bikeshed that's happening here, it seems prudent to at least separate these things (names and addresses). If we can agree on that, then we just need to decide where we put those bits. does the job one way. If someone wants to propose an alternative PR to put the bits somewhere else, please do so!\nIf we're really talking about a bikeshed, then the \"no change\" option works for me. It's fewer bits on the wire (by 4), it is less complicated to enforce it being mandatory, and we already have it implemented. I do think that public_name being optional has some impact here. It means that you don't have a reference identity for fallback handling. I think that's probably legitimate. There might be cases where fallback is not needed.\nHmm... how do you see this working if a future extension wants to specify IP addresses as a fallback but the public_name is not allowed to be optional? I agree that there will be cases where fallback is not needed. DNS-SD is one of them. And in that case, removing the public_name field (or allowing it to be empty, or whatever) will be needed. I think addresses that pretty nicely.\nAhh, I misunderstood. It's not that the extension is mandatory to use, but it is mandatory to understand when present. Let's be really crisp about the words we use. Of course, an empty value is an equally good way to indicate that the value isn't operative. It's one byte in the less-common case to save 4 bytes in the common case. The question occurs to me: is there any possibility that you could have both a public_name extension and this hypothetical ip_reference_identity extension? What would that mean? I want to be really open about my priors here: I think that extensibility along this axis is an anti-feature for ECH. The mandatory-to-understand extensions are part of what really confirmed that for me. I think that the current format would be improved by removing the extensions block.\nUnclear -- I think it'd be up to the hypothetical extension to define how those things interact. I hear you. If we didn't care about IP addresses for the fallback certificate, much of this would be simplified. (And admittedly the desire to remove the extensions block would increase for me, too.) I'm going to take this to the list, since we don't seem to be getting anywhere without consensus on that point.
It's useful to nudge the ecosystem towards things that work well. If a server deploys without fallback, underestimating the predictability of DNS caching, it will break rarely in some clients. That sort of thing is bad UX and increases the pressure for clients to adopt insecure fallbacks. Unless there is a clear need, I don't think we should allow that. (Otherwise not setting it up is deceptively attractive for a naive server since they make fewer decisions.) The extensibility story for ECH is a little funny. You send an ECHConfigList and the client ignores any ECHConfigs it can't understand. So if we say today's ECH clients require a public_name extension, future ECH clients can still support a public_ip extension that relaxes the public_name requirement. Today's ECH clients will just ignore the public_ip-using configs, which is the desired behavior. But I should say I have no stake in this IP mess and do not care whether we allow those. Just pick something that everyone's happy with, please. was an attempt to punt this out of the draft while keeping all the existing semantics unchanged (hence required extension; the correct reading is that it is purely an encoding change).\nThat point about making it hard NOT to provide a fallback makes me more inclined to reject . If the effect is just a change in encoding, it's a less efficient encoding and one that is harder to validate properly. If the effect is to make it optional under certain conditions, you have just argued effectively for why that has negative consequences.\nIt's really a different point and maybe deserves its own issue but since I'm not logged into github and email works... The \"critical\" bit in X.509 extensions IMO didn't work. IIRC we really designed it for the basicConstraints extension, as that had to be understood by relying parties, but the result was that we more or less only had one shot at introducing any critical extensions - unless I'm forgetting stuff (which is entirely possible:-) pretty much all later attempts to add new critical extensions failed as they'd break deployed clients. We'd have been better off adding the is-a-ca flag and anything else we knew to be critical to the TBSCertificate. (Again IIRC, we put both critical and non-critical stuff into extensions because the UniqueID fields in TBSCertificate that were added in X.509v2 hadn't been useful, so there was a reluctance to put new fields in there at that point.) I don't see why criticality is different for ECH extensions. My bet is that any new critical extension will need a new TLS extension code point once the first ECH RFC is issued. So I'd remove the extensions fields we've defined, and if not then at least get rid of the idea of criticality. Cheers, S.\nNAME I filed to discuss ECHConfig.extensions -- let's try and keep this issue focused on the IP address question(s).\nAddressed in , closing.\nSome edge cases around the public_name: What if the public name is some random string which is not a valid DNS name? (What's the right terminology here? Same as in RFC6066?) Does the client reject the ECHConfig? My feeling is yes. Imagine you have some random service accessible on a private address. Today, it can assume it'll never see in the server_name field. Even if an attacker pointed at that private address, the attacker is constrained by valid DNS strings. But the public name is never directly queried in DNS. Is a server deployment allowed to use an IP certificate for the ECH rejection identity? (That was actually the original idea for the fallback flow.)
If so, how do they specify it? Do they use the ASCII serialization of the IPv4 or IPv6 address? Note that IP addresses do not go in server name, so the client would need to recognize this and leave server_name empty. NAME\nI imagined the outer name would be set from the config similar to how an application sets the inner name. That is, one provides a string to and, internally, the library determines whether or not that string is a valid domain name (and thus goes in the server name) or is an IP address (and thus is used to verify the IP-based cert that might come back from the server). ASCII serialization of addresses seems fine here, if one just takes that string and passes it to the function (or equivalent).\nWell, OpenSSL and derivatives have a lower-level API and just take the server name as an opaque string. :-) But yeah, that seems a reasonable model. Whatever we pick, we should write it down. What do you think should happen if the public name is neither a valid DNS name, nor a valid IP address? Like . ECHConfig parse error would be my inclination.\nYeah, that seems like a configuration error on behalf of the server.\nThat seems like it's already an unsafe assumption, right? Malicious clients could send SQL-injection strings to vulnerable servers without ECH. I suppose ECH introduces some asymmetry; one malicious DNS server could cause N clients to send the malicious payload.\nBacking up a step, if we care to protect applications from malicious/malformed DNS hostnames, it seems prudent to validate all incoming server_name values on the server, whether they came from ClientHelloInner/ClientHelloOuter or even a non-ECH ClientHello. (Assuming we're using the RFC6066 HostName definition.)\nNAME yeah, that's basically what I was thinking the function would do. Would you have cycles to prep a PR to resolve this issue?\nIt depends. Services listening on private addresses often make a lot of assumptions. These assumptions are pretty questionable, but we put in the effort anyway because that is reality. Yes, any server listening on a network should tolerate arbitrary input and behave well. Nonetheless, the reality is private services are often buggy. Where we break soft assumptions that might have previously (mostly) worked, we should at least think about it. Here, it seems there's no point in breaking the assumption.\nAh, servers at private addresses was not even on my radar. I support client validation of ECHConfig.public_name at parse time AND server validation of all ClientHello* server_name extensions. NAME Happy to do a PR.\n(I don't think the ECH spec should say anything about server validation here. The server looks at whichever ClientHello it selects and does whatever it would normally do with the server name in the selected one.)\nThere doesn't seem to be a standard that reflects actual current practice. Chrome and Firefox accept SNIs with underscores, which are not allowed in valid DNS names (URL).\nYep, thanks for raising that, Carrick. Perhaps we should make hostname validation a SHOULD rather than MUST?\nThe issue doesn't sound like a MUST vs. a SHOULD but rather what they SHOULD or MUST do. I guess we could say \"do whatever you usually do with the SNI.\"\nOh and yeah, SHOULD seems like a better idea than MUST since there are conflicting opinions about the right way to go about it.\nIt's a little tricky due to what layers we're talking about. Chrome doesn't particularly validate the SNI at the TLS level. Rather, we leave that to URL parsing and DNS logic earlier in the pipeline.
If we've gotten far enough to make a TLS connection, we assume we're happy with the string we're passing to SNI and certificate validation. I imagine this is reasonably common: OpenSSL likewise treats the set_servername API as an opaque byte string. ECH breaks this assumption by bypassing all the earlier layers. Suddenly a TLS-level construct (ECHConfig) is fabricating a DNS name on its own, and now TLS needs to pick up all the validation.\nTechnically couldn't DNS logic validate the DNS name in the ECHConfig?\nHi folks, PR is a little more complete now, if you get a chance :)\nTo close the loop on this: it could, but we're treating the ECH parts of SVCB as opaque to DNS. [edit: I miswrote \"opaque to TLS\" at first] I think that's desirable because TLS may introduce new ECH extensions or ECH versions that the DNS stack may not be aware of. Also, any validation on the URL parsing / DNS query path happens in the process of making a request to the DNS server at all, whereas any validation here would be a check afterwards on the result. Whatever we do, it'll end up looking like someone, DNS or TLS, implementing a check we need to write down.\nTo come back to this: because SNI does not support IP addresses, IP certs must be the server's default cert, i.e. they must be served to requests that omit the SNI. Thus, we merely need a way to tell clients to omit . My suggestion is to allow to be empty, meaning \"omit outer SNI\".\nKeeping in mind that the client uses to authenticate the server's , the client now needs to be aware of which IP address it's connecting to. I suppose this isn't a protocol concern, but it's more ergonomic when the ECHConfig contains all the information TLS needs to do ECH.\nOK. As I mentioned yesterday, I think should probably be an X.509 .\nI do not think GeneralName will work. There's somewhat of a mismatch between what the application thinks of as identities and how those identities are encoded into X.509. GeneralName is part of the latter. What does a browser certificate verifier, which only believes in things that fit in a URL, do if it's told the identity is now an EDIPartyName? What SNI, etc., does that translate into, given that our identity-to-TLS-config algorithm takes an HTTPS origin, not an X.509 name? (Note that the type field in SNI is fiction. There is and will be .) More importantly, this is glomming too many things into an already overloaded issue. If you want to pursue this, can you please fork GeneralName into a separate topic? The status quo in the existing text is that the public name is clearly only good for DNS or DNS+IP, depending on how much you believe in URL syntax. We already have a need to better specify that case, including client validation. Let's fix that, and then we can separately tackle more complex cases.\nStrongly disagree. GeneralName was a reasonable guess at a thing to do 20 years ago but is no longer a structure that ought to be leveraged - we have no need for EDI nor X.400 names for ECH and even tempting someone to write code for ORAddress handling is risky - there's >1 can of worms there, all of which are best avoided. S.\nAgree with Stephen. Using GeneralName is like hunting butterflies with an elephant gun. :)\nAddressed in , closing.\nThis seems like it would prohibit URL NAME NAME would this be a problem for the DNS-SD case? High-level, I think that we need to agree on what this value is for first. Editorially, I think that this text needs more space to move. Trying to cram all this stuff into a single paragraph isn't doing us any favours.
Make a new section for this. It's certainly important enough to do properly. My sense is that the right answer to the underlying issues is that this is a reference identity rather than something we put in server_name in ClientHelloOuter. In that case, the fallback is achieved by establishing a connection and validating that the entity providing the updated configuration is able to answer for this reference identity. You then construct ClientHelloOuter in such a way as to ensure that you are able to successfully connect to that reference identity if ClientHelloInner cannot be accepted. If this is just a name, then this is over-specified. If the goal is to describe what goes in ClientHelloOuter server_name, none of the validation is necessary. If the server wants you to put junk in the field, then I don't see how that junk would be any more identifying than an identifying DNS name. You really only need special IP address rules if you intend for this to be a reference identity. The Server Name Indication (SNI) extension in TLS has a provision to provide names other than host names[1]. None have ever been defined to my knowledge, but it's there. OpenSSL (and possibly others) have had a long-standing bug[2] (fixed in master) that means that different types of names will cause an error. To be clear: I live in a glass house and am not throwing stones; these things happen. However, it means that a huge fraction of the TLS deployment will not be able to accept a different name type should one ever be defined. (This issue might have been caused by the fact that the original[3] spec didn't define the extension in such a way that unknown name types could be skipped over.) Therefore we (i.e. BoringSSL, and thus Google) are proposing to give up on this and implement our parser such that the SNI extension is only allowed to contain a single host name value. (This is compatible with all known clients.) We're assuming that since this is already the de-facto reality that there will be little objection. I'm sending this mostly to record the fact so that, if someone tries to define a new name type in the future, they won't waste their time. If the community wishes to indicate a different type of name in the future, a new extension can be defined. This is already effectively the case because we wouldn't fight this level of incompatibility when there's any other option. (I think the lesson here is that protocols should have a single joint, and that it should be kept well oiled. For TLS, that means that extensions should have minimal extensionality in themselves and that we should generally rely on the main extensions mechanism for these sorts of things.) [1] URL [2] URL – note that the data pointer is not updated. [3] URL Cheers AGL Adam Langley EMAIL URL"} {"_id":"q-en-draft-ietf-tls-esni-96e4deec6af2269676d05dbef0cc19ec289656061425f2e564302dac1f47edaf","text":"This should probably land after . cc NAME\nI don't know the answer to this question, but, from the TLS 1.3 RFC: I noticed this while testing with a pre-baked ClientHello that had both extensions. The ECH draft clearly required that the extension be removed, but I wondered about .\nGiven URL, URL, and URL, I'd say quite the opposite.\nAh, I see text has been added saying it must appear in the ClientHelloOuter. Thanks for the references. I was working from draft-10.\nHere's a fun case. Suppose the client talks to a 0-RTT-capable, ECH-capable server.
It'll send early_data + pre_shared_key in ClientHelloInner, wrap that in some ClientHelloOuter, and follow it up with early data records. Now the server rejects ECH and handshakes with the ClientHelloOuter. But the client has already sent early data, so the server needs to know to skip past it. I believe this means we need to require the client mirror ClientHelloInner.early_data in ClientHelloOuter, even though it's not actually offering to resume anything. Sending early_data without pre_shared_key is slightly weird (if tolerated at all), so we might even want to recommend a fake outer pre_shared_key. We almost don't care about this: the client may as well stop the handshake at server Finished and skip the client Finished flight anyway, for purposes of the recovery flow. However: that's not what the draft currently says. And if the server rejects ECH, handshakes with ClientHelloOuter, and sends HelloRetryRequest, we do care about the client's second flight getting through. That second case is extra fun because the client-facing server must look for the second ClientHello too.\nWasn't this already the outcome from ? (That is, if the inner CH has a PSK, then the outer one ought have a dummy PSK, too.) As for mirroring early_data, why can the server not just ignore the application data (as if it received early_data and decided to ignore it)? Taking a quick look back at RFC8446, the server behavior where clients send application data without early_data is not defined, so are you concerned about that, or something else?\nWell, I think we only got as far as MAY in , but yeah I think this is another reason. Regarding ignoring application data, our server implementation only skips early data if there was an early_data extension that we rejected. I believe that matches RFC8446 since it doesn't say to skip application data in general, just that case. And, yeah, that's the concern.\nSince this is behavior for the client-facing server, which supports ECH, can't we specify the desired behavior in this spec? That is, we might say that a client-facing server which rejects ECH MUST ignore any early data sent by the client, or something. Would that work?\nWe could, but then it'd break the nice property that a plain RFC8446 server will handshake ClientHelloOuter without any fuss. This property is great for rollback safety. (Otherwise any ECH server deployment needs to be two-stage, and introduces a lower bound that you cannot rollback beyond.) And elsewhere we've tried to avoid contradicting RFC8446. Yet another option would be to tell the client to retry on certain kinds of errors if it offered 0-RTT in ClientHelloInner, but that's a bit more fuss since retries are often outside the TLS stack. (Besides, the fact that you offer 0-RTT in ClientHelloInner is all but public anyway. While we don't have a cleartext 0-RTT marker[], it's pretty obvious from timing that some post-ClientHello record came before ServerHello.) [] In TCP... I want to say QUIC does, but I could be misremembering.\nTrue! This is fun. :-)\nI'm not sure why ignoring stuff is fuss. What's the alternative? To terminate the handshake if the server receives early data? Wouldn't that be more fuss? If RFC8446 doesn't specify what to do in this situation, I'm not sure why specifying it is in contradiction to RFC8446 (as opposed to expanding on it).\nI think RFC8446 does say what to do in this situation. Section 5.2 says at the bottom: URL The default for the record layer is that decryption errors are fatal (as it should be).
Early data skipping is an exception to this described in section 4.2.10, and the exception only applies when the server declines an early_data extension. Oh no, the goal is to ignore stuff. That's what makes the recovery flow work. The issue is that, in RFC8446, servers only ignore the early data if there was an early_data extension in ClientHello which they declined. Otherwise, it's just a fatal decryption error. In order to meet that condition, ClientHelloOuter needs the early_data extension whenever ClientHelloInner does.\nACK -- thanks for closing the loop!"} {"_id":"q-en-draft-ietf-tls-esni-54dc0e344e46f27b139f6d9ae30ef83379097042dea318c0b5e2bef4e9d4cd3d","text":"Not sure if this is needed, but I guess it would be another way to generate failures ;-)\nGood clarification! This was already somewhat in the real-ech section, so I just replaced this with a forward pointer there."} {"_id":"q-en-draft-ietf-tls-md5-sha1-deprecate-feb7a78641fc2426a0ce9fa728380f8b2f357713673065a8e2bc082e9570326c","text":"NAME thanks, updated with the additional changes.\nNAME NAME NAME Feel free to merge when you get a chance\nGreat! Now on to Daniel's comments."} {"_id":"q-en-draft-ietf-tls-md5-sha1-deprecate-820e8a4d73fdcc5e8b30b85d8bcae2e65f4ea967b60a2a6d8484e7453d751c9d","text":"cc NAME NAME NAME\nI believe this I-D should still include the \"updates: 5246 (if approved)\" header, even though we took out the 5246 OLD/NEW section. We are still proposing that 5246 be changed to require the signature_algorithms extension be sent and that MD5/SHA1 never be sent. In other words, I think you can merge this PR assuming there are no conflicts.\nVerifying that this is okay on list, but Daniel suggested the following change for s6: Add the following immediately before the OLD/NEW, i.e., right after \"due to MD5 and SHA-1 being deprecated.\": In Section 7.1.4.1: the following text is removed: If the client supports only the default hash and signature algorithms (listed in this section), it MAY omit the signature_algorithms extension.\nResulted in a number of changes - see .\nI think you also need to update the abstract and introduction sections, as they both mention \"This document updates ...\" (same as URL). Will probably conflict with URL so we should merge that first.\nI believe this can be merged as is. I.e., don't remove the \"updates\" header, change the abstract, or intro."} {"_id":"q-en-draft-ietf-tls-md5-sha1-deprecate-6eb0c99a0ce951650a87d5230b2fd09367f7be93fbc3fd7e9b297f28c9c7b300","text":"Addressing Roman's AD comments.\nhas 2 conflicts, not sure how to resolve, the updates look fine. Please merge.\nThere was a conflict with the 8174 reference being in the other branch. I added it in because it's now part of the terminology paragraph.\nPlease address the following IDNits: -- The document seems to lack an IANA Considerations section. (See Section 2.2 of URL for how to handle the case when there are no actions for IANA.) -- The draft header indicates that this document updates RFC5246, but the abstract doesn't seem to mention this, which it should. -- The draft header indicates that this document updates RFC7525, but the abstract doesn't seem to mention this, which it should. Section 1. Editorial. -- s/RFC 5246 [RFC5246]/[RFC5246]/ -- s/RFC 6151 [RFC6151]/[RFC6151]/ -- s/RFC7525 [RFC7525]/[RFC7525]/ Section 1. Editorial. For symmetry with the rest of the text: OLD RFC 6151 [RFC6151] details the security considerations, including collision attacks for MD5, published in 2011.
NEW In 2011, [RFC6151] detailed the security considerations, including collision attacks for MD5. Section 1. Please provide a reference for \"Wang, et al\". Is there a reference to provide for \"the potential for brute-force attack\"? Section 6. Editorial Nit. s/RFC5246 [RFC5246]/[RFC5246]/ Section 6. Move the text \"In Section 7.4.1.4.1: the text should be revised from\" out of the \"OLD\" block of text to be its own intro paragraph so that the OLD vs. NEW is a clear cut-and-paste. Section 7. Editorial. s/ RFC7525 [RFC7525]/[RFC7525]/ Section 7. SHA-1 is also not mentioned in RFC7525. Recommend: OLD The prior text did not explicitly include MD5 and this text adds it to ensure it is understood as having been deprecated. NEW The prior text did not explicitly include MD5 or SHA-1; and this text adds guidance to ensure that these algorithms have been deprecated. Section 7. Editorial. Grammar. OLD In addition, the use of the SHA-256 hash algorithm is RECOMMENDED, SHA-1 or MD5 MUST NOT be used NEW In addition, the use of the SHA-256 hash algorithm is RECOMMENDED; and SHA-1 or MD5 MUST NOT be used Section 10.2 Please make RFC5246 a normative reference."} {"_id":"q-en-draft-ietf-tls-ticketrequest-7770c27cd08564df50446692cf19dddf45f59a062d413b7e72fd4e9527a9e80a","text":"Addresses and . The server hint came up in the context of QUIC, e.g., as a possible optimization to let clients know if they should expect any post-handshake messages. cc NAME NAME\nLooks like this got merged without any business days to review it :) but otherwise the text looks good, approved.\nEvery day is a business day! ;-)\nFrom NAME\nFixed by .\nYou probably want to be clear that ticket_request cannot appear in HRR, given that you have a section on it.\nFixed in .\nI'm not a huge fan of sending extensions that aren't really needed, but this seems fine. I missed the QUIC discussion (or maybe it's still in my inbox). BTW, I still like the idea of hoisting tickets to the application layer, but ekr seems to think that they belong at the TLS layer. I guess we'll just have to keep disagreeing on that point :)"} {"_id":"q-en-draft-ietf-webtrans-http3-7954fb6457feccf3bb1f575e1276f7e99c130b8a8afe1f93864d18746ad0ec77","text":"This matches agreement at the last interim. We should separate this into a different draft, but for now this will make the draft match implementation.\nPartially addresses . cc NAME NAME\nHttp3Transport currently assumes that server-initiated bidirectional streams and datagrams can only be used for WebTransport. I suggest we prefix each server-initiated bidirectional stream and datagram with a new HTTP/3 frame type for WebTransport. This way, a future standard could define its own frame type to run alongside WebTransport.\nUsing an H3 frame type for server-initiated bidi streams would also make the design more consistent, instead of the asymmetric design there is today. It costs a byte per bidi stream, but it seems worth it for consistency and flexibility. For datagrams, I believe that we'd want to be able to coexist with other H3 extensions, such as MASQUE, and I believe that's possible without any extra overhead by adding a Session -> Datagram flow ID mapping when a WebTransport session is created.\nIf it's a single byte, it's more of a stream type (like the unidirectional stream-type) than a frame type, right? There's a discussion Victor pointed me to in , where it's suggested that datagrams belonging to a WebTransport session use a Flow ID equal to the Session ID.
It would be nice to avoid the extra state and lookup.\nAh yes, it's more of a stream type. In that case, I'm unsure of whether there's a good way of eliminating the asymmetry. The only option I can think of is to not use a stream type for server bidi streams and only use the frame, like for the client-initiated bidi streams? Thanks for the pointer to , it looks like there are a few potential designs for datagram flow IDs. If in doubt, simpler is probably better. I'll think more about this one.\nI agree we should use the same mechanics for client and server bidi streams.\nAfter thinking about this more, not having a stream type doesn't seem so bad to me and it does provide symmetry. Stream types seem like an obvious design choice, but I believe HTTP/3 is sufficiently constrained that one could infer the stream type based on the first frame on the stream. For example, the control stream MUST start with SETTINGS. In retrospect, having both stream types and frame types is a bit awkward, and it likely would have been cleaner to only have frame types, but that ship has sailed.\nAfter having implemented this for the hackathon, I can confirm that the lack of stream preface on server-bidi streams is annoying and requires special casing (I actually simulate receiving one). Let's please add one. I took a bit of a shortcut and treated the stream preface as both a stream type AND a frame type. After peeking at the unidirectional stream input to discover that it's WT, I let the parser run on that and treat it as a frame type, making it symmetric with the bidi case. Is it possible to define a reserved frame type with the same value as the stream type?\nGiven HTTP/3 does not have a stream type varint for bidi streams, I think a potential way forward is to say that the first frame on the bidi stream can indicate the 'type' of the stream, but that there is no explicit type identifier on the wire. Feel free to ignore the below comment, as it's apparently unchanged from January. Looking back, SETTINGS is required to be the first frame on the control stream, so the control stream type is a bit redundant, and the same is true for QPACK and I believe could be true for server push? Obviously we're not going to redesign HTTP/3 stream and frame types at this stage in the process, but making bidi streams consistent for both perspectives seems preferable to me.\nDiscussed at IETF 110. There seems to be general agreement that we should make server-initiated bidi streams use the same encoding as client-initiated ones (that is to say, frames). There was some interest expressed in splitting the definition of server-initiated bidi streams into their own draft; NAME and NAME are to reach out to httpbis about the appropriate venue for that.\nJust bringing some of the discussion from here: HTTP should provide some consistent guidance for the format of HTTP/3 extension streams. Client-bidi extension streams have to start with a frame or series of frames, to be compatible with H3. One of these frames has to identify the type of stream (a sketch of this pattern follows below). This is how WT is currently specified.
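For illustration, here is a rough sketch (Python) of the "first frame identifies the stream" pattern just described: the receiver peeks at the first varint frame type on a bidirectional stream and routes the rest of the stream accordingly. The codepoint below is a placeholder assumption, not the registered value.

```python
def read_varint(buf: bytes, off: int = 0) -> tuple[int, int]:
    """Decode a QUIC variable-length integer; returns (value, bytes consumed)."""
    first = buf[off]
    length = 1 << (first >> 6)           # top two bits select 1/2/4/8-byte encoding
    value = first & 0x3F
    for i in range(1, length):
        value = (value << 8) | buf[off + i]
    return value, length

# Illustrative codepoint only -- check the current draft for the real value.
WEBTRANSPORT_STREAM = 0x41

def classify_bidi_stream(first_bytes: bytes):
    """Peek at the first frame type on a bidirectional stream and decide
    who owns the rest of it: WebTransport or ordinary HTTP/3 framing."""
    frame_type, consumed = read_varint(first_bytes)
    if frame_type == WEBTRANSPORT_STREAM:
        session_id, n = read_varint(first_bytes, consumed)
        return ("webtransport", session_id, consumed + n)
    return ("http", frame_type, consumed)
```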
Unidirectional streams all start with a Stream Type, but after that are inconsistent in H3: the control stream is Stream Type + Frames; QPACK streams are Stream Type + QPACK instructions; Push streams are Stream Type + Preface (Push ID) + Frames. Stream Type seems like a more general extension mechanism, since it can be used to completely decouple from HTTP/3 frame parsing.\nI find stream types to be less extensible than frames: frames allow you to do anything that stream types do, while additionally providing more opportunity for extensibility. I would prefer server-initiated bidirectional streams to behave like client-initiated bidirectional streams instead of unidirectional streams, as that makes implementations simpler.\nJanuary Me said: So I think I'm arguing against myself and I'm going to stop. My other comment on this thread was that I didn't like the asymmetry of WT uni streams, which currently are . My past self seemed to be advocating that we change WT uni streams to or perhaps . Do we want general guidance about how extensions should use Stream Type vs Frames on unidirectional streams -- specifically that there should be at least one frame?\nThe place to put that guidance would have been draft-ietf-quic-http, and I think that ship has sailed. For unidirectional streams, I like your framing proposal as that enables extension frames down the road.\nWell, not too late to put out a BCP or something, especially if there's going to be some kind of formal guidance on server-initiated bidi streams. I'll open a separate issue about changing the WT uni framing.\nI think this specific issue was fixed by . Please reopen if I'm wrong."} {"_id":"q-en-draft-irtf-nwcrg-coding-and-congestion-in-transport-640b1fabc393d5d544d5590189b39ebceaa8fcbe21c586c905dd9398c4ef5abb","text":"This proposes a complete re-vamp of the document. I would also suggest to completely remove sections 6 and 7, but rather than doing this yet, I inserted comments that explain why.\nOn 17/01/2020 at 09:28, mwelzl wrote: Yes! Definitely agree.\nOn 17/01/2020 at 13:50, francoismichel wrote: OK for me."} {"_id":"q-en-draft-webtransport-http2-1bd1fc007018974e6ed616a669abcbcf65e6909665b7891c121a4b0a788da314","text":"The definition was a little loose, so I rewrote it. We only need to use 0x0a and 0x0b, which is lucky and it lets us avoid much of the mess that QUIC has around bits in the type. I've also added definitions for the fields in the frame."} {"_id":"q-en-dtls-conn-id-eed97ec2f6bc1387adc82e9592ad18272bafc07cc11512b68336d0af7748c5f9","text":"URL Francesca Palombini has entered the following ballot position for draft-ietf-tls-dtls-connection-id-11: No Objection When responding, please keep the subject line intact and reply to all email addresses included in the To and CC lines. (Feel free to cut this introductory paragraph, however.) Please refer to URL for more information about DISCUSS and COMMENT positions. The document, along with other ballot positions, can be found here: URL COMMENT: Thank you for the work on this document. I only have minor comments and nits below. Francesca sending messages to the client. A zero-length CID value indicates that the client is prepared to send with a CID but does not wish the server to use one when sending. ... to use when sending messages towards it. A zero-length value indicates that the server will send with the client's CID but does not wish the client to include a CID.
FP: clarification question: I am not sure the following formulation is very clear to me: \"to send with a(/the client's) CID\". Could \"send with\" be rephrased to clarify? The previous paragraph uses \"using a CID value\", that would be better IMO. the record format defined in {{dtls-ciphertext} with the new MAC FP: nit - missing \"}\" in markdown. The following MAC algorithm applies to block ciphers that use the with Encrypt-then-MAC processing described in [RFC7366]. FP: remove \"with\" Section 10.1 FP: I believe you should specify 1. what allowed values are for this column (i.e. Y or N, and what they mean) and 2. what happens to the existing entries - namely that they all get the \"N\" value. Section 10.2 FP: Just checking - why is 53 \"incompatible with this document\"? Value Extension Name TLS 1.3 DTLS Only Recommended Reference FP: nit - s/DTLS Only/DTLS-Only to be consistent with 10.1"} {"_id":"q-en-dtls-conn-id-491ca7522e0b2f51c31390861190aafa04e06ed259110342a483e2d9795db4eb","text":"URL The example shows in flight 5 Certificate --------> ClientKeyExchange CertificateVerify [ChangeCipherSpec] Finished (cid=100) Does this imply that flight 5 is sent with cid = 100 or only the Finished? I would prefer to be able to send the other handshake messages of the flight with a cid as well; that would make it harder to disturb handshakes by spoofed handshake messages.\nIn DTLS 1.2 only the Finished message is encrypted. In the current design we make use of the CIDs only once encryption is enabled. I should explain that only the Finished message contains a CID here.\nSee additional explanation in PR : URL\nI would enjoy it if the cid could be extended to the unencrypted handshake messages also. Is there any reason not to do so? My consideration is to make the handshake \"slightly more\" protected against spoofing. The current protection in RFC6347 is limited to URL and URL . The first applies only to the CLIENTHELLO (\"cookie\" protection against spoofed CLIENTHELLOs). The second (MAC and Anti-Replay) is only usable after a successful handshake. If an attacker spoofs other handshake messages, the attacker may block a handshake from being successful (even if the time window for that is very small and so it would be a rather expensive attack). Using a CID would enable an implementation either to handshake even when addresses are changing during the handshake, or to protect the handshake slightly more (not supporting address changes but) use the CID to validate that a handshake message belongs to this handshake. I would leave the chosen strategy to the implementation, enabling implementations to flexibly choose the strategy depending on the current handshake load (means: supporting address changes in handshakes as normal operation, but switching dynamically to the more protective one if a certain relative number of handshakes fail.) My current work in progress: URL already contains this feature as an experimental extension. It's easy to be added to a cid implementation.\nSection 4.1.2 describes an attack where an attacker injects traffic that cannot be correctly processed due to a MAC failure. The reason why MAC failures in DTLS do not lead to a connection termination (unlike in TLS) is that they are much easier to mount: an attacker only needs to know the source and destination IP and port. Section 4.2.1 talks about the cookie exchange, which has been added to prevent amplification attacks against third parties. I would argue that these two attacks are very different from what you are trying to tackle.
You would like to add resiliency against on-path (?) attacks sending spoofed handshake messages. I cannot see how the CID provides that extra protection. I understand the use case of address changes during the handshake. The question is whether this is actually something real. If the IP address & port changes during the handshake then this is indeed annoying, but how often does this happen?\nI mentioned 4.1.2 and 4.2.1 just to sum up the current protection mechanisms. The attack I consider is off-path. E.g. using PSK, someone spoofs a CLIENTKEYEXCHANGE with the identity \"ghost\". Now it's hard for the server to differentiate the wrong from the right CLIENTKEYEXCHANGE with identity \"me\". With the CID in the handshake, spoofing into the handshake will be much harder. I don't assume that such attacks will be easy or happen frequently. But assuming something as a \"theft detection\" for something valuable, it may be used to block the reporting message.\nI don't understand why we'd need to enable CID during handshake, and in particular on Finished? Any sane handshake will complete well before any NAT rebinding has a chance to mess with the path (note that in-flight handshake messages refresh the port bindings...). What kind of use case would that address? Is it connection migrations while handshaking? I'm not sure the \"increased robustness against spoofing\" argument is a very strong one: the handshake is either robust as-is or we are in trouble :-) Also, would that imply reconsidering the security analysis of the handshake? My preference instead would be to send tls12_cid records only after the handshake has completed successfully, i.e. on the first application_data.\nFMPOV: protects in the long period before the handshake. With the cookie you can no longer spoof CLIENTHELLOs and trigger heavy functions. protects in the long period after the handshake. With the MAC filter it's also not possible to spoof messages in that period (or even repeat them). But in the short period (a few seconds) of the handshake itself, I couldn't find a straightforward method to distinguish between the right inner handshake messages and spoofed ones. Using the CID would offer a similar protection for those inner handshake messages as the cookie does for the CLIENTHELLO. I don't assume that such attacks will happen too frequently. But I could assume, if someone has a high interest, such an attack may be used to \"deny\" a handshake of a certain peer for a certain time period, e.g. \"theft detection\" for one device may be blocked for a couple of minutes. Only because the CID would offer protection against this \"for free\", if used during the handshake, I started to try to introduce that.\nLet me admit that the attack on the inner handshake is not likely. Some implementations seem to use the 4.1.2 mechanism even in epoch 0; that may help, but introduces other risks (attack possibilities for the inner handshake). FMPOV, currently you would need too much implementation knowledge to execute such an attack.
But, though in my opinion using the CID within the handshake offers a protection \"for free\", I think it's worth considering.\nSorry, I'm not sure I understand which kind of extra protection we'd get: when handshaking, the CID is spoofable as much as the source address in the UDP packet.\nAnother argument for not changing the wire image of the record layer during the handshake phase is possible middlebox interference.\nThe cookie of 4.2.1 is also spoofable, but for successful spoofing it must be the right cookie for that time period and address. The same applies to the CID. If combined with the address, then it protects, because a spoofer must know the CID. (And sure, it's visible and .. but the cookie is also visible. The point is also the time period when this CID could be used for spoofing. And that's very short, comparable with the short valid time period of the cookie.)\nI'm still confused, could you characterise precisely your attacker in terms of its goal and position in the network? Thanks!\nI don't want to change the wire image more than this draft already does! With outer type and inner types nothing additional to the already defined additions is introduced, except that it is used already within the handshake. If you want, I can update my wip URL and document how to use a CID in the handshake. Then you can see in wireshark how it works.\nWith outer type tls12_cid(25) and inner types change_cipher_spec(20), alert(21), handshake(22) nothing additional to the already defined additions is introduced\nYep. That's the trouble. Most \"enterprise security\" middleboxes want to be able to look into the handshake to determine whether a connection is OK according to their policies. After the handshake completes and the connection is considered alright, they just act as blind packet forwarders. To do the handshake inspection they depend on the established wire image of the TLS/DTLS protocol. If we introduce the ability to change the wire format during handshake (as opposed to the post-handshake phase only) we might end up impairing clients that sit behind those boxes.\nGoal: Deny successful handshake of a selected peer. Position: off-path, but knows the peer's address and the interesting time period. Something missing?\nIt seems to me that a \"list of typed cid\" (see ) may be used to enable a peer behind such a box to inform the other not to use the CID during the handshake. types: \"plain\" - cid only after handshake \"plain+\" - cid also during handshake\nJust for my personal interest: can you name a middlebox which supports DTLS?\nSorry, now you triggered me: Why should those middleboxes forward my tls_cid record when my address has changed, but block my tls_cid record when it contains a handshake? If the middleboxes are unaware of this draft but forward my tls_cid record even with a changed address, I would assume they forward my tls_cid record also with an inner handshake message.\nDeny successful handshake of a selected peer. Position: off-path, but knows the peer's address and the interesting time period. OK, thanks very much. Frankly, the combination of perfect timing as well as the randomization provided by the source port makes the attack window pretty narrow. On the other hand, allocating state too early (i.e., allocating CIDs without strong commitment from the client side) opens up an interesting amount of attack surface on the server side.\nThis is a tricky question. Boxes like typically have scripting capabilities which give their admins lots of leeway.
I do not build nor operate any of these so my direct experience is ~0, but speaking with the Symantec guys exactly about how to modify the DTLS wire image to signal the presence of CID in a middlebox-tolerant way, their feedback was \"to be sure, just leave the handshake alone as much as possible\"... I have no reason to doubt their assessment, also given the ways TLS 1.3 was hit in this same space.\nAbove I wrote nothing else. ??? what's that ??? Why should the use of cid in the inner handshake messages allocate anything additionally? The cid is provided in the CLIENTHELLO and/or SERVERHELLO; it's currently just not sent in the inner handshake messages. But there is not more or less to allocate whether they are used with the other handshake messages or not.\nI also have no reason to doubt them, but I'm very unsure if an address change will then work at all with such boxes. I would really prefer to test such an address change. I couldn't find any hint that that box supports DTLS at all.\nSo the critical issue seems to be: with TLS there is bad experience with changing the records; with DTLS this experience seems to be missing. Some expect it to be similarly bad. I unfortunately suspect it to be either a real disaster (address change not working at all), or not that hard. Without boxes to test and/or people with DTLS-box experience, it will be hard to find a solution. So, try to use the mailing list to get more feedback?\nSorry, I fail to understand your argument. What to do with an unknown 5-tuple vs what to do with an unknown C-T in a context where you expect to understand the semantics of the flow seem orthogonal to me.\nI don't really understand the above sentence so I know a reply is quite risky :-) Anyway, what I wanted to say is that if you want to use CID during the handshake in any meaningful way, it means the party requiring it (e.g., server) needs to allocate state internally to be able to check it on receive. If this allocation is triggered only by one (or two) CH(s), it's pretty easy to mount a \"CID exhaustion\" attack against the server. Requiring the client to complete the handshake before promoting a \"2-tuple\" connection into a \"CID\" connection makes the attack much more demanding and therefore a lot less likely.\n(some overlap in the discussion :-) so I updated my comment with the question above.) Try to explain it: Which rules should a box apply to a tls_cid record? It's only valid after a successful handshake? But how should such a record then be assigned to that successful handshake? Either by knowing the CID (which requires being aware of this draft), or much weaker by the address, but then it will not work with address changes.\nOK, so you don't allocate something on the server side for the CID in the CLIENTHELLO until the handshake is finished? Or you send a CID in the SERVERHELLO without allocating it before the handshake is finished? Maybe I'm too focused on my implementation, but without another implementation which shows what you wrote, it's also hard for me to understand your comments.\nAt least, I think you target the CID the server side is using. How is it ensured that the CID in the SERVER_HELLO is unique, at least for all established connections and all ongoing handshakes which are not terminated at that point in time?\nIn my message above s/CID exhaustion/resource exhaustion/ sorry...
And yes, you can certainly separate the act of reserving the CID when you decide to send it in the CH from creating state associated with the CID-based session lookup paraphernalia. For example you could use a big enough CID space (say 8 bytes) and reserve your CIDs by means of a monotonically increasing function. And at a later time (my proposal is after the handshake is completed successfully) you grab the memory needed for the CID-based session lookup stuff...\nAnother point is that we'd put more bytes on the wire, increasing the risk of message fragmentation with small MTUs in a situation, the handshake, where the timing of the messaging is such that NAT timeouts are very unlikely and therefore the 5-tuple based demuxing is more than adequate.\nI can't see a relation of \"resource exhaustion\" with the usage of the CID in the handshake.\nThat's right. In my experience, \"small MTUs\" are rarely used. The highest risk is caused by a larger CLIENTHELLO, which then gets fragmented (see \"resource exhaustion\"). Though CID makes the CLIENTHELLO larger, it's more an argument that CID may not be supported on such a path. FMPOV, an \"optimal support\" for peers with a very narrow link should be moved into another discussion.\nThe part with having the choice to either \"protect the handshake\" or \"support address changes\" should just leave it to the implementation which strategy is chosen. I would go for a configurable solution, which enables a \"dynamic protection\" if too many handshakes are ongoing.\nI had a chat with Achim and he agreed that we can leave the feature of using CIDs for handshake messages for DTLS 1.3. This will allow us to get this work done faster. DTLS 1.3 encrypts handshake messages earlier and hence the handshake itself already utilizes CIDs.\nNAME Thanks for your time!\nHi Hannes, Achim, Good, thanks both for taking the time to sort this out. I think it's a very reasonable choice to not introduce scope inflation at this (late) stage. One thing we still need to do though is tighten up a bit the language around when exactly the CID is put on the wire. Something like: \"Once a non-empty CID is successfully negotiated, it MUST be sent on the XYZ message and on all subsequent records of the connection\" would remove all the uncertainty I guess. I personally have an inclination for XYZ to be the first application data, mainly because I'd like to commit any resources only after I have full confirmation that the client is a good one. However, I don't care whichever choice we end up with, as long as it's written down precisely in 2119 language.\nI'm not sure what \"commit\" includes or means exactly. To verify the FINISH you will need the \"keying material from the crypto context (derived from the specific master_secret)\". According to URL a good estimation will be about 128 bytes. In your wording, you use them before you \"commit\" them? So adding a couple of bytes for the CID to that storage should not make a difference, or?\nI mean making a certain CID \"used\" (not just reserved) and making room for the CID in a lookup table. But, as I said, I won't be unhappy if the \"first CID record\" happens to be something different than the application data.\nFrom my side the specification is also free to choose when the peer should start to use the tls_cid records.\nNAME I'm still not sure if I really understand the requirements of your approach. My current feeling is that you store all the keying and cid stuff in a \"context\" and provide an address:port lookup table for that stored context.
Once finished, you then want to store this context into a cid lookup table. If that's the plan, cid usage even in the handshake doesn't matter. For epoch 0 records, just still use the address:port lookup and verify the record cid against the stored one. I personally prefer a \"relaxed definition\", but your case will not be wrong. It just doesn't support address changes during the handshake. I guess, after a period of experience, most users will select that option and use it to secure the handshake without supporting an address change. But that would be left to future experience. After the call with NAME I think it's better not to extend this draft with that idea and instead focus on DTLS 1.3. Therefore I removed the code for this extended usage from my wip and closed this issue. In the meantime, it's really hard to collect all statements over several issues. So, if you are still interested in using the RFC6347 record for the finish and starting with the tls12_cid records after that, I think opening a new issue (or PR) would make it easier to pick up this idea.\nIf this feature is negotiated on, then the MAC calculation changes, even if no connection ID is sent, because of . This needs to be clearly sign-posted.\nMy preferred interpretation is: uses the original RFC6347, section 4.1 record format URL indicated by the already used content type ids. That includes also the unchanged MAC.\nCreated a pull request to add clarifying text: URL\nNAME it would be great if you could explain whether you want to use the new record type with an empty CID in order to obfuscate the DTLS payload length. Please note that currently the specification of this draft turns more into a \"strict\" usage (see issue ) which will not support the new record type with an empty CID.\nInteresting question. Yes, I would like to be able to use the new record type to pad records, but I don't think that it is an especially important consideration. If the design is easier without that, and I suspect that it is, then I am OK with not doing that. A one-octet connection ID is not that much of a burden to pay to get this feature.
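As an illustration of the two-phase idea suggested a few comments up (cheap monotonic reservation, deferred lookup-table allocation), here is a minimal sketch; the class and method names are invented for this example, not taken from any implementation.

```python
import struct

class CidAllocator:
    """Sketch: reserving a CID is just bumping a counter, while the
    CID-based lookup state is only allocated once the handshake has
    actually completed (mitigating the resource-exhaustion concern)."""

    def __init__(self):
        self._next = 0      # monotonically increasing reservation counter
        self._by_cid = {}   # populated only for completed handshakes

    def reserve(self) -> bytes:
        """Cheap: called when we decide to offer a CID in the hello.
        An 8-byte space never collides within the counter's lifetime."""
        cid = struct.pack("!Q", self._next)
        self._next += 1
        return cid

    def commit(self, cid: bytes, context) -> None:
        """Expensive part, deferred until the Finished messages verify:
        promote the 2-tuple connection to a CID-addressable one."""
        self._by_cid[cid] = context

    def lookup(self, cid: bytes):
        return self._by_cid.get(cid)
```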
We should at least: document how to ensure a new CID is used during path validation in 1.3 flag that 1.2 has no way to work around linkability because CID have the same lifetime as the session"} {"_id":"q-en-dtls-rrc-a301d8ad3868bbd1da3d6a731144440ee49aa382def826f4ae319cffbda797db","text":"/cc NAME Signed-off-by: Thomas Fossati\nIn this second paragraph, does this mean T=1s or RTT=1s ?\nFMPOV: (because that refers to the default timeout of DTLS 1.2 RFC6347)"} {"_id":"q-en-dtls13-spec-1684f288f41e73a766c2f8f8cd0a0c0761b5304183ffea2bf5e136d6bea9a226","text":"Attempt to address though I'm not entirely happy with this text. Having gone through the existing text looking at \"vulnerable\" and \"susceptible\", it doesn't seem in particularly bad shape, so my comment there may have been stronger than needed.\nCopied from EKR's repo Ben Kaduk wrote: \"We talk in a couple places about datagram protocols being “vulnerable” or “susceptible” to DoS attacks, which leads me to at least partially read that as meaning that the protocol’s own service will be disrupted; as we know, this is not the whole story, as the reflection/amplification part can facilitate DoS attacks targeted at other services/networks. So perhaps some rewording is in order. \"\nNAME I would welcome a PR here.\nNAME did this,"} {"_id":"q-en-dtls13-spec-cab61e949a5ef26427246668f320dc5c039603672154c4d296f99edde163691f","text":"As proposed on list.\nHanno Becker points out that the current text does not cryptographically protect the CID when we have coalesced packets. I created PR to add this feature, but I think instead we should simply require that all packets have CIDs once the CID is established. This seems like a net reduction in complexity. There would be two changes here: Re-emphasize the point already in the DTLS CID spec that you abort if a CID is expected and none is found. Remove the text about allowing implicit CIDs.\nNAME NAME\nThis was resolved by . Please re-open if that's not the case!"} {"_id":"q-en-dtls13-spec-5c131a325c54c0458a1923be5a1142d1e93dc7f55b8674d1c09c6e41df585949","text":"NAME NAME\n(Probably want to fix the typo for \"Fixes\" in the PR title, but the content looks good to me)\nDTLS 1.3 does not really change this aspect compared to earlier DTLS versions. We could reference a number of the other TLS WG documents that attempt to reduce the size of the certificate message, such as Client Certificate URL Cached Info Certificate Compression Following the guidelines in RFC 7925 Other certificate types It may be possible to use the fragmentation mechanism to send one certificate after the other in the certificate chain. Is this what you had in mind?"} {"_id":"q-en-dtls13-spec-2a1ab1bf25c1b53cf0756a9a81ba1d94722ac5ddb2185bc0b4ab015698679814","text":"This document has been reviewed as part of the transport area review team's ongoing effort to review key IETF documents. These comments were written primarily for the transport area directors, but are copied to the document's authors and WG to allow them to address any issues raised and also to the IETF discussion list for information. When done at the time of IETF Last Call, the authors should consider this review as part of the last-call comments they receive. Please always CC EMAIL if you reply to or forward this review. TSV-ART Review of draft-ietf-tls-dtls13-41 Reviewer: Bernard Aboba Summary: The timeout and retransmission scheme looks workable for common cases, but could use some refinement to make it more robust. 
Technical Comments Handling Invalid Records Unlike TLS, DTLS is resilient in the face of invalid records (e.g., invalid formatting, length, MAC, etc.). In general, invalid records SHOULD be silently discarded, thus preserving the association; however, an error MAY be logged for diagnostic purposes. [BA] How does silent discard of invalid records interact with retransmission timers? Implementations which choose to generate an alert instead, MUST generate error alerts to avoid attacks where the attacker repeatedly probes the implementation to see how it responds to various types of error. Note that if DTLS is run over UDP, then any implementation which does this will be extremely susceptible to denial-of-service (DoS) attacks because UDP forgery is so easy. Thus, this practice is NOT RECOMMENDED for such transports, both to increase the reliability of DTLS service and to avoid the risk of spoofing attacks sending traffic to unrelated third parties. [BA] \"this practice\" refers to \"generate an alert instead\", correct? Timer Values Though timer values are the choice of the implementation, mishandling of the timer can lead to serious congestion problems, for example if [BA] Saying \"timer values are the choice of the implementation\" seems odd, because it is followed by normative language. I would delete this and start the sentence with \"Mishandling...\". many instances of a DTLS time out early and retransmit too quickly on a congested link. Implementations SHOULD use an initial timer value of 100 msec (the minimum defined in RFC 6298 [RFC6298]) and double the value at each retransmission, up to no less than 60 seconds (the RFC 6298 maximum). Application specific profiles, such as those used for the Internet of Things environment, may recommend longer timer values. Note that a 100 msec timer is recommended rather than the 3-second RFC 6298 default in order to improve latency for time- sensitive applications. Because DTLS only uses retransmission for handshake and not dataflow, the effect on congestion should be minimal. Implementations SHOULD retain the current timer value until a message is transmitted and acknowledged without having to be retransmitted, at which time the value may be reset to the initial value. [BA] Is it always possible to distinguish a retransmission from a late arrival of an original packet? This seems like it could result in wrongly resetting the timer in some situations. Large Flight Sizes DTLS does not have any built-in congestion control or rate control; in general this is not an issue because messages tend to be small. However, in principle, some messages - especially Certificate - can be quite large. If all the messages in a large flight are sent at once, this can result in network congestion. A better strategy is to send out only part of the flight, sending more when messages are acknowledged. DTLS offers a number of mechanisms for minimizing the size of the certificate message, including the cached information extension [RFC7924] and certificate compression [RFC8879]. [BA] How does the implementation know how much of the flight to send? Not sure how prevalent large certs are for DTLS (e.g. compared with the self-signed certs of WebRTC), but in EAP-TLS deployments, large certs have caused problems. 
The EAP-TLS cert document draft-ietf-emu-eaptlscert cites some additional mechanisms for reducing certificate sizes, such as draft-ietf-tls-ctls and [RFC6066] which defines the \"clientcertificateurl\" extension which allows TLS clients to send a sequence of Uniform Resource Locators (URLs) instead of the client certificate. Alert Messages Note that Alert messages are not retransmitted at all, even when they occur in the context of a handshake. However, a DTLS implementation which would ordinarily issue an alert SHOULD generate a new alert message if the offending record is received again (e.g., as a retransmitted handshake message). Implementations SHOULD detect when a peer is persistently sending bad messages and terminate the local connection state after such misbehavior is detected. Note that alerts are not reliably transmitted; implementation SHOULD NOT depend on receiving alerts in order to signal errors or connection closure. [BA] For the fatal alert case, it does seem like retransmission would be a good idea; otherwise the peer can be left hanging. Section 7.1 \"Disruptions\" such as reordering do not affect timers, correct? ACKs SHOULD NOT be sent for these flights unless generating the responding flight takes significant time. What is \"significant time\"? Editorial Comments (NITs) Section 2 The reader is also as to be familiar with [BA] \"as\" -> \"assumed\" Section 3 The basic design philosophy of DTLS is to construct \"TLS over datagram transport\". Datagram transport does not require nor provide reliable or in-order delivery of data. The DTLS protocol preserves this property for application data. Applications such as media streaming, Internet telephony, and online gaming use datagram transport for communication due to the delay-sensitive nature of transported data. The behavior of such applications is unchanged when the DTLS protocol is used to secure communication, since the DTLS protocol does not compensate for lost or reordered data traffic. [BA] While low-latency streaming and gaming does use DTLS to protect data (e.g. for protection of WebRTC data channel), telephony and RTC Audio/Video uses DTLS/SRTP for key derivation only, and SRTP for protection of data. So you might want to make a distinction. Section 3.1 Note that timeout and retransmission do not apply to the HelloRetryRequest since this would require creating state on the server. The HelloRetryRequest is designed to be small enough that it will not itself be fragmented, thus avoiding concerns about interleaving multiple HelloRetryRequests. [BA] I would add \"For more detail on timeouts and retransmission, see Section 5.8.\" Transport Layer Mapping DTLS messages MAY be fragmented into multiple DTLS records. Each DTLS record MUST fit within a single datagram. In order to avoid IP fragmentation, clients of the DTLS record layer SHOULD attempt to size records so that they fit within any PMTU estimates obtained from the record layer. [BA] You might reference PMTU considerations described in Section 4.4. Post-handshake client authentication Messages of each category can be sent independently, and reliability is established via independent state machines each of which behaves as described in Section 5.8.1. For example, if a server sends a NewSessionTicket and a CertificateRequest message, two independent state machines will be created. 
As explained in the corresponding sections, sending multiple instances of messages of a given category without having completed earlier transmissions is allowed for some categories, but not for others. Specifically, a server MAY send multiple NewSessionTicket messages at once without awaiting ACKs for earlier NewSessionTicket first. Likewise, a server MAY send multiple CertificateRequest messages at once without having completed earlier client authentication requests before. In contrast, implementations MUST NOT have send KeyUpdate, NewConnectionId or RequestConnectionId [BA] \"send\" -> \"sent\" Example of Handshake with Timeout and Retransmission The following is an example of a handshake with lost packets and retransmissions. Note that the client sends an empty ACK message because it can only acknowledge Record 1 sent by the server once it has processed messages in Record 0 needed to establish epoch 2 keys, which are needed to encrypt to decrypt messages found in Record 1. [BA] \"encrypt to decrypt\" -> \"encrypt or decrypt\"? Section 7.3 In the first case the use of the ACK message is optional because the peer will retransmit in any case and therefore the ACK just allows for selective retransmission, as opposed to the whole flight retransmission in previous versions of DTLS. For instance in the flow shown in Figure 11 if the client does not send the ACK message [BA] Figure 11 is the DTLS State Machine. Are you referring to another figure? The use of the ACK for the second case is mandatory for the proper functioning of the protocol. For instance, the ACK message sent by the client in Figure 13, acknowledges receipt and processing of record 4 (containing the NewSessionTicket message) and if it is not sent the server will continue retransmission of the NewSessionTicket indefinitely until its transmission cap is reached. [BA] Do you mean \"maximum retransmission timemout value\"?\nAddressed in"} {"_id":"q-en-dtls13-spec-49e1629cf2ed317a26fcca6c5093190790e150affbe38187f8b7b064cec9832a","text":"Yet another attempt to This builds on Chris's PR but just omits the epoch entirely from the AEAD nonce calculation. This is more consistent with TLS and doesn't require special case reasoning about why we only need to build in the bottom 16 bits. This relies entirely on the keys being different. Needs analysis.\nThis seems promising.\nNAME this LGTM, but there's a conflict. Can you resolve prior to merging? NAME are you OK with the new rationale, and text that allows it to be relaxed in the future?\nIf I have understood and calculated correctly DTLS 1.3 limits the number of packets in a AES-GCM connection to 2^40.5 compared to 2^64 in DTLS 1.2. This is quite a severe limitation that is not mentioned explicitly. The AEAD limits in themselves are not a problem, but the combination of AEAD limits (2^24.5) and a small epoch (2^16) is a problem. I think this is major problem in some use cases of DTLS. E.g. most 3GPP 5G use cases where DTLS or DTLS/SCTP is used for semi-permanent connections (lasting years) and where the terminating the connection cause severe disturbances. I did not find any discussion about this problem (but I could easily have missed it) It is known that the limit of 2^48 packets in SRTP can be reached and 2^40.5 is significantly smaller than this... If the document was not in such a late stage, I would have suggested to increase the size of the epoch to 32 bits..... 
I think this and the other problems DTLS 1.3 has with semi-permanent connections has to be solved sooner than later.\nNAME thanks for filing this. I hear you about the late stage, but IIRC we never represent the entire epoch on the wire in any case, so it seems like just changing it to 32 bits would be pretty easy, so why not do it now?\nHow do the chacha20 numbers compare to the AES-GCM ones?\nNAME I don't think (D)TLS 1.3 puts any limits on ChaCha20 so my understanding is understanding is that the total number of packets can be protected in the current drafts is ChaCha20 2^64 (2^48 2^16) AES-GCM 2^40.5 (2^24.5 2^16) AES-GCM 2^39 (2^23 * 2^16)\nNAME I am all for changing this now. I made a PR that change epoch from uint16 to uint32. As you say, it seems like a pretty easy change to do. Is uint32 the right size?\nUnless you think we should do uint64. NAME NAME what would we have to do to make this change?\nURL Magnus Westerlund thinks uint64 would be a safer bet.Michael Tüxen suggest to wrap the uint16 epoch instead. That might be an easier solution. I don't see a direct problem in DTLSPlaintext, ACK message, or recordsequencenumber with a uint16 epoch that is allowed to wrap like: 4, 5, ... 2^16-1, 4, 5, ... 2^16-1, 4, 5, ...\nNAME NAME NAME since it's through the IESG we would need to at least raise that this change was made after they had approved it. I suspect that doing some kind of 2-week heads up to the WG/IESG that we want to make the change before landing it. But, if we do that or something more drastic that's really up to NAME\nI want to point out that allowing wrapping per may not be the best solution if one considers to use the DTLS epoch as part of the input to the TLS exporter as suggested in . Then one want an input that doesn't wrap, and then to ensure that one doesn't practically run into any limits, then an uint64 would be better. An unit32 appears to be sufficient for a quite extreme use case with minimal messages size (which is one DTLS record with 4 byte user message) and high bandwidth (10 Gbps) sustained over a period of more than 5 years before the epoch would wrap. However, to my understanding the full epoch value are so rarely included in transmitted DTLS messages that its size have minimal implication on the overhead that it is not worth saving 4-bytes to be certain to have eliminated any practical limitation.\nI think a 2-week WGLC and notifying the IESG will suffice.\nFYI. We have now run into quite many problems with the interworking of semi-permanent DTLS/SCTP connections (lasting years) and DTLS 1.3: No Diffie-Hellman rekeying No post-handshake server authentication No rekeying of the exporter_secret 2^40.5 packets limit 2 minute MSL requirement 2^38.5 byte SCTP user message size limit Our initial plan was to just replace DTLS 1.2 with DTLS 1.3, leave DTLS/SCTP as is, but as the problems keep adding up we are now startnig to consider quite big changes to DTLS/SCTP 1.3 to make this work: URL Using new DTLS connections instead of KeyUpdate could potentially address the first five, but is a significant change.\nHi, Based on the problems long-lived DTLS/SCTP had with DTLS 1.3 (as well as DTLS 1.2 without renegotiation and respecting AEAD limits) we restructured DTLS/SCTP (RFC6083bis) to use parallel connections instead of relying on renegotiation and post-handshake messages. We think this quite easily solves all the problems we had without requiring any updates of DTLS 1.3. It also makes DTLS/SCTP less dependant on version specific DTLS mechanisms. 
It will hopefully work without changes with any future DTLS 1.4. URL We can now firmly RECOMMEND DTLS 1.3 which I think is very important. 5G is to a large degree replacing or complementing IPsec with TLS, DTLS, and DTLS/SCTP. I will do my best to transition 3GPP to DTLS 1.3 asap. FYI, 3GPP DTLS/SCTP associations have very long lifetimes (can be years), strive for five nines (99.999%) availability and cannot be teared down without causing major disruptions. With such long lifetimes we think mutual reauthentication, rekeying of the SCTP-AUTH key, and forcing attackers to dynamic key exfiltration [RFC7624] are required features. I still think should be done. 2^40.5 packets is quite limiting.\nSeems OK.Thanks -- this LGTM. We'll need to do a brief consensus call given the type of change."} {"_id":"q-en-dtls13-spec-fb3ad54c2ea0d8ebe209664b60ddb087c776aaa8df4785251961477d07f9f78a","text":"As far as referring to the snake case in the main body, I went and looked at RFC 8446 and it does not refer to the snake case. I tend to think that if we make the change in 5.2 and A.2, that should be enough breadcrumbs for people to figure it out. I put them in the same order as they are in the registry and moved up newsessionticket. Triple check this change please.\nThis I-D registers two TLS HandshakeTypes. All of the other fields in the registry use snake_case."} {"_id":"q-en-dtls13-spec-4381dad45da18f45cf8010565e93228ee58bd31a88f549e5b75aa743f214b692","text":"Right now, my NSS patch for compatibility mode in TLS also includes changes to DTLS. Those changes include the ServerHello format changes, but NSS will ignore the session_id and never send a ChangeCipherSpec from either client or server.\nNote that we probably still want to ignore CCS in DTLS - DTLS discards things that it doesn't want to see - this is about generating CCS."} {"_id":"q-en-dtls13-spec-b7b0f024b7a9cc4a0aaf6d8ae7b9d4eaeb96147a5fe4697c9700bb356b17087c","text":"Addressing issue raised by Martin at URL\nShould the server reject the ClientHello if it appears as though the client is responding to a HelloVerifyRequest? What if it also contains a cookie extension?\nI don't think the server should reject a ClientHello if it thinks the Client should provide a cookie previously sent with the HelloVerifyRequest. This would require the server to keep state and to implement more complex logic in the server-side. If the ClientHello contains a cookie in the ClientHello.legacycookie field in the DTLS 1.3 field then IMHO an \"illegalparameter\" alert has to be sent. I created a PR to add a note about this: URL There is, however, the question why whether we should omit some of the legacy fields for use with DTLS 1.3 (not in the ClientHello because it would be needed for backwards compatibility) but for the ServerHello/HelloRetryRequest.\nFixed"} {"_id":"q-en-dynlink-c60cb4d22b3d0fc0e790f4866d7f825a53771acd559666e4d3c74c3761e5a0d3","text":"NAME NAME I've created a pull request to incorporate the exec binding method in the draft. This brings it in line with and also now has a reference to the draft itself. It would be great if you can quickly review the proposed text and see if you are ok with it\nNow merged, after review comments received from NAME to highlight differences between the Push and Exec binding methods"} {"_id":"q-en-dynlink-2019adef54feecc9e8a7945d5fc6048498c2168bbc36bc538acd322d853837d3","text":"Signed-off-by: David Navarro For the reasons stated in .\nHi, OMA LwM2M is using Minimum Period (pmin) and Maximum Period (pmax) attributes. 
The DynLink draft has this requirement: In its version 1.0, LwM2M had a different version of this requirement in the form: Thus having pmax equal to pmin was allowed. In LwM2M 1.1 we aligned this requirement with the DynLink one, making pmin equal to pmax forbidden. However, it was argued that having pmin equal to pmax is a valid use case if the client wants the notification to be sent exactly every N seconds. Is there a strong opinion on forbidding pmax to be equal to pmin, or are people open to changing the DynLink requirement? Regards\nNAME After the discussion at IETF 109, I think we can agree that there are no strong opinions against allowing pmax to be greater than or equal to pmin in the draft. However in order to avoid any ambiguity in interpretation, do we want to reflect the discussion so far, so that the draft has pmax == pmin specifically for this use case, i.e. when this condition is met, then the CoAP server sends (is it a MUST/SHOULD/MAY?) the resource representation every N seconds? Also how does the client then cancel this request?\nClosing this issue, with the draft allowing pmax to be equal to pmin"} {"_id":"q-en-echo-request-tag-64f273f72342570165cba971b6da6ecb1f205c517364756dcb80064deabad9d3","text":"\"The server MAY include the same Echo option value in several different responses and to different clients.\" Clarification based on Jim's review."} {"_id":"q-en-edhoc-e0b78eb3a15b63b372b0be65ee24376df04b123487e916b62ff8df9b69256ecf","text":"Looks good.\nThe current draft has the following text in several places: \"If any verification step fails, the Responder MUST send an EDHOC error message back, formatted as defined in {{error}}, and the protocol MUST be discontinued.\" I am not sure this is a good idea. This means that the availability of EDHOC will be low. An attacker can e.g. send a single byte as message_4 and it would make EDHOC shut down immediately. I think we need to soften this and give the implementation some more choice here. Are there constrained radio protocols where noise could be mistaken for an actual message and forwarded to EDHOC, or do all constrained radios have strong enough CRCs to make sure that more or less never happens?\nI just changed the following text OLD: If the Initiator previously received from the Responder an error message with error code 1 (see Section 6.3) indicating cipher suites supported by the Responder which also are supported by the Initiator, then the Initiator SHALL select the most preferred cipher suite of those. NEW: If the Initiator previously received from the Responder an error message with error code 1 (see Section 6.3) indicating cipher suites supported by the Responder which also are supported by the Initiator, then the Initiator SHOULD select the most preferred cipher suite of those (note that error messages are not authenticated and may be forged). This made me think: I don't think the group has discussed whether some error messages should be authenticated. Some error messages to message_2 could include a MAC to prove that the error was sent by the same party that sent message_3. Error messages to message_3 and message_4 could be authenticated. I am uncertain if this is worth specifying. At least not as long as there are easier ways for an attacker to end the protocol. I think what needs to be done is to specify that a processing error does not need to discontinue the protocol; the received message may have been an attack.
Receiving an error does not need to discontinue the protocol; the received error may have been an attack.\nfrom 20210422 interim: CB perhaps wisely said: \"Use cases exist for multiplexing over same port/URI, so applications should normally sort/demultiplex traffic before start interpreting as EDHOC, which implies treating invalid messages that get to EDHOC code as an error or attack\"\nYes, my understanding is that we only need to be concerned about attacks where an attacker sends a packet crafted so that it will be processed by EDHOC (which is not hard).\nI made a pull request addressing this issue Added text on demultiplexing. Changed processing so implementations are not mandated to shut down the session if an attacker sends a few bytes to the EDHOC resource / port. Added more security considerations on denial of service.\nWriting this PR made me think about whether we want to mandate that error messages are always fatal, as well as about downgrade protection. The \"Wrong selected cipher suite\" error is fatal because it is sent in response to message_1. You could consider similar types of error sent in response to message_2 and message_3. E.g. ID_CRED format not supported, CRED format not supported, or X.509 public key algorithm not supported. [BBFGKZ16] defines the following theoretical downgrade protection for a session: The cryptographic parameters should be the same on both sides and should be the same as if the peers had been communicating in the absence of an attack. But this definition has very little to do with practical security. In practice it does not matter if an attacker can influence the chosen parameter in the current session or a following session. I am still confused regarding the purpose of the applicability statement. Is it an informal section describing things the parties need to both support to make the protocol work, or is it a normative security section relevant for policy and downgrade? If we want downgrade protection over multiple sessions, the parties need to agree on or securely negotiate everything. Downgrade protection for a session does not make much sense to me if an attacker can force the parties to set up a second session with lower security.\nDo we need to align the terminology? We use both \"session\" and \"instance\" in the draft.\nYes, I just changed \"protocol\" to \"session\", not remembering that the draft uses \"instance\" in other places.\nWe can resolve this later, I opened . Unless there are other comments we will soon merge this.\nThe PR has been discussed and merged. Closing\nThe current specification mandates sending error messages upon certain failures. But in cases when message processing fails because no state has been allocated or no connection identifier has been found, a more appropriate action would be to just silently drop the message.\nI think the MUST (if possible) is good. Debugging without any error messages is not nice. We know from other protocols that it is good to mandate error messages. It is kind of obvious that you cannot send an error if you don't know where/how to send it. Don't know if that has to be pointed out....\nMight be good to not send errors if you are under a DoS attack.\nThis issue is addressed by pull request that also addresses availability\nThe PR has been discussed and merged.
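The dispatch policy discussed above can be condensed into a small sketch. This is not from the draft; the `session` object and its methods are hypothetical placeholders for whatever state an implementation keeps:

```python
from typing import Optional

def handle_edhoc_message(raw: bytes, session, under_dos: bool) -> Optional[bytes]:
    """Dispatch one message received on the EDHOC resource/port.

    Invalid input is treated as possible attack traffic: it is dropped (or
    answered with an error when a plausible return path exists) without
    tearing down established session state.
    """
    try:
        msg = session.parse_and_verify(raw)       # hypothetical verification step
    except ValueError:
        if under_dos or not session.has_return_path():
            return None                           # silently drop, send no error
        return session.build_error(diagnostic="malformed or forged message")
    return session.process(msg)                   # normal protocol processing
```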
Closing"} {"_id":"q-en-edhoc-dd68cae5bf5de723c95db588a8e59b11f8f0e635ba0d15e8747d14993bb499ef","text":"(Last two commits on wrong branch.)\nThe prototype of EDHOC-Exporter() has remained EDHOC-Exporter(label, length), while it should be EDHOC-Exporter(label, context, length) as per Alternative 3 discussed in issue URL Correct?\nThanks NAME your comment is addressed in the latest commit. I merge this now.\nThe text use both the terms \"session\" and \"protocol instance\", would be good to settle for one term.\nJust chose one, I don't think this needs an issue\nIn the security considerations \"session\" makes most sense to me. There is also the term \"connection\". I will make a proposal.\nThe spec should likely forbid multiple calls to the EDHOC-Exporter interface with the same label, in order to prevent the same key being reused. Discussed with NAME and NAME\nGood catch. Yes, we should have some text about that, but it should probably more be a requirement on the application using EDHOC that on EDHOC itself. Two different application should defintly not use the same label. The same application might rederive the same key several times as long as it keep track of its AEAD nonces and replay window. I thought that we had already IANA registry proposed for this, but that is not the case. We should probably have one. If we have a IANA registry the labels get more fixed and it might be that we need a context input parameter as well.\nIf we make a IANA registrer for labels, I think we need to let the application add additional information in some way. Two alternatives: We make an IANA register for label prefixes, and let the application append information to the label prefix. This was recently planned for EAP-TLS (and methods built on EAP-TLS) but was changes because the TLS export labels are not really ment to be labels. This would not change the current exporter interface. We expand the interface with a context parameter. This is similar to the TLS exporter. A problem with the TLS exporter is that some implementations decided to not even support context.\nNAME how is the context parameter intended to be used? Could you give an example? Would this be some sort of \"salt\" when deriving the key?\nDon't register labels, register label prefixes.\nThe motivation for \"context\" or \"label postfixes\" is to increase the flexibility for the application. Restricting the application to IANA registered labels is very limiting. An application might want to use the EDHOC exporter to derive a number of keys and we should encourage that. The alternative is that the application needs to implement its own key derivation, which might be less secure and might be outside of the TEE. I called it \"context\" just because TLS calls the paramater in its exporter context.\n\"Be like TLS\" is of course a good motivation. Technically, a label prefix is not different from a context, it just saves the need to separate prefix and suffix.\n(And of course there is nothing wrong with multiple calls with the same label; the application has a \"right to forget\" :-)\nThe solutions should be equal when it comes to security. The choice should be made based on what is easiest for implementators and apllications using EDHOC. Alt 1. 
We make an IANA register for label prefixes: "OSCORE Master Secret" "OSCORE Master Salt" The Export function stays the same: EDHOC-Exporter(label, length) = EDHOC-KDF(PRK_4x3m, TH_4, label, length) An application is allowed to append things to the label prefix: "OSCORE Master Secret" "OSCORE Master Secret PickleRick" Alt 2. We make an IANA register for labels: label = "OSCORE Master Secret" label = "OSCORE Master Salt" The Export function needs to change: EDHOC-Exporter(label, ctx, length) = EDHOC-KDF(PRK_4x3m, TH_4, [label, ctx], length) An application is not allowed to append things to the label and uses the ctx instead: label = "OSCORE Master Secret" ctx = "" label = "OSCORE Master Secret" ctx = "PickleRick"\nNo major opinion. Alt 2 breaks the test vector for the OSCORE security context. It would be good to get more input.\nAny preference NAME NAME NAME ?\nIf the ctx is considered more convenient, it can always be added in a wrapper on top of EDHOC-Exporter(). (I assume re-use of label+ctx in Alt 2 is equivalent to re-use of label in Alt 1?)\nI prefer to have it explicitly specified. We can pass to the concatenation of and . When is undefined/empty, test vectors are not affected. I guess this is what NAME is referring to?\nlabel is a CBOR byte string, while alt 2 uses a CBOR array. Defining alt 2 as a wrapper seems more complicated in my view.\nWould be good if people stated alt 1 or alt 2, or made a concrete suggestion for alt 3.\nAlt 3 (CBOR sequence) We make an IANA register for labels: label = "OSCORE Master Secret" label = "OSCORE Master Salt" The Export function needs to change: EDHOC-Exporter(label, ctx, length) = EDHOC-KDF(PRK_4x3m, TH_4, label_context, length), where label_context is a CBOR sequence: If you want to keep the test vectors intact, then you could make ctx optional in the CDDL, but not sure how to specify an optional parameter in the interface definition.\nThanks. That is a valid alt 3, but ctx should probably be a bstr\nI don't think keeping the test vectors intact is an important goal at all. Let's decide on what is simplest for a user of the exporter interface.\nSomething like that, yes. Later posts do differentiate the types of "label" and "ctx", and in light of that and the changed test vectors, alt 3 seems clean.\nI am fine with alt 3. I can start making a PR for that unless somebody has different ideas. I don't think alt 1 is a good alternative anymore as it restricts context to being a Unicode string. Which would require some base64 encoding or similar for a general context.\nPlease review alt. 3 as specified in . Unless there are any comments we would like to merge this.\nThis is included in -07 so I close the issue.\nComment by Martin Disch in URL "The draft does a good job of pointing to the right places in RFC 8152, but I still ended up trying to understand another ~100 page document just to implement the few constructs I actually needed." Should we provide more details about COSE constructs, e.g. in appendix A.2?\n(pasting my response from the mailing list here for completeness) Thanks for following up on this. Yes, in my opinion expanding on the COSE constructs would be helpful. But I also sympathize with not wanting to duplicate what is already defined in another RFC (and possibly getting it wrong, as Carsten points out). Considering that there are quite a few implementers already and besides mine you have only received one other similar comment, maybe this isn't an issue for a meaningful majority.
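Returning to the exporter alternatives above, here is a minimal sketch of Alt 3 in Python, using a standard RFC 5869 HKDF-Expand. The exact layout of the KDF info field is an assumption for illustration, not the draft's normative encoding:

```python
import hashlib
import hmac

import cbor2

def hkdf_expand_sha256(prk: bytes, info: bytes, length: int) -> bytes:
    # RFC 5869 HKDF-Expand; note the hard cap of 255 * 32 output bytes.
    assert length <= 255 * 32
    out, t, i = b"", b"", 1
    while len(out) < length:
        t = hmac.new(prk, t + info + bytes([i]), hashlib.sha256).digest()
        out += t
        i += 1
    return out[:length]

def edhoc_exporter(prk_4x3m: bytes, th_4: bytes, label: str, ctx: bytes, length: int) -> bytes:
    # Alt 3: label and ctx travel together as a CBOR sequence in the info.
    label_context = cbor2.dumps(label) + cbor2.dumps(ctx)
    info = cbor2.dumps(th_4) + label_context + cbor2.dumps(length)  # assumed layout
    return hkdf_expand_sha256(prk_4x3m, info, length)

# e.g. edhoc_exporter(prk, th, "OSCORE Master Secret", b"", 16)
```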
In any case, the draft has made great strides already and is in a good state now, so I don't see improvements in this area as an absolute necessity.\nThe two COSE constructs that are used in EDHOC are fairly simple. They don't use the recursive structure or embed . I think it is possible to add some additional clarifications on the structure of the COSE messages in the EDHOC draft without having to copy too much text from RFC 8152. As part of a thesis I built a proof-of-concept implementation of EDHOC in the Rust programming language available at URL The naming of the repository is due to the subject being both EDHOC and OSCORE on embedded devices, but the following refers only to the EDHOC implementation. It's based on draft-selander-ace-cose-ecdhe-14 and as such, sadly, outdated. Since it was just one part of the overall work, I only managed to implement the subset I needed, which is RPK and the signature-signature based scheme (corresponding to what draft-selander-lake-edhoc calls method 0). I also took some other shortcuts, such as not supporting auxiliary data. Fortunately, test vectors were about to be introduced just around the time I was working on it, so I've been able to successfully verify my implementation against them when John shared them with me for some preliminary testing. Since I didn't have any prior experience implementing this kind of software, I would definitely consider it more of an experiment, but it's out there and it works, in case it can be useful to anybody. I'm convinced that EDHOC will be valuable particularly in the context of OSCORE, possibly even beyond that. As for the draft itself, I would like to mention one aspect that I think could be improved from the perspective of an implementer. Part of the promise of EDHOC is that it's simple and lightweight due to reusing COSE. That is certainly the case, but it breaks down somewhat when working on a system where no fully featured library for it is available. This was the case for me and I personally found it difficult to quickly figure out which aspects of COSE I needed to implement. The draft does a good job of pointing to the right places in RFC 8152, but I still ended up trying to understand another ~100 page document just to implement the few constructs I actually needed. I know the authors are aware of this and there has been some effort to inline more information directly in the specification, which I greatly appreciate. It would be very nice to have an entirely self-contained document to work with. But I have to again point out my lack of experience in that area; it's entirely possible that this draft is in fact much better than others in this regard and I just don't know it yet. With that I would like to support the adoption and am looking forward to seeing how it develops. Kind regards, Martin"} {"_id":"q-en-edhoc-c88b88987464c8005000518bf3fcf6ba5750aef9e9771cec68ff0f861ab109cb","text":"CRED_R - bstr containing the credential. This is not true for C509, which is a CBOR array"} {"_id":"q-en-edhoc-f00d218684a9a13d2bbeea0aff2b1713a3a711564ef48d4034bb422db34b6ec3","text":"Signed-off-by: David Navarro\nNAME NAME and others: Any response to John's comment?\nNAME I included John's suggestion in the PR.\nLooks good to me. Other opinions?\nAs specified in section , the payload in some CoAP messages can be an EDHOC message prepended by either a C_x identifier or \"true\". The draft states: However, the payload is actually a CBOR Sequence as defined by .
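A minimal sketch of the payload layout just described (the item values are stand-ins, not real message_1 fields): since a CBOR sequence is just a concatenation of encoded items, building it is one line.

```python
import cbor2

def edhoc_coap_payload(prepend, message_items: list) -> bytes:
    """Build the CoAP payload as a CBOR sequence: the prepended item
    (a C_x connection identifier, or true) followed by the EDHOC message
    items, simply concatenated."""
    return cbor2.dumps(prepend) + b"".join(cbor2.dumps(i) for i in message_items)

# Illustrative only -- placeholder values standing in for message_1 fields.
payload = edhoc_coap_payload(True, [3, 0, bytes(32), -24])
```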
Thus, we could define and use a new media type "application/edhoc+cbor-seq". Note that the usage of the Content-Format option is optional.\nMakes sense to me. NAME NAME others: Comments? Good that you bring up the optionality of Content-Format, we should stress that in this text. The discussion at the Hackathon also reminded me about the common-option-compression which NAME presented in the May 12 interim 2021 - how to replace the Uri-Path: "/.well-known/edhoc" with a, say, integer-valued CoAP option.\nMakes a lot of sense to me, too.\nI find this distinction good and the solution consequential, still open for comments."} {"_id":"q-en-edhoc-269ae1a707f2b310b2f8ed05977da108442446c03ae58df4ade916a4b40f6a24","text":"A static key should not be used both for X25519 and Ed25519. This PR corresponds to .\nPlease review the proposed change.\nThe standard does not restrict private key usages. In particular, it could be tempting to use the public/private keys for the ed25519 signature scheme as a public and private key pair for the static authentication mode with the elliptic curve X25519. This does not have clear direct and practical implications. However, this usage falls outside the existing cryptanalysis for the signature scheme, and is a bad practice from a theoretical point of view. This would require a dedicated cryptographic analysis, similar to [2]. [1, Section 12] typically recommends that different keys are used for different algorithms, but it may be unclear if the EDHOC use case falls under this recommendation, or if an extra one could be pertinent. We recommend mentioning this issue somewhere in the standard, or explicitly forbidding such a shared key usage. [1]: URL [2]: URL On using the same key pair for Ed25519 and an X25519 based KEM. Erik Thormarker.\nI've seen single keys used for both ed25519 and x25519 during plugtests; IIUC the agreement among the plugtesters back then was that this was not intended use and should generally not be done. This might serve as a data point that illustrates the need for a clear statement.\nMentioning this issue and referring to [1, Section 12] seems like a good idea. The text would need to mention EDHOC methods as [1, Section 12] only talks about algorithms and the ECDHE in EDHOC is not strictly a COSE algorithm. Writing that using the same key requires a dedicated cryptographic analysis, similar to [2], also seems like a good idea. (As a comparison, I don't think TLS 1.3 states anything about forbidding the public key in the certificate to be used for Elgamal, ECIES, or ECDHE outside of TLS).\nShould we propose a PR for this?\nNAME Please go ahead. Prepare for the option to have a reference to an analysis showing this is secure under certain conditions."} {"_id":"q-en-edhoc-f0fbe4dfaa52da4a3ce2d246f8cd2d89337fc4e47d7e88f79b95d194a2adbfbf","text":"Reformulating security of expand as stream cipher instead of removing. HKDF-Expand is very often used. Added a note of the HKDF-Expand length restriction. Key use for different algorithms is already covered by COSE. Nothing is special with use inside of EDHOC. The text is clearly not correct, as signatures can also be used in other methods.\nChanneling from Erik. Use of the HKDF as EDHOC-KDF for generation of KEYSTREAM_2 and encryption of message_2 induces a max size.
For example, HKDF-SHA256 has max output size 255*32 = 8160 bytes."} {"_id":"q-en-edhoc-f0aba156749dc0726c890222f94977367f8f6bdc42e3937cdc55b7db3a5a7761","text":"I think it would be good to point out early that the EDHOC protocol (just like TLS) only provides proof-of-possession. Authentication has to be done by the application. I have seen horrible uses of TLS where the implementors believe RFC 8446 when it says it provides authentication. Added text for \"Unauthenticated Operation\". As pointed out by Marco, this is more general than just TOFU. \"Unauthenticated operation\" is the term used by TLS 1.3\nI think this is ready to merge.\nThe selfie attack mitigation implies that an initiator is stopping the exchange if it receives its own identity. An active attacker can then use this behavior to test if an initiator identity is the same one as a responder identity. In turn, it implies that the initiator anonymity does not hold against an active attacker when this mitigation is enabled. To judge if this is an issue in practice, the following question should be answered: Are there scenarios where an identity is shared between an initiator and a receiver, but the initiator identity should still be protected? One could maybe think about the case where multiple initiators and receivers are on the same server, sometimes sharing public keys and using distinct ports, but it is expected that the initiators' identities are still protected. Possible actions: mention privacy loss due to the selfie attack mitigation. update the selfie attack mitigation, by mentioning that initiators should conclude the exchange as usual, but can check at the end whether they are talking to themselves or not. remove the selfie attack mitigation. Classical selfie attacks are when there is a pre-shared key between an initiator and a receiver, and the initiator will be talking to itself while it believes to be talking to the receiver. When there are only public keys, as in EDHOC, it is unclear to me if what is currently called in the draft is indeed a selfie attack. An initiator talking to itself knows it and does not believe to be talking to some other identity. As such this is why it could be possible to either remove the mention of selfie attacks in the draft, or update it.\nI and R are expected to stop the exchange if they receive any identity they do not want to talk to. This likely includes their own identity. Good point that this leaks information. I think that should be added, but I don't see that one's own identity is special. >Classical selfie attacks are when there is a pre-shared key between an initiator and a receiver, >and the initiator will be talking to itself while it believes to be talking to the receiver. >When there are only public keys, as in EDHOC, it is unclear to me if what is currently >called in the draft is indeed a selfie attack. The selfie attack mitigation is intended for Trust On First Use (TOFU) use cases, where there is no authentication in the initial setup. I don't think an initiator talking to itself in a TOFU use case would know it, except if it checked R's public key. Do you have some better suggestion than \"selfie attack\"? I agree that it is not perfect. \"Selfie attacks\" has in older papers been called reflection attacks, but the term reflection attacks has also been used for very different attacks. One option is to not give the attack any name and instead just describe the attack.\nI think I like to not give any name, as it does not perfectly match the classical definition.
Here, the property which is violated is "If an EDHOC exchange is completed, two distinct identities/computers/agents were active". This is different from violating the authentication of the protocol, as is the case in classical selfie attacks. Good point. So what we are talking about in this issue is a slightly different privacy leak, which happens when the set of parties you are willing to talk to is equal to all responders minus yourself. To be more concrete, let me try to put it in a kind of drafty formal description that could replace the existing formulation. First the existing formulation: Now a possible replacement:\nThanks for the input! This issue is related to the TOFU use case which is still to be detailed (Appendix D.5). We probably should wait with the update until we have progressed that.\nRelates to\nI made PR for this issue. This is also related to PR and PR . The suggested text in is largely based on the suggested text above but I made some changes. I did not think this fitted well with the rest of the draft. The sentence was also not really needed. This is only true in the case where the set of identifiers is a small finite set. In the general case the set of identifiers the Initiator is not willing to communicate with is infinite. I suggest to add the following recommendation for TOFU: To not leak any long-term identifiers, it is recommended to use a freshly generated authentication key as identity in each initial TOFU exchange.\nI merged the PR to master. Keeping the issue open for a while.\nThe current text is not really explaining how TOFU works and the security considerations. Would be good with some more text. TOFU might also go against requirements in other parts of the document that say that "EDHOC is mutually authenticated".\nThis is conflicting with present use of COSE. Requires a separate section in the draft, preferably an appendix. We need to describe the use case, trust model, how to use EDHOC and the security properties. Can we reference some other RFC?\nLooking at adding text on TOFU I think there is some more work to do. I don't think TOFU should require anything special. The Section "Identities and trust anchors {#identities}" is fluffy. It is not clear that this is just general recommendations that are outside the scope of the EDHOC protocol and the responsibility of the application. EDHOC provides proof-of-possession and transfers a credential that enables authentication. That's it. An EDHOC implementation might implement authentication for a specific use case, but that is still outside the EDHOC protocol. Any description of the interface between EDHOC and the application when it comes to credentials is missing. In general EDHOC provides proof-of-possession of the private key and then gives CRED_x to the application that says YES/NO. How the application does chain validation and identity validation (or the lack of it like in TOFU) is outside of EDHOC.\nStatus: This issue is waiting on a restructure, shortening, clarification on the responsibility of the EDHOC protocol and the application. and
Closing\nI made some minor updates, have a look."} {"_id":"q-en-edhoc-c89125385b5fe84d1696c113efe15f44ea5c18b02e6f68f23a8fbc3a139dd205","text":"Addressing\nIssue needed.\nShould we allow or forbid that eadlabel = k and eadlabel = -k correspond to different ead-values? For example, if an (eadlabel, eadvalue) is always critical: Say, eadvalue is an ACE Access Token with eadlabel = -4, always critical. Then we can either: forbid the use of eadlabel = 4 altogether, or allocate an always non-critical EAD to eadlabel = 4, say, eadvalue is \"service indication\" as in In case 1 we will have some \"holes\" in the register. One positive integer blocked for each always critical (eadlabel, eadvalue) , one negative blocked for each always non-critical (eadlabel, eadvalue), and 0 cannot be used. (It is not clear to me that there are many always non-critical, but always critical is probably common.) In case 2, we could still have the registration policy that: if an EAD can be either critical or non-critical then two eadlabel with same absolute value should be reserved for it. What is the main risk with case 2? If someone misunderstands or by accident changes sign on an always /non-/critical EAD this results most likely in an error due to wrong eadvalue (like in the example above), but potentially in an unintended EAD message. (Another registration policy could be that the eadvalues corresponding to ead_labels k and -k should have incompatible CDDL so the error is discovered ...) The EAD specification needs anyway to specify what are the processing rules and if you violate those more or less anything could happen. Comments?\nBTW, we may want a term for the ordered pair (eadlabel, eadvalue). The term \"EAD\" or \"EAD field\" is referring to the message field, which may consist of multiple (eadlabel, eadvalue).\nJust seems like overoptimization to me. I would prefer if we follow the model that we agreed on for C509 which is similar to what CoAP Options is using. If you think it is important to use the complete code point space I think it would be better not assign any functionality to the minus sign and register critical and non-critical idenpendently. The original proposal seems to give two different meanings to the minus sign. The minus value is either a criticallity toggle or a completly different EAD.\nIntroducing the term \"EAD item\" for (eadlabel, eadvalue)\nEdit: Let's include the main points of this PR in -15 as a input to the design team meeting.\nI made a proposal. The text now include the concepts \"EAD item\" and \"critical\"/\"non-critical\" EAD item. There is also more details about how applications makes use of EAD: each application registers its own eadlabels eadlabel is a positive integer associated to a particular eadvalue if the application transports different eadvalues then multiple eadlabels need to be registered the application specifies how EAD items are transported in EDHOC messages, a particular EAD item may be transported in multiple messages (i.e. different EADx). I'll merge this into -15 and look forward to comments!\nThere is a new PR suggesting this. There is no background discussion in the PR . There would be good with some use case. The definition of what critical need and how the recipient handles critical would need a lot of discussion and fine-tuning.The handling of critical extensions in C.509 is to my understanding confusing and problematic. 
Both in the specification and in implementations.\nPlease see for background discussion and use cases.\nSeems very unclear that the current PR can be the basis for a ECH type mechanism. PR should motivate why it is the basis for an ECH type mechanism or we should have some discussion regarding use cases for a general critical non-critical EAD mechanism. I do not see that follow from .\nHaving critical/non-critical EAD seems like a reasonable thing in its own, but it is not clear to me that it is a step on the way to a ECH mechanism in EDHOC.\nShould we add another column in IANA register? Type: always critical / always non-critical / application determines criticality\nIf we define such a Type, the following text can be removed from specification and replaced by IANA consideration: \"A specification registring a new EAD label MUST describe if the EAD item is always critical, always non-critical, or if it can be decided by the application in the sending endpoint.\"\nI merged which addresses some points raised in this issue. Please review and comment if there is anything missing. The consequences of EDHOC processing of critical / non-critical EAD items is described, but how the security application makes use of this property is left to the associated specification. (In particular, no registration of criticality in the IANA register.)\nThis is motivated by two considerations, the concerns for clients selecting a hidden service in a protected way (pinging NAME do you have a pointer to your hackathon project), and TLS's encrypted Client Hello (superseding eSNI, encrypted server name indication). The rough situation for either case is that a client has some out-of-band information about the server that includes a public key for encryption (even if the encryption key on its own may be insufficient to authenticate the server). As a straw man proposal (mainly as scaffolding for the \"how to enable it\" steps later), consider the following: (I bet that reading the will give valuable input for making this complete). To ensure that responders (who are, in their identity selection, the more exposed side) can't be probed for their support of encrypted EAD, encrypted EAD should be marked as \"elective\" in the sense that a server may ignore it if it does not support it. (It'd still become part of the transcript). If a server receives an EAD it can not decrypt (or which, after decryption, contains nothing valid, eg. because the wrong GE was used), it needs to just continue and pick the CREDR / IDCREDR it would always have picked. AIU we do not yet distinguish between EAD that are elective and those that are critical. I do not know whether we'll even have any \"critical\" ones (as the peer's lack of action on them should be enough indication that they were unsupported), but if we do distinguish (eg. along the positive/negative number line), encrypted EAD should be optional. This has the upside that initiators that can spare the few bytes can always include some random bytes in the encrypted EAD, and thus ensure that actual users of encrypted EAD do not stand out (or that connections with encrypted EAD would be blocked). It should be a design goal of encrypted EAD that any use of EAD can always be plausibly claimed to just be random bytes added because the specification says one should do that ever now and then. For a practical example, a vulnerable service (say, anything civil rights related) might be operated at the same IP address as a more wide-spread service (say, a large public LwM2M provider). 
The civil rights service would publish its address together with the public key GE and its service name in DNS records similar to . A client contacting it would start an EDHOC exchange with the public address and encrypt the service name with GEX in the EAD1. The server would respond with credentials for the civil rights service, to which the client then may either present an anonymous (CCS) identity, or (now that it knows it's speaking to the right peer) its own CREDI / IDCREDI. If the CRED sizes are always the same, this exchange should be indistinguishable from an exchange done between an LwM2M client and the LwM2M backend when the client has chosen to send some more garbage bytes. (For application such as eighthalve's, it should be noted that there is probably no need for any further proof of identity inside the EAD1 plaintext: Possession of GE in combination with the suitable plaintext should suffice to open the present alternative server credentials, any secret whose possession is proven in the plaintext part would be exactly as secret as GE, as no attacker without E can obtain the plaintext.). To condense this all into a few actionable points: We should have a code point in eadlabel for \"encrypted EAD\" data. (We may not even have to specify how it works, that can be specified later or even be guided by the out-of-band information that needs to be present for all of this anyway). That code point either needs to be described explicitly as \"MUST ignore if not supported / content not understood\" for all of EDHOC from the start, or there needs to be general wording about which ead_labels are to be ignored if unprocessable. We should encourage initiators to send some random garbage there if they can afford it. (A LoRA device certainly can't, a 6lo device probably neigther, but a guard proxy establishing a connection probably can). This is not only to protect initiators that need it, but also to ensure that it is not being blocked, and that responders don't reveal whether they support it.\nThis has parallels to , which will take some work to pick out in detail.\nI'm trying to understand what would need to be changed in EDHOC to support this functionality. The distinction between critical and elective EADs. This seems like a good general construct. Perhaps by designated subsets of labels, e.g. negative or even eablabel are elective. The use of a designated eablabel for encrypted EAD1 seems less general and is somewhat breaking with the mindset that one application is reserving a label and defines the associated processing. Is there a problem with specifying a particular (elective) EAD for this application and associate a policy that it is used by default, with garbage when there is nothing to send? (I couldn't see obfuscating eadlabel really achieved much in practice, but I may have missed something.)\nThe first, yes. As for the labels, if the application reserves a label and defines the processing, it is showing a passive attacker that it is being used; this may be undesirable. Applications should have some label behind which they can hide the fact that this application is being used; I think it is preferable if that is shared by as many applications as practical (including the \"it's really just garbage\" application that some hosts may want to run). Whether that requires that they also share an encryption mechanism or not, I am unsure.\nNAME at the IETF Hackathon, we put together a working proof of concept for URL Is that what you mean? 
That's not really related to ECH, but came out of MASQUE.\nNAME The three-party setup seems very useful in different settings, so there may be multiple use cases where R receives EAD1 intended for a third party and needs to know that to do with it. Currently eadlabel is filling that function. If we can assume that R by default has designated third party, or sufficient information about the third party is carried in EAD1, then eadlabel could probably be encrypted. (In principle, EAD1 could be a COSEEncrypt0 or any tagged COSE object, with information needed by R in the unprotected header.) So how do we handle the case when there are multiple candidate third parties and you want to hide both the intent and the third party identity? A designated eadlabel is possible (or two, if critical / elective is baked into the label) but I don’t see how this label will be used other applications (so to acheive uncertainty about what is intended) because the co-existence of different candidate applications using EAD1 requires a method for R to distinguish between them. Maybe I'm missing something?\nFor the attacker to be unable to know which service the user is contacting, it would be important that an initiator receiving the credentials to the wrong service name fails in the same way has an initiator for which verification of the responder signature would fail. Otherwise, an active attacker could test if I intended to use the protected service by xoring the encrypted EAD1 with some random bytes. The responder would always see EAD1 as some random bytes and answer with the unprotected service, and then, if the initiator wanted to talk to the protected service, it will fail after decrypting the message and seeing the wrong service credentials if the initiator want to talk to the unprotected service, it will fail during the verification of the signature as the two transcript will not match. Those two failures should then be indistinguishable from the attacker point of view (even with timing attacks).\nAt least critical / elective EAD should be included in -15\nThe critical / elective logic could in principle be implemented in each application. It would be good to provide more use cases to support the case of making this a standardized feature. Clearly there is a benefit with tagging an EAD as \"critical\" since some operations require the information carried in the EAD to complete, e.g. authorization information from a trusted third party may be required for the Initiator to make any use of the authentication. But what EAD content is not critical? Is the following a good example for elective EAD? A service provider is allowing a variety of devices in its network to authenticate using EDHOC and grants different access rights depending on the capabilities and trustworthiness of the device. Later, the IETF specifies the EAD for remote attestation (see Appendix E). New devices which implements this EAD and appropriately attests to the attestation request carried in the EAD field get certain access which old devices doesn't get. But the old devices should still get basic access. If all EAD were critical then the old device which doesn't understand the new EAD would be forced to discontinue the protocol at the arrival of the remote attestation EAD. If the EAD is elective, then both old and new devices can be supported. Question: With this example in mind, is critical / elective a property of the EAD type, or isn't it rather a boolean set by the application? 
I can think of another use case where remote attestation is mandatory and it doesn't make sense to define different EADs for these two use cases. (Then we are somehow back at the application specified property. But it may still make sense to have a standardized way of expressing criticality.) Thoughts?\nThe critical/elective property always needs to be encoded in the message (as an LSB, or as the number's sign) no matter whether criticality is the sender's choice or whether it's part of the registration. From a recent chat I think that registering one number, and then allowing it to be used in positive (elective) or negative (critical) role as by the sender's choice. Application authors would describe in which criticality/-ies it makes sense to use their number, just as they describe in which of the messages it makes sense. (The alternative is to allow registering elective and critical options independently, and that application authors who do need to give the senders a choice of using either would register a pair of options; really would work just as well). Introducing RATS data seems like a very good use case to illustrate EAD examples. (In this particular case I don't know why a sender would insist the attestation to be processed into its network joining). To rephrase the original use cases with recently published terminology, we can refer to . That reference might go with a suggestion that devices that can afford sending a few more bytes should occasionally do so in a grease-style non-critical option, or maybe in a non-critical Encrypted-EAD1 option right away. I'd welcome if ace-ake-autz already defined an EAD field for what could resemble Encrypted-Client-Hello; AIU it currently refers to an older version of EDHOC. It could be updated to place LOCW and ENCID into separate EAD items, and the latter could be more generally defined as \"a container for encrypted data which the responder, possibly guided by other EAD1 fields, can use to come up with a suitable IDR\". (How encid is then constructed would not be a property of using that particular option, but follow from using the LOCW option). Then, there would be no need for a defined Grease field, as that ENCID field could serve the very same purpose.\nSince there seems to be a case for non-critical EAD I made PR . Please review. (Yes, ace-ake-authz needs to be updated, following the closing of this issue.)\nI did not read this discussion before now as it did not seem relevant for the base EDHOC protocol. The PR seems quite far from the use case described here. PR described a general critical, non-critical mechanism similar to X.509 extensions while the discussion in this issue is about an ECH type of mechanism..... PR does not seem to be the basis of a good ECH mechanism.... NAME could you have a short presentation at LAKE IETF 114? (edit: fixed Christian's github name)\nI think it is clear that PR does not solve the use case outlined here. In particular, the comment from NAME above is not considered as far as I can see. But the point about critical / non-critical (eadlabel, eadvalue) appeared in this discussion and merits to be specified as a general feature.\nI think it would be good with longer discussion about ECH type mechanisms in EDHOC. Might be added on later but it could also benefit from being added alrady in the base EDHOC specification. For a use case where the Initiator knows the long term key of the Responder the TLS ECH solution is a quite ugly hack that uses two different keys and an ineficient message flow. 
The use of a single key as discussed above is an improvement, but the use of a Noise XX pattern is not optimal. If you want to design something for the use case where the Initiator knows the long-term key of the Responder, it makes more sense with Noise IK, which is used by WireGuard. If the use case where the Initiator knows the long-term key of the Responder is small, the above approach makes sense. If it is big, it might make more sense to define new method(s) based on the Noise IK pattern.\nCould we have a design team meeting or an interim to discuss this issue?\nIn the use case that the Initiator gets the Responder's static public key beforehand, the Responder would in this case have a static DH key. The Initiator could use static DH or a signature.\nPerhaps a breakout meeting during/just before IETF 114?\nClosing this since NAME agreed that the introduction of critical/non-critical EAD (PR , included in -15) in combination with an example using third party encrypted EAD (as in ace-ake-authz) resolves the issue."} {"_id":"q-en-edhoc-e199f2edf9f10cf92a0c9f84bd86459f1cd718b8d3490ed82ea17f065e4e","text":"URL I have been reading v17 and sent some comments to the authors, that I summarize here: 1) Regarding RPK, previous versions of the document had a more detailed explanation about RPK. For example, in version v06 you had "The Initiator and the Responder MAY use different types of credentials, e.g. one uses an RPK and the other uses a public key certificate.” So RPK is included as part of the definition of authentication credential. However, v17 may have a different meaning since it seems “credentials" is defined as follows: "EDHOC relies on COSE for identification of credentials (see Section 3.5.3), for example X.509 certificates [RFC5280], C509 certificates [I-D.ietf-cose-cbor-encoded-cert], CWTs [RFC8392] and CWT Claims Sets (CCS) [RFC8392]. When the identified credential is a chain or a bag, the authentication credential CRED_x is just the end entity X.509 or C509 certificate / CWT. I assume RPK is also an authentication credential and this should be included in this text. A clarification would be worthwhile. 2) I assume it is possible that any two IoT devices can act as initiator but also as responder, even between the same two peers. It is the application which decides when to start EDHOC (initiator) in a particular case, correct? 3) It seems EDHOC does not define any exchange for the key update (some sort of rekey). It seems an application at some point exchanges some messages as "some event that triggered the key update.” Then the application calls the EDHOC-KeyUpdate function to get a new PRK. Am I right? I am asking because, looking at IKEv2 for example, an exchange is required for the rekey. Although it would add a little bit more complexity, wouldn’t it be sensible for EDHOC to define a 1-RTT exchange of a couple of nonces for the rekeying? Moreover, this function is defined in an Appendix. Is it mandatory to implement?\nYes. The document should talk about RPK a bit more as that is a term that people might search for. It should be explained that CWT is the format EDHOC uses for raw public keys (RPK).
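On point 3 (rekeying), a minimal sketch of what an application-driven EDHOC-KeyUpdate could look like. The Extract-with-nonce shape follows the draft's appendix, but the exact inputs here should be treated as illustrative rather than the normative definition:

```python
import hashlib
import hmac

def edhoc_key_update(prk_4x3m: bytes, nonce: bytes) -> bytes:
    """Both sides feed the same fresh nonce into an HKDF-Extract step to
    replace the session PRK, with no new handshake."""
    return hmac.new(nonce, prk_4x3m, hashlib.sha256).digest()  # Extract(nonce, PRK)
```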
Beyond these clarifications, I think the document is ready. Best Regards. Rafa Marin-Lopez, PhD Dept. Information and Communications Engineering (DIIC) Faculty of Computer Science-University of Murcia 30100 Murcia - Spain Telf: +34868888501 Fax: +34868884151 e-mail: EMAIL"} {"_id":"q-en-edhoc-082fb38c743d8d05e0f42a1a3c58b11bc4f92afb2e402b31c336313d6a0c2b87","text":"I strongly think that all 10 EDHOC-KDF should be in the same table. The text about mac_length_x needs to use the defined term hash_length\nI made one commit for review."} {"_id":"q-en-external-psk-design-team-7e1119db1ab6ca52f916ccfec2dc73bc2049ab4cf70e87a6fc279a8f9ce2ceaf","text":"This builds on .\nSummary: use a high-entropy PSK with DH, otherwise, with a low-entropy PSK, use a PAKE. Reference appendix with security properties from NAME"} {"_id":"q-en-external-psk-design-team-bbf2fd1fd3712f7462f72bf169c12c18fac3db76a089894732b93bd291367337","text":"Fixed up: some nits on recommendations. I think the wording was a bit off on low entropy PSKs, so I lifted from ekr's text from earlier in the draft. some of the stack interface stuff on hints is not valid for TLS1.3 (hints are TLS1.2 and earlier), so removed it. Some prelim text on collisions. For completeness, will take a look at gnutls and wolfssl order too. Note, even with the OpenSSL callback sequence, the application still does not know if there is a collision...\nStarter text from NAME\nNAME can you please take this?\nPing NAME"} {"_id":"q-en-fec-e099b030d69f4011facfa36b3b18b70a5bf6349a9b335847aaebd859bb630211","text":"Ah, I see now.
I think what would help is "in contrast to methods like RTX or flexfec (when retransmissions are used) [I-D.]""} {"_id":"q-en-fec-56e05d85ce9386491a7d09d7f84dec4411775ff42b22d221099ebf96169edbcb","text":"It's similar to the redundant encoding, but done within the Opus bitstream, rather than at the RTP level."} {"_id":"q-en-gnap-core-protocol-d0caac17c305044caf3a7e5f12a55b18ec2b9d119a23455cf87bee2daeeaeb1b","text":"The discovery fields for the interaction methods were still in a single list, even though those have now been split into start and finish sections.\nYes, makes sense"} {"_id":"q-en-gnap-core-protocol-29977017836148c4db89647fad7e1bc89392f01c0f573806f8e9358e5132634e","text":"Remove OAuth DPoP binding. Described why on IETF mailing list: it only works for asymmetric keys and requires the key to be presented in the header (duplicating information from GNAP messages). It was never meant to be a general purpose signing mechanism, though the FAPI group in OIDF is considering it as an option in current proposed work.\nOk that's weird (just removed some lines...), will have a look\nOk that was due to my rust analyser in vscode, fixed it now"} {"_id":"q-en-gnap-core-protocol-2a1e1eec093752e6ee4a178fa3cc732cc485c7435c9001747b0dcea0ebddc59e","text":"The boolean value "true" is no longer used to specify supported interaction start modes."} {"_id":"q-en-gnap-core-protocol-4bf0277751cbe2a32afeffc201882e0870d31da584a2c29e4eef301384ee9d47","text":"Section 2.2 is currently called: Then the text continues with: This means that this section is currently restricted to the RO. Since this call is restricted to the RO, the title of this section should be changed into: Requesting RO Information. In addition, the reason(s) for requesting such information are not mentioned and should be explained. As a side-consequence, a user has no on-line means to know which personal data the AS knows. This comment also impacts the content of section 1.4.6 (Requesting User Information). This topic has first been addressed under the issue which is still open. It would be wise to allow such on-line access so that the user can know which attributes are known by an AS and then choose which attributes to insert into an access token.\nMaybe we'll need a bit of rewording, although I'm not sure the proposed changes help so much. I just think we should put more emphasis on (and state early on) what is briefly described at the end of section 4, i.e. that here RO = end-user\nRO = end-user is not the general case. The usefulness of supporting Requesting RO Information is not crystal clear and should be indicated, but there is an interest to support Requesting User Information.
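For reference, a hypothetical grant request of the kind under discussion, where a client instance asks the AS for both an access token and user information directly. The field names follow the editors' draft loosely and should be treated as illustrative, not normative:

```python
grant_request = {
    "access_token": {
        "access": [{"type": "example-api", "actions": ["read"]}],
    },
    "subject": {
        "sub_id_formats": ["iss_sub", "opaque"],   # identifiers the client accepts
        "assertion_formats": ["id_token"],          # assertions the client accepts
    },
    "interact": {
        "start": ["redirect"],
        "finish": {"method": "redirect",
                   "uri": "https://client.example/cb", "nonce": "VJLO6A4CA"},
    },
}
```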
While I agree with you that "we'll need a bit of rewording", simply stating that RO = end-user in section 4 will not solve the issue.\nI notice that states It also shows that there is no RS involved and the diagram uses the label: User (not RO). To my understanding the End-user is "logged in" to the client instance. Given the role definition: If we don't have an RS involved, so we also don't have any Protected Resource here, does it really make sense to speak of a Resource Owner?\nYes, that's why we talk about the "user" here. Notice that currently, this term is used when we don't know the exact role it will play in the protocol (end user or RO).\nIf that's the case, why does speak about Resource Owner? Again in 2.2 we also don't seem to have a Resource Server or Protected Resource involved.\nThe rationale is: in the intro of section 4, GNAP allows a large variety of cases. GNAP can handle UMA2 types of scenarios (hence 1.4.6 and a few other places) but in GNAP core we mostly detail the case where end-user = RO. Hence what we explain about the subject, and it makes sense to limit the diffusion of confidential information about the RO (the most limiting case, hence 2.2 and a few other places). That's actually a quite subtle difference, but I think it makes good sense.\nGiven the above definitions, maybe it would make sense to broaden the definition of RO to: subject entity that may grant or deny operations on resources or obtaining assertions it has authority upon. Since in the User acting as RO doesn't grant or deny operations on resources (served by RS) but only grants / denies obtaining an assertion.\nI agree that the specific language that NAME references should be tightened up. What's a little awkward right now is that GNAP's language treats access to an RS and access to information contained in assertions as protected resources, but it doesn't do so particularly cleanly due to several major text revisions that have changed things a little out of sync with each other. Both data from an RS (using an access token) and assertions/identifiers passed to the client are targeted at the client itself, so in that way they're the same nature, a protected resource after a fashion and both controlled by an RO of some kind. What's different is that the info passed back directly is usually assumed to be :about: the end-user, because that's what the client is calling about in just about all the use cases that are known. Technically all of the information, either from an RS or subject information passed directly, is controlled by the RO. The end-user can get involved through interaction and act as the RO. The case where the RO doesn't equal the end-user is useful in many situations but leads to unexpected weirdness when you're talking about the subject information coming back.\nFair point. We might start by listing what needs to be changed.\nI tend to think of the AS as having a separate service endpoint for RO requests because they are the ones that can edit AS policy. A service endpoint that is accessible to End User requests would be "separate" and could be used by anyone including the RO when they want the protection of least-privilege. This distinction may be unhelpful in the general case of explaining GNAP so I don't mean to overstate it.\nNAME when you say "service endpoint" are you thinking of something that's interactive and user-facing (so a webpage) or something that's a callable API?\nNAME I really don't know enough to answer purely technical questions.
I think mostly in terms of privacy engineering and look to this group for a security and devops perspective. Therefore, I focus on the AS as the primary, if not the sole, entity that stores and executes RO policy. The AS takes requests and turns them into access tokens. The requests might be via (RAR) API or a web form. Either way, this request interface is likely unsecured. Some requests present credentials while others might add an authentication flow. That begs the issue of how the AS got the RO's policies to begin with, long before a request comes in. I would define whoever sets the AS policies as the RO (regardless of any particular RS-RO relationship). A so-called \"policy manager API\" has been under discussion in UMA for about a year because at least one implementer wants to author policies on a mobile app and standardize the way they are uploaded to the AS. Back to Justin's technical question, I'm trying to implement the shortest distance between OAuth 2 and GNAP in a way that I can understand and explain to others. I'm doing this in FastAPI (because I don't know much beyond elementary Python, because it has built-in OpenAPI documentation, and has some OAuth 2 and JWT functionality). I'm deploying (automagically) to DigitalOcean App Platform in order to be clear about my various security and build shortcuts. I've designed the demo as two separate FastAPI server domains for the RS and AS. A \"global\" CouchDB instance is deployed for RO policy storage and any state that needs to be saved as part of the protocol demos. -- URL (the AS) and running at https://ognap-URL -- URL and running at https://ognap-rs-URL -- URL The protected resource is going to be just a self-verifying vaccine credential (I invented) -- URL -- Some of this is running at URL I've been stuck trying to answer Justin's exact question for quite a while. I think I need to set up the AS (OGNAP) to deliver a PWA to the RO so they can edit policies locally in PouchDB and sync the policies with the CouchDB the AS can access when a request comes in. That forces me to learn URL or something like it. Aside from my long-term goal to upgrade the HIE of One Trustee demo from UMA to GNAP, I'm also hoping to show the W3C and DIF protocol groups how easy it could be to mitigate significant OAuth 2 human rights concerns using GNAP. Any suggestions, help, or moral support would be really welcome (see the sketch below).\nback to NAME and NAME: looking again at this. I guess we could define what we mean by operations. Getting an assertion is an operation, in my view. Or give it as an example. What do you think?\nI think it comes down to whether or not we consider the \"subject information\", which includes both identifiers and assertions, to be \"resources\" or not. I always thought they are, but they are definitely a different style of resource than what comes from the RS so maybe we need better wording around it. We talked about it a bit in the long terminology discussion a year ago, but I don't think we ever really landed on a term that fully captures \"stuff sent directly to the client by the AS and not accessed through the RS\". Identifiers and assertions are the two categories where there's clear demand, but other things could happen through extensions as well.\nDoes this issue also have to align with authentication flows that build on GNAP?
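Since the comment above describes a FastAPI-based AS demo, here is a minimal sketch (not from the thread) of what such a grant endpoint could look like. The route, models, and policy check are all hypothetical stand-ins for the CouchDB-backed policy store being discussed:

```python
import secrets
from typing import Optional

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class GrantRequest(BaseModel):
    access_token: Optional[dict] = None   # requested access rights
    subject: Optional[dict] = None        # requested subject information
    client: dict                          # client instance identification/key

def policy_allows(req: GrantRequest) -> bool:
    # Stand-in for a lookup against the RO's stored policies (e.g. CouchDB);
    # always allows in this sketch.
    return True

@app.post("/gnap")
async def grant(req: GrantRequest):
    if not policy_allows(req):
        raise HTTPException(status_code=403, detail="denied by RO policy")
    return {"access_token": {"value": secrets.token_urlsafe(32)}}
```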
a subject presents a DID or other identifier to a relying party (RP); the RP dereferences the identifier to an AS they trust; a request for an assertion is made to the AS; the AS issues a liveness challenge to the subject; the AS returns an assertion and attributes to the RP (the attributes might be accessed indirectly via an access token returned with the assertion). How is SIOP planning to handle this?\nIn the current draft, section 2.2: \"The AS can determine the RO's identity and permission for releasing this information through interaction with the RO (Section 4), AS policies, or assertions presented by the client instance (Section 2.4). If this is determined positively, the AS MAY return the RO's information in its response (Section 3.4) as requested.\" Currently, a resource is just a shortcut for \"Protected resource\", which really refers to an API served by an RS. So we have a mismatch (or at least a missing case) in the RO definition (as pointed out by NAME and NAME). I've tried some sentences to specify subtypes for resources, but it becomes very confusing. The proposal by NAME to define the RO as \"subject entity that may grant or deny operations on resources or obtaining assertions it has authority upon.\" seems slightly incorrect to me, because the assertion is not made by the RO, but by the AS. What the RO grants to the AS is access to its attributes (and then the AS may carry out further due diligence to derive assertions, for instance through AuthN flows as suggested by NAME). The decision to disclose or not its own attributes is not restricted to the RO, it's actually something under the control of every subject. So here's what I'd suggest to solve the issue (new part in bold): update the definition of \"Subject: person, organization or device. It decides whether and under which conditions its attributes can be disclosed to other parties.\" question 1: should we limit this to physical persons? (from a legal perspective, that makes sense, but then wouldn't an org want to restrict what it shares, cf. issues around data sovereignty?). I think the principle could be generic. question 2: I've added under which conditions because we already discuss that in the text (manual consent, automatic policy, etc.), plus it's not only a yes/no, it also concerns what those other parties can do with it, including further sharing those attributes, cf. the case mentioned by NAME where the AS sends them to the RP. Feedback welcome!\nsection refers to the Subject exclusively as RO. I'm still uncertain if the subject who in interaction only decides to disclose some attributes in an but not any \"Protected resources\" (an API served by RS) is considered to act as RO. If that's the case, what would be a case where the subject decides to disclose some attributes but doesn't act as RO?\nI'm not entirely sure about the use case you're targeting here. Could you be more specific?\nMaybe I shouldn't have mixed two questions together. Let's focus on one of them first. section always refers to the Subject as RO. In your previous comment you mentioned that: When would we have a situation where a Subject is deciding to disclose its own attributes but they don't act as RO?\nIn theory section 4 allows other situations (at the beginning of the section). But the only case that's detailed is the case where it is indeed the RO.\nThanks, now for the second part of my question. In we find: In this scenario, no RS is participating, so no Protected Resources are being accessed.
We still have Shouldn't the definition of RO still be broadened from the current Keeping in mind It might still be something in the direction of subject entity that may grant or deny operations on resources or obtaining assertions containing attributes it has authority upon. In short, if RO plays a role in . There seems to be more to that role than just granting or denying operations on resources (an API served by RS).\nMy view is that the attribute part is covered by the fact that the RO is a subject (obtaining them is covered by the subject definition now). The main rationale for the protocol is still related to access to protected resources, and I believe we would dilute that otherwise.\nIn that case, is there any reason for to still include: I think 4 could be removed altogether and 5 could become (after renumbering) This section states in the first sentence that it does not involve an RS and access to protected resources (an API served by RS)\nWe still need to make sure that the user is indeed the RO, hence step 4.\nI still consider that the information about the subject is counted among the \"Resources\" being released here. The only real difference is that instead of them being at an RS, they're passed directly to the client instance.\nYes, this didn't change. I'm only saying I don't think it needs to be embedded into the RO definition, since we can assume from the subject definition itself that some mechanism exists to grab the subject attribute data if it consents. Changing the RO definition, or worse the (protected) resource definition, makes the entire thing convoluted, at least in every variation I tried.\nNAME Right, and I think I agree with that. The definitions should try to stay tight, if we can. But if the surrounding text, especially the \"subject information\" request stuff, isn't clear, then this is going to come around again. I don't have a specific answer but I would be surprised if we can't improve this, since it's come up a few times.\nI am supportive of elf-pavlik when he writes: fimbault agrees with him when he says: When looking at Section 3.4, which is indicated in that section, the following text may be found. If information about the RO is requested and the AS grants the client instance access to that data, the AS returns the approved information in the \"subject\" response field. The case where information is about the end-user is not addressed, i.e. there is no sentence starting with: If information about the end-user is requested (...) Does this mean that an end-user is not allowed to know which personal data is known by the AS about him? If this were the case, such a limitation would not comply with generally agreed privacy principles. The text is still not fixed in draft-08. What are the benefits for a client to know information about the RO? This may be crystal clear for the editor(s), but it is unclear for an average reader.\nBased on recent conversations, especially URL it seems to me that the intention is to consider the subject as RO with regard to the information about them. I think this shows up clearly in since it does not involve any Protected Resources but it still calls the user RO. It might be helpful to document scenarios where there is more than one AS involved. Let's say we have Alice using a client to access resources owned by Bob.
Here we could have two different ASs: an AS providing user information about Alice to the client - here Alice would act as both End-user and RO and Bob would not be involved at all; and an AS providing an access_token to the client allowing access to protected resources - here Alice would act as End-user and Bob as RO. Does that scenario with two different Authorization Servers sound reasonable?\nNAME Those two both make sense, and it makes sense even for them to be together. You could have those be the same AS as well, if Bob's resource server is protected by the same server Alice is using to log in to the client. But they're different kinds of access, and it would make sense to talk about them in a single example scenario to help call out that difference. There's also a weird case from the CIBA work in the OIDF: Alice uses client software to find out who Bob is. The canonical example given here is a call center: Alice needs to verify that the person calling her is actually Bob, and so Alice starts something with the AS that reaches out to Bob to approve the release of identity information about Bob to Alice, in real time.\nNAME how does Alice know which AS can release identity information about Bob? I would not assume that one global central AS exists with information about the whole Earth's population.\nNAME This use case is not assuming a global AS at all (I agree, that's absurd), but it is assuming that Alice and Bob are both working with a known ecosystem. In the case of the call center example, it's the supporting company's AS -- where Alice works and where Bob is a customer. So, Bob needs to have an account there, and so does Alice. Alice's account needs to be authorized to get info about other users (because she's a support tech). Bob's account is a customer account that Alice can fetch information from anyway.\nI see, thank you. It indeed looks like another use case where Bob acts as RO but there is no RS or Protected Resource involved.\nI've encountered the CIBA use case very recently. Here's a summarised description: the enduser gets in touch (out of band) with the support, so that both are available at the same time; the support needs to get read access to some info stored on the enduser's mobile, and uses CIBA to get the enduser's approval; returned scopes define what can be accessed through another OIDC client (used by the support agent); there are no RS involved. Note: this issue is closed, so either we should reopen it or open a new issue\nNAME Agreed, we've drifted off the topic of the issue as well. I opened up a new issue for discussion of the cross-user CIBA use case specifically:"} {"_id":"q-en-gnap-core-protocol-41b3b6ca5f3c5ae8314580335ee01c81d5d115e4c9b2d7eee04e37bab805eaf6","text":"Update all diagrams to use AASVG for rendering in HTML."} {"_id":"q-en-gnap-core-protocol-abab5feb85377aa5d88af10f76909cd96b7331911a1754009d852ccb72ef8139","text":"After discussion with NAME, this PR now makes keys single-format-only as previously suggested.\nYaron: why are we sending a JWK as well as a cert? Are we checking that the cert contains the same public key as the JWK?
Justin: I would be OK with the different formats being mutually exclusive somehow. The AS would need to check that they're the same and throw an error if not (see the sketch below). But this would also let the RC send over whatever it's got for its own keys every time.\nThe document has been changed to address this issue, copying my text that refers to the latest draft version: In Sec. 7.1 we allow the client to include multiple keys of different formats, provided they are \"equivalent\". This doesn't make sense, because the only reason to send multiple keys is that the sender suspects that the recipient doesn't understand one of the formats. But if that's the case, the recipient is not able to validate the MUST requirement that the two formats be equivalent. This could also lead to interesting key injection attacks. IMO we should only allow one key value/format for each request."} {"_id":"q-en-gnap-core-protocol-9040bdf0e39eadb770f1e8bfea2510ee83ad2c419d5e630863d3df74f3b40a7a","text":"Adds the field for requesting information for a specific user, separate from identifying the current end-user. Also adds a security considerations section addressing asynchronous authorization methods.\nNormally we think about releasing subject information being about the current end-user, but that's not necessarily universal. The use case from CIBA: Alice is a support engineer with a company; Bob is a customer of that company; Bob calls the company and talks to Alice; Alice needs to authenticate, from her system, that Bob is the person tied to the account in question; Alice's client messages the AS to request Bob's information (probably including an identifier for Bob in the request); the AS reaches out to Bob to approve Alice's client to get Bob's information; Bob approves Alice's request (probably on a pre-enrolled device); the AS sends Bob's information back to Alice's client. Note a few key assumptions with this: At no point does Alice's client think that Bob is logging in. Alice and Bob both need accounts of some type at the AS. Alice's account (or client) needs the rights to request someone else's info. Bob's approval mechanism needs to be tied to his account ahead of time (since Bob isn't running the client and the AS needs to reach out to him out of band). Bob's approval needs to be clearly directed to tell Bob that it's Alice (and Alice's client) asking for his info. Alice might need to log in / interact with the AS as part of this, or it might be tied to the client instance (maybe Alice has logged in for the day and that association is remembered by the AS?)\nThis is an interesting use case. Yet CIBA doesn't necessarily require an AS to work, as long as one is only interested in getting scopes. Is there a generic use case for cross-user subject information + required AS? Or maybe we should explain the added value of the AS in that specific case?\nWe will add an example in section D that shows how this would work and verify that the language around subject identifiers is sufficiently robust to encapsulate this.
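The sketch referenced in Justin's comment at the top of this excerpt -- an AS checking that a JWK and an X.509 certificate presented together carry the same public key, and rejecting otherwise. It assumes an RSA key and the pyca/cryptography package; note the merged PR sidesteps this entirely by making key formats mutually exclusive:

```python
import base64
from cryptography import x509

def _b64url_to_int(value: str) -> int:
    pad = "=" * (-len(value) % 4)
    return int.from_bytes(base64.urlsafe_b64decode(value + pad), "big")

def keys_equivalent(jwk: dict, cert_pem: bytes) -> bool:
    """True iff the RSA JWK and the certificate carry the same public key."""
    if jwk.get("kty") != "RSA":
        raise ValueError("this sketch only handles RSA keys")
    cert = x509.load_pem_x509_certificate(cert_pem)
    numbers = cert.public_key().public_numbers()
    return (numbers.n == _b64url_to_int(jwk["n"])
            and numbers.e == _b64url_to_int(jwk["e"]))

# An AS receiving both formats would reject the request when this returns
# False -- or, per the merged change, refuse multi-format requests outright.
```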
This might also add additional security and privacy considerations when used in this mode because multiple people are now involved explicitly.\nI think this could be marked as NEEDS TEXT?\nI think it mostly just needs to be added as an example, and I can do that.\nFine ;-) (on my side, I always tend to consider CIBA as some degenerate phishing experiment, but maybe that's too extreme)"} {"_id":"q-en-gnap-core-protocol-6e016107e354128ee57cb310998343681c95d2d54bdf1b684b38ee890602595b","text":"Just two minor typos I stumbled across."} {"_id":"q-en-gnap-core-protocol-0ef488bcbd2be1a504c69d9f4125bcd76fcdd2a5050f56c36a2bb6448474678d","text":"clarify that user codes are unguessable; update user code examples\nThis is very minor, but there's a contradiction between the usercode provided in the example in section 3.3.3 and the recommendations that are mentioned in the text above it. The text says: And However, the example provided below it goes against these two recommendations, as it contains a special character (a hyphen) rather than just letters and numbers, and it's longer than eight characters in length: I'd suggest either tweaking the wording, or tweaking the example so that the usercode provided in the example is aligned with the recommendations mentioned in the text.\nThe specification does not require user codes to be unguessable. Section 3.3.3 (Display of a Short User Code) states that user codes have to be unique and should be short-lived, but this does not imply that codes should be unguessable. It seems that Section 13.27 (Exhaustion of Random Value Space) does not apply to user codes, but to values that are clearly random values \"such as nonces, tokens, and randomized URIs\". If attackers can guess user codes, the same attack described in URL is possible.\nGood catch! The intent is for these to be unguessable, so we can add that explicitly."} {"_id":"q-en-gnap-core-protocol-7b168e1af4316fd0c440ab962897674044fc5004440eef5c3d892a4b4a1db956","text":"NAME Please take a look at this updated text to see if it addresses your concerns.\nThinking about possible extensions to GNAP's interaction modes, I had the impression that the specification could be a bit clearer regarding requirements on such extensions. From reading Section 3.3, my understanding is that all implementations of GNAP, including ones using extensions which define additional interaction finish methods, MUST include a nonce in the grant response (given that the AS wants to use the finish method offered by the client instance): URL However, Section 3.3.5 sounds a bit different, only referring to the two interaction finish methods defined by GNAP core: URL Section 4.2 could be a bit clearer as to what is expected of future extensions to GNAP, it currently says: URL with the two following sections describing the finish methods defined by GNAP core. This is, however, somewhat clearer in Section 4.2.3: I.e., Section 4.2.3 implies that the AS must (somehow) provide the client instance with an interaction reference and interaction finish nonce. But it may be helpful to make this more explicit throughout the relevant Sections. There seem to be no requirements for future interaction start methods.
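On the user-code thread above, a minimal sketch of generating short but unguessable codes with Python's secrets module. The alphabet and length are illustrative choices matching the draft's "letters and numbers, up to eight characters" guidance while avoiding easily confused symbols:

```python
import math
import secrets

# No 0/O or 1/I/L: easier to read aloud and type.
ALPHABET = "23456789ABCDEFGHJKMNPQRSTVWXYZ"

def new_user_code(length: int = 8) -> str:
    """Return a short, uniformly random (hence unguessable) user code."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# Eight symbols from a 30-character alphabet give about 39 bits of entropy,
# which combined with short lifetimes and rate limiting resists guessing:
ENTROPY_BITS = 8 * math.log2(len(ALPHABET))  # ~39.2
```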
It may be helpful to document some minimal requirements, e.g., enough information to identify the grant has to be conveyed to the AS via the RO in the interaction start (such as the redirect URI in GNAP core).\nThanks NAME for taking the time! The proposed text looks good to me (just one possible typo, I'm not a native speaker, so I'm not 100% sure about that)."} {"_id":"q-en-gnap-resource-servers-1e4ab3b4b215b431e6f1e45e51b9731c985d865a8524e6d114386c46b70e5379","text":"Expands the discovery document and aligns it with changes made to the discovery process in core. Addresses"} {"_id":"q-en-gnap-resource-servers-1fff2bfb4fe76fc29023ae87204c6008bed5deb97a5b6793801b27183d28204b","text":"Expands the definition of the resource set registration protocol, used by an RS to declare a resource set to an AS at runtime and receive a reference identifier for that resource set to hand to a client.\nRegistering a Resource Handle: Editor's note: It's not an exact match here because the \"resource_handle\" returned now represents a collection of objects instead of a single one. Perhaps we should let this return a list of strings instead? Or use a different syntax than the resource request? Also, this borrows heavily from UMA 2's \"distributed authorization\" model and, like UMA, might be better suited to an extension than the core protocol"} {"_id":"q-en-gnap-resource-servers-e1de17021bc3e5ddcc89dd3d0e53075b4703818f6af8d8fc7281ffa176e74859","text":"Updates token introspection definitions into a fuller protocol offered by the AS.\nIntrospecting a Token: Editor's note: This isn't super different from the token management URIs, but the RS has no way to get that URI, and it's bound to the RS's keys instead of the RC's or token's keys.\nFrom a privacy point of view, token introspection allows an AS to know exactly when operation(s) are being performed by an end-user on an RS. This provides useful information for ASs that may be tempted to act as \"Big Brother\". For that reason, the risks related to token introspection should be advertised in the Privacy Considerations section. Its usage should be deprecated in the general case. If there exist specific cases where there is no such risk, these cases should be advertised.\nThe section doesn't provide any information on what happens when a client wants to introspect an inactive/invalid/revoked token.\nYes, this is pretty thin right now. It's the RS that introspects the token, and this section will be pulled out into a separate spec to make that more clear ().
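A minimal sketch of the RFC 7662-style behaviour debated in the introspection thread above: the AS collapses "unknown", "expired", and "wrong RS" into a bare active=false rather than an error, so a nosy RS learns nothing. The store and record shapes are hypothetical:

```python
import time
from dataclasses import dataclass, field

@dataclass
class TokenRecord:                # hypothetical AS-side token state
    issuer: str
    audience: list                # RS identifiers the token is valid for
    expires_at: float
    access: list = field(default_factory=list)

def introspect(store: dict, token: str, rs_id: str) -> dict:
    record = store.get(token)
    # Unknown, expired, and wrong-audience all yield the same answer,
    # preventing information leakage to the requesting RS.
    if (record is None or record.expires_at <= time.time()
            or rs_id not in record.audience):
        return {"active": False}
    return {
        "active": True,
        "iss": record.issuer,
        "aud": record.audience,
        "exp": int(record.expires_at),
        "access": record.access,  # GNAP-style rights; name is illustrative
    }
```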
The response for both positive and negative situations is likely to be based on the OAuth introspection spec, RFC7662 URL\nWe should add: the AS MUST validate that the token is appropriate for the RS that presented it, and return an error otherwise.\nIn OAuth token introspection, the AS doesn't return an error in this stated case -- it simply says the token presented is not active, in order to prevent information leakage to a nosy RS.\nNAME I don't understand your argument about a \"nosy RS\". The text states: As a consequence, if the AS does not recognize the signature, it can return an error stating \"bad signature\". If the signature is correct, the AS can provide all details to the recognized RS, as yaronf mentioned.\ns/error/inactive status/.\nToken Introspection: it is not clear to what depth we are defining the API: is it only the existence of the \"introspect\" endpoint? Or do we define a minimal set of standard attributes that need to be returned? The API would not be useful for interoperability unless we define some of the returned attributes. At the very least: \"active\".\nThe idea would be to define a core attribute set with extension points like RFC7662. This same attribute set could be used for the token model (but maybe not format?) described in"} {"_id":"q-en-groupcomm-bis-511749c70a1a8d9779cba5ac9ceec452c48d9c617bdb8c21e5e33cb7a320e81f","text":"more details on encoding application group name, for\nReading the paragraph after \"An application group can be named in many ways\", I was left a bit confused: Why should it be encoded in the Uri-Query? That'd need coordination among all applications running on that host (for it is interfering with the query argument space that's usually left for the application to use). The common partitioning of applications inside a host is left-to-right in the URI, and that puts it in the very last spot. I think I do see what is meant by the discouragement of using the Uri-Host, but doubt that it'd be immediately obvious to general readers. How I understand Uri-Host to be impractical here is that while the request is sent to a well-defined and named URI, the response is a representation of the same URI with the individual server's IP replaced into the authority component, and if two application groups on the same CoAP group use the same path, then there may be clashes, not on the request-response layer (where the responses are bound to a request for a particular host name) but in the client's cache of the responses. (And once one is at the point where application groups in the same CoAP group have a disjunct path set, one can do away with identification altogether and just address the unique path). I don't quite follow how this is not \"as intended in RFC7252\": It is an identifier for who is being addressed, and as such just part of the URI. (For example, if I define my lights group to be named URL and name the resource , then the host name is exactly the group name, and already needs to be put into the request unless there is known URI aliasing). See also URL on \"send a Uri-Host\" option, with the caveat that I'm a strong proponent of answer A2. Furthermore, the text seems to imply that virtual hosting is something a server would need to go out of its way to implement. It isn't: The Uri-Host option is critical, and a host needs to make the conscious decision to ignore that option; otherwise it's just as much part of finding the addressed resource as all the other Uri-* options are.
The best practice is something I can get behind, although I suggest using no identifier on the wire (and just the disjunct set of resources) as a byte-saving alternative when the administrative effort of ensuring path uniqueness can be afforded.\nThanks; we do need to improve the current text on these aspects. On Uri-Host and your example \"URL\" where the authority component includes the application group: in this group communication example the sending client would normally DNS-resolve \"URL\" to a particular multicast IP address, then send the CoAP request to this multicast IP address destination, without including any Uri-Host Option. This works as long as you have a unique 1:1 association between application-group and CoAP-group. In case multiple application-groups map to a single CoAP-group, this doesn't work anymore: a client could be e.g. configured to always include the \"Uri-Host\" Option to indicate the hostname. Still not the preferred approach I would think. Btw I can't follow answer A2 on the CoAP-Faq, unfortunately: the formulation is too complex. In some (common?) constrained use cases, the DNS name would not even be configured in the client and the client would just be configured directly with the IPv6 multicast address of the CoAP group to send it to. Then, the application-group would need to be encoded in some way still, e.g. using the Uri-Path Option. Now what our original text tried to suggest is to \"misuse\" a Uri-Host Option for that. So instead of using an option like: Uri-Path = group123 You would instead use an option: Uri-Host = group123 So this is not a \"real\" URI hostname but rather a low-level hack to let the client's CoAP stack insert by force an additional option just before sending out the CoAP request. The receiver's CoAP stack would just parse it as the ID of the application-group and not really as a URI-host. Because this gets confusing and potentially messy (proven by the present discussion!) we don't recommend it. So, trying to reiterate: if one has configured Group URIs of the form coap://[mcast-ip-address]/resource on the client then by default the Uri-Host Option would get elided. The above-mentioned \"hack\" is to put it back to encode the application-group. This has the issue that the application-group name isn't present in the Group URI. If one has configured Group URIs of the form URL on the client then normally by default the authority gets resolved to some IP multicast address before sending it out, and thus while sending it the Uri-Host Option is NOT included in the CoAP encoding but elided. So the receiving server doesn't see the application-group ID because it's not encoded in the request data. (We assume here that multiple application-groups use the same CoAP-group.) Maybe we should discuss this further; what's the effort here / who ensures it / unique between what and what?\nInitial text proposed in above PR! We'll merge it in by tomorrow evening, as an improvement (hopefully). What is not included yet is a statement saying that we don't recommend using the non-Group-URI methods of encoding the application group name. To do so we would need to clearly state why that is not recommended; still not clear at this moment what the disadvantages of these are."} {"_id":"q-en-https-notif-6f242b4c603b81c241034cfef49da3d843edf88a21e030afbbe00ca8f3401008","text":"This adds an XML example.
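To make the two encodings from the groupcomm exchange above concrete, here are the option lists a client's CoAP stack might emit for application group "group123", using option numbers from RFC 7252 (Uri-Host = 3, Uri-Path = 11); the addresses and resource name are made up:

```python
# Request sent to the CoAP group's multicast address, application group
# carried in the path (the straightforward encoding):
#   coap://[ff35:30:2001:db8::1]/group123/res
path_encoding = [
    (11, b"group123"),   # Uri-Path
    (11, b"res"),        # Uri-Path
]

# The discouraged "hack": force a Uri-Host option carrying the application
# group name, even though it is not a real hostname:
urihost_encoding = [
    (3, b"group123"),    # Uri-Host
    (11, b"res"),        # Uri-Path
]
```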
The only question is whether we need a prefix for ietf-https-notif in the example."} {"_id":"q-en-idempotency-aea0af4c40c617d78a7d14b164069f4d54809b607c49ec3cd16a36a8d0f344c5","text":"Issue\nNAME NAME please review re issue\nLooks good\nthe examples of errors use a pattern of using a link in a header, and then (supposedly) using a human-readable error message. what about using RFC 7807 instead and using examples that use problem reports to indicate failure semantics in addition to the status code?\nI agree. coming up in the next draft.\nNAME should we close this issue if it is resolved?\nNAME closing this issue. Let me know if not addressed with my latest PR."} {"_id":"q-en-ietf-rats-wg-architecture-7ea5fdd094304769a5734adfb3b53628251e6dcf9f2c1ce2a4ef383bf04150e3","text":"URL Greetings! I got as far as I could with time constraints through the rest of the document. There are a few sections I'll likely go back over again as I was short on time when reviewing them, but do hope this is helpful towards improving the document. This review is from the editors' revision on GitHub that I started earlier in the week. Section 4.2: I find the following grouping a bit confusing: \"Places that Attesting Environments can exist include Trusted Execution Environments (TEE), embedded Secure Elements (eSE), and BIOS firmware.\" This has an attesting environment as a processor, an element, and as compiled code. That doesn't seem right, especially when you refer to the attesting environment as a \"place\". Do you mean where BIOS is executed perhaps? I would expect a TEE and maybe other components where security functions are offloaded, which is in some cases to a processor other than a TEE, like a GPU (I'm not making a recommendation to do that here, just stating what is happening in ecosystems that exist). The following text is a little confusing, in particular, the last sentence: \"An execution environment may not, by default, be capable of claims collection for a given Target Environment. Attesting Environments are designed specifically with claims collection in mind.\" If the attesting environment is a place, TEEs and other processors were not necessarily designed with claims in mind. If you can help to explain an attesting environment better, I can help to phrase it more clearly to convey the intended set of points, as I think there are several. Section 4.3: Consider changing from: \"By definition, the Attester role takes on the duty to create Evidence.\" To: \"By definition, the Attester role creates Evidence.\" The following sentence is a run-on and I think some additional information will help provide clarity. I don't think it's the role that is composed of nested environments. Current: \"The fact that an Attester role is composed of environments that can be nested or staged adds complexity to the architectural layout of how an Attester can be composed and therefore has to conduct the Claims collection in order to create believable attestation Evidence.\" How about: \"An Attester may be one or more nested or staged environments, adding complexity to the architectural structure. The unifying component is the root of trust and the nested, staged, or chained attestations produced.
The nested or chained structure should include claims, collected by the attester to aid in the assurance or believability of the attestation Evidence.\" The following sentence should be broken into two: \"This example could be extended further by, say, making the kernel become another Attesting Environment for an application as another Target Environment, resulting in a third set of Claims in the Evidence pertaining to that application.\" Proposed: \"This example could be extended further by making the kernel become another Attesting Environment for an application as another Target Environment. This results in a third set of Claims in the Evidence pertaining to that application.\" How about changing the following sentence from: \"Among these routers, there is only one main router that connects to the Verifier. \" To: \"A multi-chassis router provides a management point and is the only one that connects to the Verifier. \" For the following sentence, trying to remove the use of a word twice in a row and convey the same intent. Old: \"After collecting the Evidence of other Attesters, this inside Verifier verifies them using Endorsements and Appraisal Policies (obtained the same way as any other Verifier), to generate Attestation Results.\" Proposed: \"After collecting the Evidence of other Attesters, this inside Verifier uses Endorsements and Appraisal Policies (obtained the same way as any other Verifier) in the verification process to generate Attestation Results.\" I may try another pass at section 4 to improve readability. Section 5: Consider breaking this into 2 sentences, from: \"This section includes some reference models, but this is not intended to be a restrictive list, and other variations may exist.\" To: \"This section includes some reference models. This is not intended to be a restrictive list, and other variations may exist.\" Section 5.1: Two \"result\" words together are not necessary. Change from: \"The second way in which the process may fail is when the resulting Result is examined by the Relying Party, and based upon the Appraisal Policy, the result does not pass the policy. \" To: \"The second way in which the process may fail is when the Result is examined by the Relying Party, and based upon the Appraisal Policy, the result does not pass the policy. \" I suggest making the last paragraph the second paragraph. It's easier to read than the others and provides an introduction to the model before you begin talking about how it might fail. I may come back to this section to provide wording suggestions. Section 5.2: I suggest moving the last paragraph to the first in this instance. It also provides a nice high-level overview. The current paragraphs 1 & 2 flow well together, hence this order suggestion. The third paragraph is a really long way of stating that interoperability is required. Entities must be able to create, send, receive, and process data (hence the need for a standard). It's fine, but could be shorter. Section 5.3: OLD: \"One variation of the background-check model is where the Relying Party and the Verifier on the same machine, and so there is no need for a protocol between the two.\" Proposed: \"One variation of the background-check model is where the Relying Party and the Verifier are on the same machine, performing both functions together. In this case, there is no need for a protocol between the two.\" I think the addition is necessary or you could scratch your head wondering why no protocol is needed.
If they are separate functions on the same system, you'd need some way for the relying party to access the results of the verifier. The next sentence could benefit from being made into several. OLD: \"It is also worth pointing out that the choice of model is generally up to the Relying Party, and the same device may need to create Evidence for different Relying Parties and different use cases (e.g., a network infrastructure device to gain access to the network, and then a server holding confidential data to get access to that data). As such, both models may simultaneously be in use by the same device.\" If the server holds confidential data, why does it need to create evidence to access the data? I'm not following the second example as written, so my proposed text may not be as intended. Proposed: \"It is also worth pointing out that the choice of model is generally up to the Relying Party. The same device may need to create Evidence for different Relying Parties and/or different use cases. For instance, a network infrastructure device may attest evidence to gain access to the network or a server holding confidential data may require attestations of evidence to gain access to the data. As such, both models may simultaneously be in use by the same device.\" Section 6: The first 3 sentences are fine, but if you wanted to reduce it, it could be one sentence. Current: \"An entity in the RATS architecture includes at least one of the roles defined in this document. As a result, the entity can participate as a constituent of the RATS architecture. Additionally, an entity can aggregate more than one role into itself. \" Proposed: \"An entity in the RATS architecture includes at least one [or more] of the roles defined in this document. \" Or more is redundant, so that really isn't necessary to convey the same point. Then the last sentence: I'd change it a little from: \"In essence, an entity that combines more than one role also creates and consumes the corresponding conceptual messages as defined in this document.\" To: \"In essence, an entity that combines more than one role may create and consume the corresponding conceptual messages as defined in this document.\" It might not, it could be the verifier and relying party. Section 5: Consider changing from: \"That is, it might appraise the trustworthiness of an application component, or operating system component or service, under the assumption that information provided about it by the lower-layer hypervisor or firmware is true. \" To: \"That is, it might appraise the trustworthiness of an application component, operating system component, or service under the assumption that information provided about it by the lower-layer hypervisor or firmware is true. \" Section 7: I'd say it's really a stronger level of assurance of security rather than a stronger level of security in the following: \"A stronger level of security comes when information can be vouched for by hardware or by ROM code, especially if such hardware is physically resistant to hardware tampering. \" Then with the following sentence, \"The component that is implicitly trusted is often referred to as a Root of Trust.\" I think it would be worth mentioning that a hardware RoT is immutable and mutable RoTs in software are also possible, offering different levels for an assurance of trust/security. Section 9. 
Since ROLIE [RFC8322] has been added to SCAP 2.0, it would be good to list that as an option, since a lot of time and effort went into how to secure the exchange of formatted data for that protocol. Section 11 (Privacy): This is a good start. Remote attestation also provides a way to profile systems as well as the user behind that system. If attestation results go higher in the stack to include containers and applications, it could reveal even more about a system or user. The scope of access needs to be emphasized, including the administrators' access to data (or restrictions). If there is a way to make inferences about attestations from their processing, that should be noted as well. Section 12: This will need to state an explicit requirement for transport encryption. In the introductory paragraph, the text is as follows: \"Any solution that conveys information used for security purposes, whether such information is in the form of Evidence, Attestation Results, Endorsements, or Appraisal Policy, needs to support end-to-end integrity protection and replay attack prevention, and often also needs to support additional security protections. For example, additional means of authentication, confidentiality, integrity, replay, denial of service and privacy protection are needed in many use cases. \" Proposed: \"Any solution that conveys information used for security purposes, whether such information is in the form of Evidence, Attestation Results, Endorsements, or Appraisal Policy, must support the security properties of confidentiality, integrity, and availability. A conveyance protocol includes the typical transport security considerations: o end-to-end encryption, o end-to-end integrity protection, o replay attack prevention, o denial of service protection, o authentication, o authorization, o fine grained access controls, and o logging in line with current threat models and zero trust architectures.\" Best regards, Kathleen"} {"_id":"q-en-ietf-rats-wg-architecture-072a1fedee59cd1ae661236a08739bc482ea51071515732266808e8a6b9453d6","text":"This is to partially\nIn section 4.2 there is a list of \"places an attesting environment can exist\" that is limited. The text should be softened to say the list is only examples and there is no formal limitation or requirement on the security of an Attester. There should probably be a detailed description of Attester security in the document, but it should be in Security Considerations. This seems like a glaring omission from Security Considerations.\nThe considerations weren't the extent of the issue. There is still text in 4.2 that I think needs changing."} {"_id":"q-en-ietf-rats-wg-architecture-963267e5e29b79e06e3b1abd80595536fb774f3202bb9b999a5e023c54477c98","text":"Change \"Evidence Appraisal Policy\" -> \"Appraisal Policy for Evidence\" Fixes editorial nit reported to me by Tolga Acar Signed-off-by: Dave Thaler\nlooks good."} {"_id":"q-en-ietf-rats-wg-architecture-25975f0f58bab1de235a101d28f953d2f9dd0a643b751283271f01711038e46a","text":"Also, my assumption is that all messages, configs and such go over one of the arrows in the figure. There are no back channels for things that are vital to or a major part of attestation.
Seems like it would be a poor document if the major parts of attestation are not explicitly carried in the flows depicted."} {"_id":"q-en-ietf-rats-wg-architecture-2c6d16f39d04cf3a45b18b8a3b11484cb97ad901d6ce52cb5759e9e0659bdbc3","text":"Merged from draft-thaler-rats-URL Signed-off-by: Dave Thaler\nPR and PR merged text that I had, and continue to have, problems with. A summary of the problems follows: 1) It includes events (CC, RP) that are not used in any checks in the text. I had pointed this out before when the PR was generated, but the PR was merged without addressing this comment. 2) The section about using timestamps with synchronized clocks was removed and replaced with a section that talks about a different mechanism (handles) that is not in any RFC or WG document. I believe that such replacement is inappropriate. If the WG adopts a document that talks about a handle distribution mechanism, then a section can be added. Or if there is a standard from another org that can be referenced then it's fine to add one in that case too. But I think it should not replace the section discussing the well-known technique of synchronized clocks (e.g., using secure NTP, or PTP (IEEE 1588-2002), or whatever other standard mechanism). 3) time(HD) is problematic as phrased since all the other events are times as seen by a single entity, whereas time(HD) does not clearly say whose perspective (i.e., whose clock) it is from... is it the distributor's? Text is unclear. 4) It talks about nonces in a timestamp section, which I think causes confusion since there's already a separate section about nonces and this section is labeled to be scoped to timestamps. Addressing might address this naturally, if it uses a different section title. 5) There are some editorial typos, such as capital \"The\" in the middle of a sentence, etc. 6) I have no idea what \"delta(time(HD),time(EG))distribution-interval\" is supposed to mean. Normally I would expect an equality for any condition to be evaluated, but I have no idea what this check is. 7) time(HD) and time(EG) are according to different clocks (if my guess about point 3 above is right) and so the delta is not directly comparable. I'm not sure how point 6 accounts for this. 8) The phrase \"A nonce not predictable to an Attester (recentness & uniqueness) is sent to an Attester\" has \"(recentness & uniqueness)\" which doesn't read well (sounds like bad grammar) to me, and is not explained.\nHopefully PR now addresses this issue.\nPR reverts the text, leaving the work in progress in PR\nfixed by ."} {"_id":"q-en-ietf-rats-wg-architecture-d0c122e82b3940419986bb91f7c1170a4c54293e3ec50c9b1192e4422f7eda02","text":"This attempts to address the remaining feedback on PR on issues that were not fixed before it was merged. Signed-off-by: Dave Thaler\nThe simple word choice / grammar changes seem OK. I still don't understand the difference between a \"claims collector\" and a \"verifier\" (in the case where Attester B, C,... are signing evidence) and a \"target environment\" (in the case where Attester B, C, ... are not signing evidence). It seems this term isn't actually necessary. In the case where a Composite Attester is relaying Evidence from \"sub\" attesters, the composite attester (aka claims collector) is relaying Evidence, not \"collecting claims\" in the sense that an attester \"collects claims about a target environment\".
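For reference on the two freshness techniques argued about above, a minimal Verifier-side sketch: a timestamp check that assumes Attester and Verifier clocks are synchronized (e.g., via secure NTP or PTP), and the nonce alternative for when they are not. The 60-second window is an arbitrary appraisal-policy choice:

```python
import secrets
import time

MAX_EVIDENCE_AGE = 60.0  # seconds; an appraisal-policy choice, arbitrary here

def fresh_by_timestamp(evidence_generated_at: float) -> bool:
    """Accept Evidence only if its generation time is recent enough."""
    age = time.time() - evidence_generated_at
    return 0.0 <= age <= MAX_EVIDENCE_AGE

def new_challenge_nonce() -> bytes:
    # When clocks cannot be synchronized, an unpredictable nonce that the
    # Attester must embed in the Evidence proves recentness and uniqueness.
    return secrets.token_bytes(16)
```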
I think it is confusing to overload this terminology to also mean \"relay Evidence\"."} {"_id":"q-en-ietf-rats-wg-architecture-0fc07fe197e2f01a18d704efde1e9e8f6962168509e835ee686942e4137d04b8","text":"adding the Reference Values to fix issue 185\n` And the Verifier uses the Reference Values\nI agree that Reference Values should be mentioned here."} {"_id":"q-en-ietf-rats-wg-architecture-1fc45397fcdd39a5db09f50319023a72ce5062080292aa61f57c0830568f9b33","text":"update the \"Combinations\" section to fix issue .\nHow does this show that both models are required in this example? I suggest changing this sentence in this way. Because I think the combinations have several possibilities and there is no need to detail all possibilities. Any other thoughts?"} {"_id":"q-en-ietf-rats-wg-architecture-cccb07732ef6ef82b91203deb1e3ba8d26ba94549e2d655ef99820e6d0894088","text":"Signed-off-by: Dave Thaler\n` Isolate? From what? “Isolation” seems to be used several times in this and later sections without saying who’s isolating what from whom.\nI don't understand that sentence either. Propose removing it. Another sentence: Propose removing \"isolation and\" Other uses of isolation I think are fine, where it means that an Attesting Environment cannot be modified by a Target Environment it measures. Any way of protecting that might be called \"isolating\" the two environments from each other in some way."} {"_id":"q-en-ietf-rats-wg-architecture-1dc1b1d34b66c29a012b87e0a018d7137c2244fd0da361ee55c114dd780fcc2b","text":"` 1) Is this a normative statement? 2) I think the next paragraph says that maybe it’s not an asymmetric key, and anyways, keys are out of scope for RATS Arch\nIn a layered device architecture the previous layer \"provisions\" the next layer. Hence, it isn't possible for the 'manufacturer' entity to do that. I don't know why a section on \"Verifier\" is going into so much detail about the Attester."} {"_id":"q-en-ietf-rats-wg-architecture-d03e0eea170856cd74db46d15f1762751b527022f221ba56cf3630a8ccb66fad","text":"` Does this mean to imply that bogus policy can't be installed by sneakernet, e.g., on a USB thumb drive??\nI don't think the 'via a network protocol' is necessary. The consideration is simply that policies have security considerations that are out of scope of the architecture."} {"_id":"q-en-ietf-rats-wg-architecture-4560e78fd87d0dfa46462bb86529df8f7452ac7f67c9c150d63bb0f9f4ead188","text":"` The loader environment only exists for a second or two, and has few resources… how is it supposed to report evidence to a verifier in that window?\nRefers to figure 3, in section 3.4, on layered attestation.\nIt is rare that early layers generate or convey Evidence. In critical systems with appropriate protected capabilities that actually happens. I can see that it is not a common case and would not argue strongly against removing 'B' from that conveyance arc.\nPerhaps a better example for which the loader environment is not ephemeral would be where B is the OS and C is an application/workload process."} {"_id":"q-en-ietf-rats-wg-architecture-4e9a2517b97b09a01f779c7078356b66c54b947cde9d790190bb63a7b4316ab9","text":"Changes \"read-only BIOS\" to \"ROM\" to align with the diagram\nI think the text “the read-only BIOS in this example, has to ensure the integrity of the bootloader” is wrong since the example no longer has a “read-only BIOS” component. It should say “ROM” to be consistent.
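A minimal sketch of the layered pattern behind the ROM/bootloader example above: each layer measures the next before launching it, PCR-extend style. SHA-256 and the placeholder images are illustrative:

```python
import hashlib

def extend(register: bytes, component_image: bytes) -> bytes:
    """PCR-style extend: fold the next layer's measurement into the register."""
    measurement = hashlib.sha256(component_image).digest()
    return hashlib.sha256(register + measurement).digest()

register = b"\x00" * 32                     # initial value held by the RoT
register = extend(register, b"bootloader")  # ROM measures the bootloader
register = extend(register, b"kernel")      # bootloader measures the kernel
# The final register value, signed by the Attesting Environment, becomes the
# Evidence a Verifier appraises against Reference Values.
```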
If ROM is determined to be ambiguous as a Root of Trust and we decide to change it to \"RoT\" then this sentence would be updated accordingly.\nNAME do you want to fix this still?\nI'll add it to my list."} {"_id":"q-en-ietf-rats-wg-architecture-80c2b366fb4c571071870aec2b6042eb1aaab27f57bc7c07596822f56555a86f","text":"attempt to provide forward reference as requested on Monday\nThis looks good to me, although I wonder whether the sentence would make more sense at the end of the \"Implementation Considerations\" section (a couple lines earlier in the document). I am ok either way.\nWorks for me. That would put that into section 1.\n\"artifacts are defined by which roles produce them or consume them\" - This sentence structure seems awkward. Is \"artifacts are defined by the roles that produce or consume them\" better? LGTM"} {"_id":"q-en-ietf-rats-wg-architecture-eb5f0f144c2fe5207e7508449cf984389701b15595f89dbcc839d9835df9d4e3","text":"Previously the text was specific to policy for Evidence, but the same applies to policy for appraising Attestation Results, so generalizing the text. This also removes a redundant sentence where older text and newly merged text from this morning said the same thing. And the diffs look larger than they really are, since some text was just moved around to be in a different order. Signed-off-by: Dave Thaler"} {"_id":"q-en-ietf-rats-wg-architecture-13bfb7a95016743de858e0368d690afb2de80d2956a2ed4269b1668b42ba9068","text":"only the most essential items: added three more timestamp types based on TUDA, trying to generalize it a bit; added a few periods to text that seems to express sentences"} {"_id":"q-en-ietf-wg-privacypass-base-drafts-baa18be73c82f98e57e95f710574324c2b6f4c880701ad5368c5d8dbdfd0e7da","text":"cc NAME\nThere should be documentation that addresses the risks of the centralization in the current Privacy Pass architecture with potential solutions. This was raised in the previous WG meeting: URL This documentation should either be incorporated into the architecture document as a section when discussing the shape of the ecosystem, and/or security considerations. There is also the option of declaring a separate document for addressing this.\nA draft has been put forward for this by Mark McFadden: URL, and will be discussed at IETF 110. Will leave this open for now, until this draft (or some form of it) is accepted."} {"_id":"q-en-ietf-wg-privacypass-base-drafts-5ec83ff5c40f2361ee1c727fbf63090dba4973abfcfd39f3bd7858f57859c324","text":"Supersedes .\nSection 6 states: But there's just one subsection. :smiley: More importantly, it's not clear that the Clients are the malicious ones in the examples provided for token exhaustion.\nResolved in by removing the section header.\nThe security considerations section starts: But only one item, Token Exhaustion, is listed. Should there be more items listed here? Or should the section be collapsed to not have a subsection?\nResolved in by collapsing."} {"_id":"q-en-ietf-wg-privacypass-base-drafts-505a90afd2bd32e739ec770c5cf87b1ac488b303912f015ff2f215556b5d89ad","text":"[x] 1. The Reference PPEXT is not cited in the document. It seems it could be cited in the intro. PR [x] 2. I think it would be good to explicitly say that the HTTP authentication protocol is targeted at redemption and the issuer protocol is targeted at issuance. PR [ ] 3. Last paragraph in section 3.2.1 mentions Origin Allow Lists. These allow lists are not specifically mentioned earlier in the document. Can you clarify this? [ ] 4.
In sections 4.3 and 4.4, is the attester involved in the TokenRequest and TokenResponse messages? Is the attester proxying the communication? It would be good to clarify this in the text. [ ] 5. Section 6 has an incomplete reference. [x] 6. It seems that the Reference to RFC 8446 is not needed. PR [ ] 7. The link - URL does not work; perhaps this is the updated one? - URL"} {"_id":"q-en-ietf-wg-privacypass-base-drafts-73341c9cc21228aad4e8143494aa598b0833fe4b8265625e3138e65239326c","text":"it's not crystal-clear what the interaction model for this auth scheme is. In many auth schemes, the credential is repeated on each request, to keep the protocol stateless, and to prove the client's identity. However, that doesn't seem to be desirable here -- not only would it make all of those responses uncacheable, it would (AIUI) 'spend' a lot of private tokens. OTOH, the 'realm' parameter is explicitly supported (albeit without much detail), which implies the opposite. I think the preferred interaction model is that a token is only spent upon an explicit challenge from the server, but it was difficult to discern this from the documentation. Either way, it'd be good to clarify this.\nCorrect, the client shouldn't spend a token multiple times, and shouldn't keep including the credential on future requests. We could drop the realm text, but I think allowing it as a MAY does leave open use cases that might want to segment up different use cases of tokens.\nYeah, unless there's a reason to omit it, I think keeping the MAY for the realm is fine. I tried to get a sense for how clients are intended to use other auth schemes from RFC 9110, but it wasn't clear to me. NAME do you know of other auth scheme specifications that make the interaction model more clear?"} {"_id":"q-en-ietf-wg-privacypass-base-drafts-afd1154174b1f4662c37f73fbd4e248ed5fba500c65ef20f53eb7daadb01cc66","text":"There are two cases where the term seems to be used with importance. I suspect that the Internet isn't actually that important in the context / scope of this protocol document, especially since the architecture document doesn't mention it.\nYep, good catch. It's not.\nIs there a more polite way to phrase this disclaimer?\nThis follows RFC 2119 (URL) and RFC 8174 linked from terminology, where these are reserved phrases with specific meaning in IETF specifications and documents. Though technically these should probably be \"SHOULD NOT\" and \"SHALL NOT\" to be in line with the RFC terms.\nWhat requirements are being defined in the quoted text though? Is it simply a statement of fact like \"this document doesn't cover architectural stuff\" or is it \"implementations of the issuance protocol SHOULD account for considerations related to operations and privacy described in the architecture document\"?\nNAME this is just a statement of fact -- we don't need to shout it =)"} {"_id":"q-en-ietf-wg-privacypass-base-drafts-ea646b7d6cba8a05fc7b2e836a83fe3f140a7aa75a6d849e1dc35f2ced5f917c","text":"In this image: !
Something to at least imply that the Attester is either implicitly trusted to do attestation, or, perhaps in future versions of Privacy Pass, attestation proofs are sent directly to the Issuer.\nMove diagram to section 3 (architecture). Also move much of the second intro paragraph down. Add more text about the actual steps to section 3, such as a numbered list\nIn here, introduce the basic privacy premise for how the redemption and issuance contexts are separated\nThe architecture references [AUTHSCHEME] for key concepts far more than I expected. For instance: Or: Can the architecture define the messages in their own right, then point to [AUTHSCHEME] for example implementations?\nIs the suggestion here to describe -- conceptually -- what Token and TokenChallenges are in the architecture document? If so, that seems quite reasonable.\nYeah. An architecture is most useful if you don't have to dig into the details to understand it and those references are somewhat critical to comprehension."} {"_id":"q-en-ietf-wg-privacypass-base-drafts-6ba56cdaafec308d4428c327bdc651e9ed80c8cfd87721362e23b6ddea2929a2","text":"Is really true? A challenge is just a declaration from the Origin about what the Client should offer. It exerts no control. I would have thought that the arrangement is thus: The Origin determines what sorts of token are acceptable, including whether a token is necessary. This might incorporate any contextual information and so could change between different Clients or over time. A valid token includes information that an Origin can use to validate it against its policies. This includes all the stuff in the list that follows the quoted text (issuance protocol, issuer, redemption context, origin). Importantly, it does not carry information that directly identifies the Client to any entity other than the Origin; this being the primary privacy benefit to Clients. If the Origin determines that a token is necessary it can issue a challenge to the Client. This challenge might imply that a token that meets these constraints would be acceptable, but see the first point (i.e., the Origin can change its mind). There should be enough information in the challenge for the Client to obtain a token under reasonable assumptions (the Client understands the request and meets the requirements for token issuance). Clients can pre-emptively request token issuance. This might be based on a previous indication that this might work. (I see some caching in the trust tokens API.)\nI'm probably missing the confusion that led to this issue. The arrangement you describe is true, and in that arrangement the only tokens that will be accepted for an Origin are those which verify with a valid challenge. That means the challenge determines how the client produces a valid token. For example, the challenge indicates the token type and therefore the issuance protocol required to produce a token. It also indicates a redemption context and therefore whether or not clients can preemptively request and cache tokens to amortize the cost of issuance. I don't think a challenge is just a declaration of what the Client should offer. To me, challenges fundamentally determine the set of possible tokens that clients can produce, and that implies some level of control. If \"control\" is somehow problematic or confusing, perhaps we just phrase this differently. That might be \"The challenge determines the set of tokens the Origin will accept for the given resource. 
This is because tokens cryptographically bind the challenge to a client-chosen nonce.\"\nAs far as I can tell, yes, challenges do constrain what a client can produce, in that, in order to produce some tokens, you need a challenge. However, the text here might be read to imply that it constrains the server in terms of what it might accept in the future. So yeah, using \"control\" is problematic as it too directly implies a link between the signal and the server actions that follow. I would have said that maybe \"A challenge provides the client with the information necessary to obtain tokens that the server might subsequently accept (in this context).\"\nThat works for me! I'll send a PR."} {"_id":"q-en-ietf-wg-privacypass-base-drafts-167fd56233a705a3a1d319b7a4737cb84b21d9ecd370f95bb78e5215f9d2adb7","text":"After talking with NAME and NAME it seems like we could benefit from more clarity around attestation. In particular, we could improve clarity by providing a mapping between the terminology used in the architecture document and the terminology used in the RATS working group (which already does attestation stuff). Moreover, we could provide more examples of how attestation might actually work in practice. Currently, the diagrams just have this opaque \"Attest\" message that's sent in the issuance protocol, but expanding on that with details and an example or two could help align everyone's mental model."} {"_id":"q-en-ietf-wg-privacypass-base-drafts-099f3e231b502c3a28acf97a60f5e76a11912d31d842d9554193b516b1d58db3","text":"The issuance overview step 3 says that the Client and Attester interact to do whatever is necessary to convince the Attester to attest. But there doesn't seem to be any pretext for this process to be initiated. Step 4 says that the client requests issuance (via the Attester), which would provide the necessary cause to start the attesting process. Should these steps be swapped?\nEither the attester is an intermediary to the flow (which is what we're doing in practice), or it is a separate flow. So the prerequisite is that the client knows it wants to request a token from an issuer, and then it does attestation.\nPerhaps this could be made more clear if we say that the client, upon deciding it wants to invoke the issuance protocol, then begins the deployment-specific attestation process with its trusted attester? I can suggest some text for this."} {"_id":"q-en-ietf-wg-privacypass-base-drafts-018e45d4707b11774a4aaaec3168015a070b9b50ad99025e2946a6e7ac56cdf5","text":"I think that this definition is incorrect. The last sentence is from the definition of Origin-Client unlinkability:\ngood catch. Copy-pasta error."} {"_id":"q-en-ietf-wg-privacypass-base-drafts-1e171af33799e07c18cb62790841d2b1111934edd3c0be468a2468c9c3ac4711","text":"NAME good suggestions! I merged them all.\nThis system extensively uses tokens that are only loosely bound to usage contexts. Possession of a token is pretty much all that is necessary. That means that tokens might be stolen or hoarded and then replayed. The architecture draft says nothing about this as a risk, but it probably should.\nAgree, especially for cached tokens. I thought we had mentioned this somewhere, but if not, let's make sure it's included."} {"_id":"q-en-ietf-wg-privacypass-base-drafts-30e03405be5111bcdff2347a951894821ce07ad2bb630b3d83e8d1d701a40a8e","text":"In a successful issuance exchange, the attester provides the issuer with information about the client.
The text in Section 3.2.1 points out that the way in which information about the client is combined could reduce anonymity set size in ways that affect privacy. Given that we have a chain of custody, I imagine that the privacy property here is about what the issuer learns about a client. More information means more information that might then propagate from the issuer onward to the origin. This means that the attester needs to limit what it passes along to only that which is relevant. The client needs to trust the attester to do this. ... or does it? The text in Section 3.2.1 implies strongly that this anonymity set size is important, but the constraints on the client-issuer interaction, especially consistency rules, very much limit what the issuer can pass along to the origin. So maybe the system depends less on trusting the attester and more on the combination of the cryptographic properties of the token and the consistency system that mitigates against misuse. None of this appears in the privacy considerations. Should it?\nOn this point: \"Clients do not explicitly trust Issuers.\" (Section 3.2.2) This would invalidate much of the discussion of anonymity sets in attestations. This is quite confusing. The client doesn't need to trust the issuer, because the issuer is unable to pass information on to others. However, the client cannot trust the issuer with information, so it has to trust the attester not to pass stuff along AND it has to use an anonymizing proxy to interact with the issuer so that nothing leaks. This is quite contradictory overall.\nThere are issuance protocols wherein the issuer learns information about the origin by design, like the . In this variant, if the attester were to pass information about the client on to the issuer, then the issuer would be able to reconstruct the (client, origin) pair, breaking the unlinkability goals. I agree that in the case where the issuer learns nothing about the origin in the protocol then reliance on the attester is lessened, but we need to make sure we accommodate future issuance protocols here. I'll think about ways in which we can try to make this more clear. This might go into the privacy considerations, as suggested."} {"_id":"q-en-ip-handling-5a1ee329b6b9ebb2c793101be24476e308317ac1a037d1050036d3c3d0702e6e","text":"8445 8174 Future documents, not future versions Fix normative refs\nSure - happy to add text of this sort if we think it would be valuable. OK. :-) OK.\nLGTM"} {"_id":"q-en-jsep-686684ebe8238c052ff9662246224c1346cbcb0235445f08920603451abbd6a3","text":"and\nTo resolve WGLC comments: URL it was agreed to incorporate a reference to this text in 8843bis: URL That text ended up in s7.6.\nWhen draft-8843bis is given an RFC number, this draft should reference the new RFC. Hi, Even though I would not like to make more changes than necessary, I am fine with \"3PCC Considerations\". However, your suggested text is very difficult to understand in some places, so let me give it a try. (The first paragraph is generic, the second SIP specific, and the third BUNDLE specific.) 3PCC Considerations In some 3PCC scenarios a new session will be established between an endpoint that is currently part of an ongoing session and an endpoint that is currently not part of an ongoing session. The endpoint that is part of a session will generate a subsequent offer that will be forwarded to the other endpoint by a 3PCC controller. The endpoint that is not part of a session will process the offer as an initial offer.
The Session Initiation Protocol (SIP) [RFC3261] allows a User Agent Client (UAC) to send a re-INVITE request without an SDP body (sometimes referred to as an empty re-INVITE). In such cases, the User Agent Server (UAS) will include an SDP offer in the associated 200 OK response. If the UAS is a part of an ongoing session, it will include a subsequent offer in the 200 OK response. The offer will be received by a 3PCC controller (UAC) and then forwarded to another User Agent (UA). If the UA is not part of an ongoing session, it will process the offer as an initial offer. When the BUNDLE mechanism is used, an initial BUNDLE offer is constructed using different rules than subsequent BUNDLE offers, and it cannot be assumed that a UA is able to correctly process a subsequent offer as an initial offer. Therefore, the 3PCC controller SHOULD rewrite the subsequent offer into a valid initial offer, following the procedures in (Section 7.2), before it forwards the offer to a UA. In the rewritten offer the 3PCC controller will set the port value to zero (and include an SDP 'bundle-only' attribute) for each \"m=\" section within the BUNDLE group, excluding the offerer-tagged \"m=\" section. __ From: Roman Shpount Sent: Tuesday, November 2, 2021 6:33 PM To: Christer Holmberg Cc: Justin Uberti ; Justin Uberti ; RTCWeb IETF Subject: Re: [rtcweb] Working Group Last Call for draft-uberti-rtcweb-rfc8829bis-URL How about we replace the SIP Considerations with: 3PCC Considerations In some 3PCC scenarios, an offer generated during an ongoing session, i.e., a subsequent offer, will be used by a 3PCC controller to establish a new session and forwarded as an initial offer to another endpoint that is currently not part of a session. For example, the Session Initiation Protocol (SIP) [RFC3261] allows a User Agent Client (UAC) to send a re-INVITE request without an SDP body (sometimes referred to as an empty re-INVITE). In such cases, the User Agent Server (UAS) will include an SDP offer in the associated 200 OK response. If the UAS is a part of an ongoing session, it will include a subsequent offer in the 200 OK response. The offer will be received by a 3PCC controller (UAC) and then forwarded to another User Agent (UA) as an initial offer. When the BUNDLE mechanism is used, an initial BUNDLE offer is constructed using different rules than subsequent BUNDLE offers. It cannot be assumed that a subsequent offer is a valid initial offer and that the endpoint that expects an initial offer will properly process such a subsequent offer. Therefore, the 3PCC controller SHOULD rewrite the subsequent offer into a valid initial offer before it is used to establish a new session. To make the subsequent offer a valid initial offer, 3PCC will need to modify all the non-tagged m= lines to include the bundle-only attribute and set the m= line port to zero. _ Roman Shpount Christer Holmberg > wrote: Hi, What about something like this: OLD: "The Session Initiation Protocol (SIP) [RFC3261] allows a User Agent Client (UAC) to send a re-INVITE request without an SDP body (sometimes referred to as an empty re-INVITE). In such cases, the User Agent Server (UAS) will include an SDP Offer in the associated 200 OK response. This is typically used for 3rd Party Call Control (3PCC) scenarios.
From a BUNDLE perspective, such SDP Offer SHOULD be generated using the procedures defined in Section 7.2.” NEW: “The Session Initiation Protocol (SIP) [RFC3261] allows a User Agent Client (UAC) to send a re-INVITE request without an SDP body (sometimes referred to as an empty re-INVITE). In such cases, the User Agent Server (UAS) will include an SDP offer in the associated 200 OK response. This is typically used for 3rd Party Call Control (3PCC) scenarios. In some 3PCC scenarios the UAS will be part of an ongoing session, and will therefore include a subsequent offer in the 200 OK responses. The offer will be received by a 3PCC controller (UAC) and then forwarded as an initial offer to another User Agent (UA) that is currently not part of a session. When the BUNDLE mechanism is used, since an initial BUNDLE offer looks different from a subsequent BUNDLE offer, it cannot be assumed that a UA that expects an initial offer will be able to properly process a subsequent offer. Therefore, the 3PCC controller needs to act as a Back-To-Back User Agent (B2BUA), and when it receives the subsequent offer it needs to rewrite it into an initial offer before it is forwarded to such a UA.” Regards, Christer From: Roman Shpount > Sent: tiistai 2. marraskuuta 2021 10.41 To: Justin Uberti > Cc: Christer Holmberg >; Justin Uberti >; RTCWeb IETF > Subject: Re: [rtcweb] Working Group Last Call for draft-uberti-rtcweb-rfc8829bis-URL The PROBLEM is that we have two endpoints, where one sends a subsequent offer, and the other one expects an initial offer. What do you normally do when you have that kind of problem? You use an SBC/B2BUA. In this case that SBC/B2BUA would be the 3PCC controller. So, my suggestion would be to remove the SHOULD text from 8843bis, and simply add a note somewhere (in 8843bis and/or 8829bis) which describes the issue and says that the 3PCC controller needs to modify the offer accordingly. Roman, thoughts on this? If the 3PCC is going to rewrite the offer SDP anyway then maybe adding a=bundle-only isn't the end of the world. I am not opposed to this idea. 3PCC typically knows that the subsequent offer is going to be used as initial, and should be able to rewrite the offer to make it valid. We can change the SIP Considerations section in 8843bis (URL), remove the SHOULD, and specify that the 3PCC controller should fix the offer. We can then reference this note from 8829bis or restate the same guidance. Roman Shpount\nNot seeing any objections for a month; once -02 is submitted we will progress this towards our AD. spt"} {"_id":"q-en-jsep-7f32b7ca8ba9fb44a7c221a0a2b1302ad0060906d23bbc8a1d795178c816f8ec","text":"URL of draft-nandakumar. This is a merge of a previously approved PR\nlgtm\nLGTM\nAs per discussion in mmusic, to make it abundantly clear that DTLS is to be used even when running over ICE-TCP, the profile name should be TCP/DTLS/RTP/SAVPF.
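For example (illustrative port and payload type only), a media section negotiated over ICE-TCP would then be written as:

m=audio 9 TCP/DTLS/RTP/SAVPF 111

rather than with the UDP profile.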
We'll leave the UDP/TLS/RTP/SAVPF one alone, since that ship has sailed, and it's less ambiguous.\nNeed Suhas to update URL to reflect the above.\nFor SCTP, URL has the correct UDP/DTLS/SCTP and TCP/DTLS/SCTP things that we want to refer to.\nThe action item here is to update this document to use the correct fields as agreed above as if draft-nandakumar was done, and then when that document is fixed we will be good."} {"_id":"q-en-jsep-cc8b4a3d2c99ac6c316dca2eb8245f917b3853820f40c4a45bc92a764a7d041b","text":"LGTM\nhttp://rtcweb-URL mentions its use at media level; this should be clarified."} {"_id":"q-en-jsep-c8a3805a92d1bd6e28395be69128521310dea8da663f859fc879313e81f5b3c9","text":"LGTM\nOverall looks good - added some minor suggestions\nlgtm\nF. Section 4.1.1: Regardless of policy, the application will always try to negotiate BUNDLE onto a single transport, and will offer a single BUNDLE group across all media sections. I think this sentence may be a bit misleading without additional sentence(s) to explain that the actual number of established transport streams will depend on the peer's response to the attempt to bundle.\nOK. This could be resolved by adding a final sentence of the form \"Of course, the answerer can choose whether or not to accept BUNDLE\"."} {"_id":"q-en-jsep-fc0609ef2534216d66f6906c1a6ac2a2aef7fa67229421e952f4b553a6b00185","text":"b=TIAS is specified in bits per second, b=AS in kbps. Hence the formula needs to multiply by 1000 to change units (e.g., b=AS:500 corresponds to b=TIAS:500000). Thanks NAME for catching this in code review.\nEven though I noticed it in the code review I had not noticed it as an issue in the spec. Good catch!\nnice catch - lgtm with minor comment\nLGTM\nLGTM to me as well. I updated the text in"} {"_id":"q-en-jsep-413af353d909fafa04cb3f31b474d5aeaedc76ca15e2ea6314274d30b8db71cf","text":"LGTM\nWFM. NAME PTAL\nThe current draft suggests moving the discussion of setRemoteDescription: [[OPEN ISSUE: move discussion of setRemoteDescription to the subsequent-offer section.]]"} {"_id":"q-en-jsep-b1449e0d22a48f677c8df366a2db4db4d30886f27d1416a4aff69295542a666f","text":"NAME removed the dangerous text.\nStill LGTM\nConsider the following sequence of actions: setRemoteDescription(offerwithicerestart); addIceCandidate(candidate); setLocalDescription(answerwithicerestart); As issue describes, when the remote peer has set an offer with an ICE restart, but has not yet set an answer, it's doing ICE processing for both the new and old \"generations\" simultaneously (assuming we go that route). So, is this a candidate for the new or old generation? If \"old\", then it can be applied immediately. Otherwise, it should be queued and only applied after setting the local description, since the local description is what triggers the ICE restart. Of course, there's a separate issue, which is that there's no way of telling which \"generation\" the candidate applies to. One proposed way of fixing this is to trickle a ufrag/password along with the candidate. Another way would be to queue the candidates from the \"old generation\" before signaling them, and only signal them after rolling back the local description. If instead of rolling back the local description, an answer is applied, these queued old candidates would be discarded. This way, once a candidate is signaled, it unambiguously applies to only the newest generation. Note that the \"candidate\" could also be an end-of-candidates indication, and the same reasoning applies.\nHonghai already has considered how to associate candidates with ufrag/pwds.
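(For illustration only, not an agreed syntax: such a marking could be an extension attribute on the candidate line itself, e.g.

a=candidate:1 1 udp 2122260223 192.0.2.1 56789 typ host ufrag 8hhY

where 8hhY is the ufrag of the ICE generation the candidate belongs to.)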
However, I think in the absence of such markings, the candidate should be associated with the new generation, since the offer specified ICE restart and the candidate followed the offer.\nI agree. Thus, if an endpoint gathers a candidate for an old generation (and doesn't have a mechanism to associate it with a ufrag/pwd), it should only signal the candidate if and when the local description that triggered the ICE restart gets rolled back. That's what I was trying to explain in the second-to-last paragraph.\nIn ICE, we agreed to have a ufrag parameter on the candidate. Where that parameter goes is still TBD; it could be in a=candidate or on RTCIceCandidate. But this makes it clear what the JSEP implementation should do in this case.\nThe ufrag parameter would need to apply to a=end-of-candidates as well.\nThis is good news. But now we need to go back to the drawing board with trickling end-of-candidates. If the ufrag needs to be trickled, we can't just trickle a \"null\" candidate. And we can't trickle a single special candidate with a ufrag, because different media descriptions could have different ufrags.\n:-/ Perhaps we could generate individual end-of-candidates objects, followed by a null for backcompat. The null would serve as an implicit end-of-candidates for the current ufrag, but new apps would never use the null.\nI think I lean towards having the trickle candidates strongly tied to a ufrag/pwd.\nTo provide an update on this issue: I think we still want to clarify that if there's a pending ICE restart initiated by the remote peer, any remote candidates from the new \"ICE negotiation session\" should be queued and only processed by the local ICE agent once the stable signaling state is reached. I don't think this can go in the trickle ICE spec, because trickle ICE has no concept of a pending ICE restart (it's a consequence of having the ability to roll back an ICE restart offer). Also, last I heard the decision was to trickle the ufrag and mid along with end-of-candidates indications. So the second paragraph of the section will need to be changed to reflect this; it's no longer one indication \"for all media descriptions in the last remote description\".\nI've dealt with the second paragraph in PR . The first seems mostly like a \"can't happen\" to me. As an offerer, you should not be receiving candidates before the answer (that indicates a signaling channel reordering issue). I suppose in principle as an answerer you could receive ice candidates while thinking about the offer, but I think that the answer there is you just discard them.\nSee also"} {"_id":"q-en-jsep-b0cfefec767adfb583d00a2fbba92a75cdd98e05658d3d17c38cc6e362f25d41","text":"NAME\nNAME PTAL\nLG with nits\npara break and typo comment were missed\nIn the current draft we have addTransceiver [TODO] We need this text before WGLC, so it's being added to the punchlist."} {"_id":"q-en-jsep-64290478acaca974874abf138e11baa554736ed2c0360faf8b1fc906e423872c","text":"Talk about what to do with header extensions in a remote description Expand discussion of how to handle payload types in a remote description Fail things we don’t know about in an answer ,\nLGTM\nNAME PTAL at my responses\nURL If any indicated payload type is unknown, it MUST be ignored. 
[TODO: should fail on answers] (q: does this logic need to also include processing for other lines - extmap, rtcp-fb, etc - that weren't present in the offer?)\nFixed by\nURL 2) * For each specified payload type that is also supported by the local implementation, establish a mapping between the payload type ID and the actual media format. [TODO - Justin to add more to explain mapping.]\nRelevant to as well\nFixed by\nURL 1) If the media section proto value indicates use of RTP: [TODO: header extensions]\nLGTM"} {"_id":"q-en-jsep-df9f4f15bb2fc71bf62ec72bec83c4364a152d886d3509c3a7189e97aa231a3b","text":"This is OK with me\nS 5.2.2 says: If the initial offer was applied using setLocalDescription, but an answer from the remote side has not yet been applied, meaning the PeerConnection is still in the \"local-offer\" state, an offer is generated by following the steps in the \"stable\" state above, along with these exceptions: o The \"s=\" and \"t=\" lines MUST stay the same. o Each \"a=mid\" line MUST stay the same. o Each \"a=ice-ufrag\" and \"a=ice-pwd\" line MUST stay the same. However, Section 5.2.3.4 says: If the \"IceRestart\" constraint is specified, with a value of \"true\", the offer MUST indicate an ICE restart by generating new ICE ufrag and pwd attributes, as specified in RFC5245, Section 9.1.1.1. If this constraint is specified on an initial offer, it has no effect (since a new ICE ufrag and pwd are already generated).\nYes, if this is an ICE restart, then ufrag/pwd need to change.\nDiscussed in DC: restart requires a change of credentials. Might want to add text that notes the dangers in doing this (i.e., don't do this)."} {"_id":"q-en-jsep-cef7d845e19b3e7b3b4434af8f0790bbd56bcde755a2f1cbf71aa523dab3c350","text":"Provisional answers, on the other hand, do no such deallocation; as a result, multiple dissimilar provisional answers can be received and applied during call setup.\nFixed by"} {"_id":"q-en-jsep-83e48c0f351ac09a47037cf825e147c517b873a94d68c657653ffa418df191f8","text":"Fixes issue .\nSection 4.2.3: The setDirection method sets the direction of a transceiver, which affects the direction attribute of the associated m= section on future calls to createOffer and createAnswer. [BA] \"sets the direction of a transceiver\" is a bit confusing because setDirection() doesn't immediately change whether an RtpSender can send or an RtpReceiver can receive. Since setDirection does immediately change the value of the transceiver.direction attribute you might say \"sets the transceiver.direction attribute\" instead. Also, an application may want to determine the current direction (e.g. whether an RtpSender can send or an RtpReceiver can receive) as opposed to the pending direction (provided by transceiver.direction). Taylor agreed to create a WebRTC 1.0 PR to add the concept of current direction (e.g. by adding a currentDirection attribute). It would therefore be useful to add some text to Section 4.2.3 explaining the distinction. For example: \"Note that while setDirection sets the transceiver.direction attribute immediately, this pending direction does not immediately affect whether an RtpSender will send or RtpReceiver will receive. After calls to setLocal/setRemoteDescription, the direction currently in effect is represented by transceiver.currentDirection (the attribute Taylor will be defining).\"\nAs for naming: which of these makes the most sense?
\"direction\"/\"currentDirection\" \"pendingDirection\"/\"currentDirection\" \"preferredDirection\"/\"currentDirection\" \"preferredDirection\"/\"negotiatedDirection\" I like \"preferredDirection\" (or something of that nature); \"pending\" is a bit of a misnomer here because it's possible the set direction may never end up negotiated.\nNAME My personal preference would be \"direction\"/\"currentDirection\". While \"pendingDirection\" might be more accurate than \"direction\", I'm a bit scared of all the spec changes that might be result from changing the attribute name.\nAgree with NAME\nLGTM (direction + currentDirection).\nNAME can this now be closed?\nI don't think Bernard's concern was addressed, which is that \"sets the direction of a transceiver\" is too ambiguous. Now that we have a \"currentDirection\" to reference, I'll add some clarifying text like he suggested.\nResolved by ."} {"_id":"q-en-jsep-b97144590971d312b8628e3cca9e35666493156a8cf3f2ff234459797a85c9d9","text":"LGTM\nIf RTCP mux is indicated, prepare to demux RTP and RTCP from the RTP ICE component, as specified in [RFC5761], Section 5.1.1. If RTCP mux is not indicated, but was indicated in a previous description, this MUST result in an error. I note that there can be difference here between an offer and an answer if the RTCP mux indication may have changed or not between these two description.\nAgreed. If we offered rtcp-mux, but it wasn't accepted, the next offer would be without mux. It should only be an error if we accepted mux and already discarded the RTCP ICE component."} {"_id":"q-en-jsep-a801952321e0d2b7aa32af4e1d96102b720d5129e414b77e29c3dfc571422da8","text":"NAME don't love the way the bit about the pool turned out but I don't have better text. Fixed. Consensus here was this is fine. No, this seems fairly clear. URL Done. Done. Do you mean RTP here? If so, I'm feeling OK about this. Done.\nMerging and will tweak post-merge"} {"_id":"q-en-jsep-6b0f9285053b6efb3b8c361ec9ee86ae55d14423d6c7a807eb8123cb66f587d9","text":"Originally raised by Christer Section 3.2 says: \"JSEP also allows for an answer to be treated as provisional by the application. Provisional answers provide a way for an answerer to communicate initial session parameters back to the offerer, in order to allow the session to begin, while allowing a final answer to be specified later. This concept of a final answer is important to the offer/answer model; when such an answer is received, any extra resources allocated by the caller can be released, now that the exact session configuration is known.\" 1) I would like explicit text saying that this is a deviation from RFC 3264, where there is no such thing as a provisional answer. I would like to have a dedicated ³Deviations to RFC3264² (or something similar) section. 2) I would like explicit text saying that the final answer may be completely different (in terms of candidates, codecs, accepted m- line etc) from the provisional answer. 3) Related to 2), the final answer may also require NEW resources to be allocated.\nWe discussed this and we don't think text is needed for 1) or 3); the offerer doesn't release resources until the final answer. For 2), what kind of text did you have in mind?\nFew notes to cullen from talking wtih Christer Christer proposals ... adding section \"SDP Offer/Nswer Usage\" document variation we make on SAVPF stuff explain how ANSWE / PR answer are used withen paringin multiple to single offer Points to section for details ... 
follow procedures in 3264\nCullen, I think I improved this. PTAL"} {"_id":"q-en-jsep-92015ad4733dda27b12a96ec9ad1f9a3749ce67d805041457738547577bb8c04","text":"All the xrefs I removed are used elsewhere in the spec, other than the one which I added to where it is used.\nFixed as Justin suggested and ready for re-review\nNAME - have a look - I think I got all the changes\nI tweaked the discussion of unknown a=; now LGTM"} {"_id":"q-en-jsep-33db5a40ceed032528a8435bffafcf5d787436d25a9783a0ebd5c3b159694393","text":"o If the m= section is not bundled into another m= section, an \"a=rtcp\" attribute line MUST be added with the port and address of the default RTCP candidate, as indicated in [RFC5761], Section 5.1.3."} {"_id":"q-en-jsep-4e55b851c8a3fde23430c947313d3b1e2db81a242ff0732ce0c033fc41c6b57b","text":"In Section 5.4, JSEP says: \" The SDP returned from createOffer or createAnswer MUST NOT be changed before passing it to setLocalDescription.\" However in Section 5.8, it says: \"First, the parsed parameters are checked to ensure that they are identical to those generated in the last call to createOffer/ createAnswer, and thus have not been altered, as discussed in Section 5.4; otherwise, processing MUST stop and an error MUST be returned.\" [BA] So is any change to URL unacceptable (e.g. must be byte for byte identical)? Or is the text trying to say that the parsed representation must be the same (e.g. \"a=\" lines appearing in a different order shouldn't matter)?\nI believe the expectation was that the description is entirely immutable. Verifying isomorphism in the face of minor modifications seems like something that is not entirely trivial, and TBH I don't quite understand why it would be needed.\nThanks. PR URL assumes that URL is immutable.\nI will update 5.8 accordingly."} {"_id":"q-en-jsep-f3811b381bfeb4282a1c39175bde0b541400dd18eaee02a0d3720393686bb0aa","text":"I fixed the grammar and added some text about \"subsequent verification\"\nAlso see URL\nWhat's the issue?\nSomeone thought we should talk about how to parse SDP with more than one identity line and handle that. I did not look; we may already have whatever we need in the draft but I just opened this so we would at least go check.\nJSEP practice is just to put in placeholders and link to the defining draft, so we should punt the actual semantics in security-arch.\nSee email to rtcweb from Stefan Håkansson subject \"Identity in the SDP\"\nSayeth Stefan: >[2] URL"} {"_id":"q-en-jsep-e1a5fe9120465a6cdf890672d1beae0ef7e211ce46ec768ea1df4632c4aefbbb","text":"Note that the use of CVO in webrtc is described in URL\nNAME feel free to rewrite however and merge. I just wanted to get the xref in.\nWe mention it in the imageattr section. This is required regardless of whether the receiver supports performing receive-side rotation (e.g., through CVO), as it significantly simplifies the matching logic.\nNAME , can you look into this? Not WGLC2 blocking.\nwill do - I seem to recall it is a 3GPP ref but will try and track down."} {"_id":"q-en-jsep-c6e87f83777ad41233596baa95d71a982f04c01bcfcc9b50ac318b2243c35605","text":"I was reading this draft and noticed a couple of minor wording issues that made the text not as clear as it could be.
I don't believe the changes are controversial, or change the intention of the draft.\nThese changes look good - I agree the rewrite of the offer/answer paragraph is a nice improvement.\nLooks fine to me"} {"_id":"q-en-jsep-a9442582e171a8e22c1b333a86acda080a77637c547a515d5b87922c2c9800bc","text":"Page 62 has a bullet ending \"If this is a local description, the 'ice-lite' attribute MUST NOT be specified.\" This sentence is leftover from before SDP munging was prohibited and should be removed."} {"_id":"q-en-jsep-3116c17f33e09dddd25bed6534f7e30da4c0e972417d8f1cab60fccdbb08616b","text":"Section 3.3, paragraph 3 is an example of text that should have been removed with SDP munging. Please remove it.\nMunging remote SDP is still allowed, but I agree that we don't need to get into the mechanics of how to do it. Agree this para should be removed, and the last para reworked accordingly."} {"_id":"q-en-jsep-a02765b5eaa713fa68908a0b8f9f1d4c9fffbc3d30a648ce1a3a738a5b80b034","text":"Fixes URL\nSee comment in\nAlright, try this version\nSection 5.8 contains the sentence: \"If there is no RtpTransceiver associated with this m= section (which will only happen when applying an offer)...\" -- it is grammatically ambiguous whether this says that the absence of an RtpTransceiver can only happen with an offer, or if the presence of an RtpTransceiver can only happen with an offer. Suggest rephrasing as: \"If applying an offer and there is no RtpTransceiver associated with this m= section, find one...\"\nNAME can you take a look at this?\nI'm not sold on this approach, since it then invites the question, what should one do if this condition happens when applying an answer? My suggestion would be to still call this out explicitly, but try to avoid the ambiguity noted above by changing the parenthetical to its own sentence at the end of the paragraph, e.g. \"Note that this situation will only occur when applying an offer.\"\nI changed it to that.\nLGTM"} {"_id":"q-en-jsep-b5004de790ccaeac832fea4dfbddda76bef8e3fc40f242b645d8282987b95bf0","text":"Section 3.6.2 would ideally include a disclaimer, like we have for many other features, indicating that -- despite normative statements in this section -- the remote side might not honor imageattr because of legacy behavior, and that implementations MUST be prepared to handle such situations.\nSure. This should probably go into 3.6 or 3.6.1, since those govern sent imageattr attribs.\nLGTM"} {"_id":"q-en-jsep-df2da1ba1bb7828f1b1ba75f385f753cc8961cbdcc7d0c052017e6cb5166f50d","text":"This seems like a normative change. I'm not saying I'm against this, but what's the rationale?\nAdam's argument in , which I found to be convincing."} {"_id":"q-en-jsep-f8816161fd1cd72d933a837aae924494ce9d61f5c9874670e3bb6c100f658f78","text":"In the last paragraph of section 8 at \"programmer MUST recognize\". That is not an appropriate use of 2119. I suggest \"needs to be aware\" instead of \"MUST recognize\".\nThis seems right"} {"_id":"q-en-jsep-1d8d024ad56abfd57bb8858febcdf2250cada11198a0bfeae3b8ddeb47a6f5d7","text":"Section 1.2 second paragraph - \"This was rejected based on a feeling\". Can you write something that speaks to consensus here? \"Based on a feeling\" isn't quite right. Maybe \"using the argument\"?\nSure.\nLGTM"} {"_id":"q-en-jsep-a9b412252a071a64cbf3ab3226846dddc66719cc9ef43de77bf6971d6f6ca845","text":"Second paragraph of 3.2, last sentence: at \"these parameters or reject them\" - \"these\" and \"them\" have ambiguous antecedents.
It's not clear if you are trying to point to the parameters that don't follow the earlier stated rule, or all parameters. The sentence is complex. Splitting it up and finding a way to avoid the need for the semicolon will likely make it easier to say what you mean.\nI agree that this is messy. Editorial.\nLGTM\nLG with comment"} {"_id":"q-en-jsep-4a1f3b51124a6e40a0fc7724ebe05a6839812372aba33da781c5e9b694f1eb62","text":"From \"Opsdir last call review of draft-ietf-rtcweb-jsep-21\" A minor observation, not even a nit, is that the text in the Security Considerations \"While formally the JSEP interface is an API, it is better to think of it as an Internet protocol, with the JS being untrustworthy from the perspective of the endpoint. \" could be more clear.\nI have no idea what to do with this. Obviously I thought it was clear when I wrote it, and still do.\nThis is the only place in the document where we use \"JS\". I think it would be good to spell out \"JavaScript\", and potentially use \"application JavaScript\" in the first usage. Finally, I think we should use \"implementation\" rather than \"endpoint\" in this para.\nLGTM"} {"_id":"q-en-jsep-aec5e1277a7bbf9a1406d8ef094add136c93b286849aefae99997a4d1cbc1dc9","text":"lgtm\nI think the dummy candidates should be v4 not v6 as many things just don't do v6 but other than that looks reasonable. You don't really need to specify the address at all; the lines, when you don't know the port, can just look like a=rtcp:9 with no IP address or family\nNAME The IP6 vs IP4 is described in URL, so this isn't a JSEP thing. Given that this is part of the trickle spec, and only trickle endpoints will get these candidates, I don't see any problem. As to whether a=rtcp can just have the port, consider the case where RTP candidates have been gathered, but no RTCP candidates have yet been gathered. In that case, we certainly need to have the full a=rtcp:9 IP6 ::, so I think it makes sense to include it in all cases where RTCP candidates have not yet been gathered.\nI don't care much about this so fine if no change is made. I agree it is a trickle-ice draft issue. However, when I read that, it seems that it would be fine with \"a=rtcp:9\" with no address. It seems the reason we used 9 was to provide something that stuff that did not implement trickle ICE would not barf on. I have not tested, but an address of \"IP6 ::\" seems like something that has high odds of stuff barfing on. Anyways, no big deal with me one way or the other. On Sep 12, 2014, at 7:49 PM, Justin Uberti EMAIL wrote:\nOnly trickle endpoints will even receive an offer with these dummy addresses and ports, so compatibility with old non-trickle stuff is not an issue. LMK if you have any further concerns, otherwise I will merge later today.\nI concur that we should merge this. , Justin Uberti EMAIL wrote:\nOn Sep 18, 2014, at 9:34 AM, Justin Uberti EMAIL wrote: ah - my bad - makes sense Merge it!"} {"_id":"q-en-jsep-9db7671d5df9932cab7a74bdf4c65c34051e3b867340ea3fb4a7698ed82a52f1","text":"Various fixes for 60.2: Removed parens from API names, removed comment. - put the term stable in quotes, removed comment. - put the term sess-id in angle brackets to match SDP spec, removed comment. - made the tls-id attribute consistent with other SDP attributes, removed comment.
- removed quotes from IceRestart option, removed comment.\n\n"IceRestart" option / IceRestart option Justin: LGTM\ntls-id value / "a=tls-id" value Justin: The source RFC seems to prefer "tls-id" when discussing values.\nDuring the editing of the Cluster 238 docs, it was decided that it's the author's choice when formatting SDP attributes. Although RFC 8842 uses the format '"tls-id" attribute', this document more consistently uses "a=attribute-name" format for SDP attributes, which is the format that the SDP spec [RFC8866] uses. I'll update this document to consistently use "a=tls-id"\nsess-id / (Section 5.2.1, second bullet item) Justin: LGTM\nthe stable state (1 instance) / the "stable" state (3 instances) >(Also, should 'the previous stable state' be 'a previous stable state' (per Section 5.7) or 'the previous "stable" state'?) Justin: The quoted form is preferred, including the case of "previous stable state".\n[Breaking up issue /60 into smaller bites] The following terms appear to be used inconsistently in this document. Please let us know which form is preferred. Justin: There seems to be a pretty clear preponderance for the non-parenthetical form, so let's keep that. W3C also does not use (). Jean: We'll remove the parens (camelCase is sufficient formatting)."} {"_id":"q-en-link-template-9f8f3ab9b436c3cab62fdcb9037f8dc8523a2fe1b3a2cfe188e5691aa8f09cd6","text":"Emerging best practice is to use for new headers, so that parsing is well-defined. Arguments for sticking with a -like syntax: Consistency with (IMO a very poor justification) Reuse of parsers / other software (it's not well-defined, tho) ???\nWe could also talk about creating a SF version of (maybe ?).\nMatching SF and headers would be appealing.\nIn the meantime (or maybe before ;) -- URL"} {"_id":"q-en-load-balancers-763e6684ca06c5282afdf0df47ce4e96cae479620332f9b8ff9ddf67b2d4e6a7","text":"and .\nIn light of , NAME concerns about the framework, and NAME declining interest in implementing, I think it's time to simply eliminate this from the document.\nThanks for all the hard work, but sorry to be the cause of it.\nFixed by\nHow does a dynamic framework survive an HA handover? It would seem to lose all the SID allocations and break all connections."} {"_id":"q-en-load-balancers-e77175e4da15472416df839aecb681c1c39a1196bb80b50d917da3f81423410d","text":"The CID format figure has been removed since draft-ietf-quic-load-balancers-08"} {"_id":"q-en-load-balancers-6ac76dfcfb731b51305ec8b7ddc25d2cddcf91cd91583b15e1e6d843f50ec3e2","text":"URL is probably not your intent, but that is what you get with this phrasing. I think that the point is that the values are not predictable to entities other than the one generating it. This appears in several places in the document.\nWell URL uses \"indistinguishable from random\" :-) But OK, I'll seek a new formulation."} {"_id":"q-en-load-balancers-9e143e249bb7e18b3bb8ae457dde427d4f5824f38ab0c9fcc05a8f936839e04e","text":"Attempt to , . Examples and test vectors not yet updated. PTAL at the description. Is the algorithm accurately described? Are things clearer or not?\nNAME Thanks for all the formatting suggestions. It's cleaned up a bit, although I wasn't able to get aasvg to work. PTAL\nCorrection. I got aasvg to work but it did not look good at all, probably due to user incompetence.
If you would like to submit a PR after this that is prettier, I would welcome it.\nStefan Kolbl points out this problem with the 4-pass method: Indeed, there are ways to avoid ugly and error-prone bit shifting while avoiding this property. In particular, we can always use expand-left, but just padding odd-bytes with zeros to preserve byte boundaries and avoid bit-shifting.\nFixed by\nSeveral problems here: A lot of the description relies on examples rather than normative language. The split of a CID that has an odd length produces HEX sequences with an odd number of characters, which in many circumstances will be interpreted as being (left-)padded with 4 zero bits. If that happened, then bad things would occur. You really need to retain the number of bits. Step 3 passes two arguments to , when it has three. The encryption doesn't bind to the configuration that is in use; instead the high bits of the first octet are set to 0. The formatting of the operations wraps awkwardly. The operators that this uses (^ and primarily) aren't defined. The third input to seems to increment without explanation. Given the examples, I could no doubt work this all out satisfactorily, but that shouldn't be necessary.\nFixed by\nA lot.\nYes, the new text seems clear, and correct."} {"_id":"q-en-load-balancers-e4d10fdad87525bd36e53b4b9a73419a0b31df4a909afacf9f2281ddc1ed34c0","text":"Added two sections to Security Considerations (Stateless Reset Oracle and Local Configurations Only). and .\nCan you speak a little about your reasoning for the recent changes? If this attack is limited to extracting the identity of server instances for other co-hosted entities, that's probably OK, but I'm having trouble connecting the SHOULD here with the preceding text.\nSure. If an attacker has the same QUIC-LB config as the victim, then it can extract the server mapping, which defeats the whole point of the spec. Obviously, an LB must have the QUIC-LB config for servers it routes to. If I'm an attacker within that group of servers I may already see the packet headers that give away the server mapping, so there's little added linkability here (though there is a little). The moment I share configuration among multiple server pools, I'm expanding the number of entities with access to the config, for no operational benefit except to make the administrator's life easier. In the absurd limit, all of AWS has the same QUIC-LB config, and essentially everyone can extract the server mapping from any CID that goes to AWS.\nAdd a security consideration to avoid the following scenario: MyCloudProvider has a single QUIC-LB config for all its load balancers. It rotates keys periodically, etc, but everyone gets the same config. Obviously, all the attacker has to do is open an account with MyCloudProvider and it is able to recover all the server IDs. Configs ought to be restricted to load balancers serving a finite set of servers. It is possible another MyCloudProvider customer is in the pool behind that load balancer, but that's already a privileged position as already described in the draft. Obviously, this will require some wordsmithing, as the statement above isn't very precise.\nAs server clusters increase in size, the need to reallocate server identifiers becomes more acute. In one model, the configuration ID is used to indicate a stable routing configuration. Server identifiers for a given configuration ID are routed to the same server, no matter how many other instances are added or removed. 
In order to allow for changes in the cluster, the configuration ID is used so that old servers can be removed from consideration and new ones added. If these changes happen frequently enough, the number of bits allocated to identifying a configuration might be insufficient. Why not make the length of the identifier flexible? That might mean that you need to make the length of the length similarly configurable.\nIt was not the intent of these bits to support long-lived configurations, instead supporting key rotation, upgrades, and the like. I would much rather people overprovisioned the server ID space than using this tool, TBH. However, the only cost is limiting the theoretical size of CIDs. At the moment, we can support up to 64B, future-proofing the encodings against future versions of QUIC. I'm open to another bit for this, but how would a configurable number of CR bits work with multiple configurations? How does a config that needs 5 bits and one that needs 2 coexist, especially if the latter needs length self-encoding?\nAs we talk through the implications of mutually mistrustful servers in , I think the case for adding another bit is compelling. I'm going to remove the needs-discussion label and come up with a PR that takes another bit.\nI'm returning this to needs-discussion, as NAME points out there is a privacy tradeoff here. If the config codespace is large, it's straightforward to have each mistrustful server have its own totally unique config. On the other hand, keeping this long-lived config difference leaks the type of flow. Assuming it's routed based on SNI, it leaks the SNI of each CID, and in that sense also increases linkability.\nAlright, I've reflected on this a bit more. In general, different server entities will have different external IP addresses and/or ports, so the load balancer can distinguish the correct QUIC-LB config without resorting to the CR bits. As IP and port are visible to everyone, there is no privacy leakage. The problem occurs when mutually mistrustful servers share the same IP/port and are switched on something else. That "something" may be something present only in the client hello, with the classical load balancer simply using the 4tuple after that. The only thing I am aware of in practice is the SNI. If there are others, please say so in this thread. If the SNI is encrypted, the unprivileged LB envisioned in QUIC-LB does not have access to it. So we can assume this use case only applies to unencrypted SNI. Option 1: mistrustful servers share the same config. A third party will be able to extract your server mapping, but in practice it will be hard for an attacker to obtain this position on purpose. If this is the best outcome, we can stick with 2 config rotation bits. Option 2: Issue them different config rotation codepoints. So an observer can see the SNI and associate it with certain CR bits; if the client later migrates, it will still be able to associate that connection with that SNI. If this is better than Option 1, we should probably add a third config rotation bit. Other (minor) costs of having 3 config rotation bits: length self-encoding can only support 31 byte connection IDs, instead of 63 (obviously this is only relevant for hypothetical future versions of QUIC); each config can have its own set of server IDs. So there is considerably more config state at the LB. As SIDs can be up to 18B, it's 18B x (# of servers) x (# of config rotation codepoints). This memory footprint will roughly double by going from 3 to 7 codepoints.
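To put rough numbers on that (purely illustrative scale): with 18-byte SIDs and 10,000 servers, 3 codepoints mean up to 18 x 10,000 x 3 = 540 KB of mapping state at the LB, while 7 codepoints mean up to 18 x 10,000 x 7 = 1.26 MB.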
I've talked myself into Option 1 as being a mildly better situation, thus sticking with 2 CR bits.\nUpdate: the ECHO design might allow the LB privileged access to the SNI, so it might be encrypted. However, an attacker could connect to the domains at that IP and obtain the config rotation bits. So option 2 actually circumvents ESNI entirely!\nClosing -- I have received no pushback on doing nothing (leaving it at 2 bits), so I'm going to do nothing.\nThe draft doesn't address the impact of each method of connection ID generation on how servers can use stateless resets. Most of this is likely bound up in decisions stemming from . If you can guess a valid but unused connection ID, then you might be able to induce a stateless reset that could be used to kill an open connection. As the draft only includes methods that include an explicit server identifier, it is possible that as long as valid values cannot be guessed, the effect is minimal and each server instance can have its own configured stateless reset key (or a shared key from which a per-server key is derived using a KDF).\nI don't understand the attack here. A given CID will deterministically map to a specific server instance. So there is no way for another server to receive a packet with that CID and generate a stateless reset. What am I missing? There might be something here with the differing treatment of long-header vs. short-header packets (and the option for servers to send resets on long headers), but I'll have to think about it more.\nAs to the last point, nope: even a long header with a DCID that conforms to the server's expectations (i.e. maps to a real server) will get delivered to that server, so I don't think that's an attack.\nNAME Should we talk about this issue more, or are you satisfied enough that I can close it?\nI don't see any mention of stateless reset in the draft at all. That's probably something worth addressing, even if it is to say what you have already."} {"_id":"q-en-load-balancers-a330bfa28acfba3a60a33df1a3e85e1631e7f3a09a8d0941c4efcb06408684c5","text":"The encrypted CID format includes a zero-pad field that is used to detect whether the decryption succeeded or not. I suggest merging this field with the server ID field, and test whether the decryption succeeded by checking whether the server ID is valid or not. This assumes that the server ID field is sparsely populated. For example, if there are just 256 servers, in theory a 1-octet field would be sufficient; instead, we could use a 4 or 5 octet server ID field that would be sparsely populated, allowing for error detection. This would allow for unified validity detection across all supported methods: clear text: verify that the server ID is valid; obfuscated: the divider needs to have the same size as the full length server ID; the modulo is the server ID; validity can be verified there. stream: decrypt and verify that the server-id is valid; encrypt: decrypt and verify that the server-id is valid. It would also allow for simplification of the configuration for the encrypted method, by specifying just one field instead of two.\nYes, I agree this is simpler with no cost at all. Care to do a PR?"} {"_id":"q-en-load-balancers-3856fc14cfb29c43cc2d4d05d680a4825b203c68ca92514522694a70ca5c9e56","text":"As the config gets more complicated, the C-ish pseudocode is getting more unwieldy.
YANG is the standard for configuration models, so I should bite the bullet and just figure out YANG."} {"_id":"q-en-lxnm-f3fe42fb9a2f96b60dae2e4fdce8a91d748bf39b1e7fb78a12c51a9d9db4ba39","text":"RFC8299 and RFC8466 use \"inbound-bandwidth\" and \"outbound-bandwidth\" from the point of view of the customer site. L3NM also uses this terminology from the point of view of the customer's site. L2NM flips the terminology, for example \"inbound\" is from PE's perspective. To avoid confusion, it would be better in both L3NM and L2NM to use explicit terminology so that the direction is obvious, e.g. \"PE-to-CE-bandwidth\" \"CE-to-PE-bandwidth\" In the description field, it would be good to say which field in RFC8299/RFC8466 this corresponds to.\nPractically it is ingress-Policer and egress-Shaper, because the BW is controlled only from the PE side. In addition, there is a policer and shaper per CoS/Queue which should be addressed.\nI guess you meant \"input-bandwidth\" and \"output-bandwidth\". We do have the following for the L3NM: 'inbound-bandwidth': Indicates, in bits per second (bps), the inbound bandwidth of the connection (i.e., download bandwidth from the service provider to the site). 'outbound-bandwidth': Indicates, in bps, the outbound bandwidth of the connection (i.e., upload bandwidth from the site to the service provider). >L2NM flips the terminology, for example \"inbound\" is from PE's perspective. 'svc-inbound-bandwidth' indicates the inbound bandwidth of the connection (i.e., download bandwidth from the service provider to the site). 'svc-outbound-bandwidth' indicates the outbound bandwidth of the connection (i.e., upload bandwidth from the site to the service provider). Please note that we do already have the following in the common I-D: \"Indicates support for the inbound bandwidth in a VPN. That is, support for specifying the download bandwidth from the service provider network to the VPN site. Note that the L3SM uses 'input' to identify the same feature. That terminology should be deprecated in favor of the one defined in this module.\"; Which was echoed in the L3NM, e.g., \"Note that the L3SM uses 'input- -bandwidth' to refer to the same concept.\";\nHi Med, in URL it says as follows; maybe it is a typo? container svc-outbound-bandwidth { if-feature \"vpn-common:outbound-bw\"; description \"From the PE perspective, the service outbound bandwidth of the connection.\"; But I would urge you to change the terminology to \"PE-to-CE-bandwidth\" /\"CE-to-PE-bandwidth\" to make it super-explicit; the current terminology has been causing endless confusion to implementers (I realise it's inherited from the service models, but changing the terminology in LXNM would cure the problem well)\nThis is a typo. It needs to be fixed. Will move this one to the list.\nJulian, please check the PR at 7e5286a\nMed, this is great, thanks! Can it be changed similarly in L3NM as well?
Julian"} {"_id":"q-en-mdns-ice-candidates-2ab683cb2b2307086c0cd345931f109087041909a0672f70f6b794319a2ca26a","text":"Let's also consider data use cases here: Fallback to TURN has a severe effect on throughput in many cases.\nSure, could add a brief note on this.\nThe text here isn't normative, so I don't think we need to have a lengthy discussion of this - a few words should suffice.\nFixed by\nLGTM"} {"_id":"q-en-mdns-ice-candidates-ecbc3ba38172fe94a307679f543542f5da75d0f537cec4e6c8f57252a9c0cdb3","text":"PR for I considered referring to RFC6724, but that raises more questions than it answers it seems.\nFrom Tom Pusateri: When more than one IPv4 or more than one IPv6 address is present, it seems like it would be better to first prefer an address that is on a shared network instead of always taking the first one (which doesn’t mean anything in DNS). If you want others to use the same address you should prefer the lowest one or the highest one or something sortable instead of the first one which could be different depending on hashing in a cache.\nDiscussed and agreement was to find a simple alternate algorithm.\nClosed by PR"} {"_id":"q-en-mdns-ice-candidates-3dd2bad7b4c99f6ee54bb9e2ad630e5e7733eec5d0d131b44369ae7c3b186ab1","text":"Fixes URL\nSee URL for details\nI added a step where we ignore the candidate in that case. The term 'ignore' should probable be refined as it is used both here and in the context of URL We might want to update ignore in URL context.\nLGTM"} {"_id":"q-en-mdns-ice-candidates-1316fe585ecebe0bcb8ca9955bfe8dc0d6ceab80dd80f12f56a6696e17ebf97d","text":"PR for\nSeems all feedback is addresses?\nHost ICE candidates, ICE host candidates, or something else?"} {"_id":"q-en-mdns-ice-candidates-dfaa9b9533013da05fc248771848acfca981f98e98ffcc5f89f2f00327dd22e4","text":"Also wrap long lines for #processing section.\nComments addressed, PTAL.\nWill wait for NAME approval before merging.\nLGTM, updated PR for line wrapping.\nThe changes to ICE candidate gathering are targeted to web browsers; the changes to ICE candidate processing affect all WebRTC endpoints.\nIf we add this indication, do we also need to clarify the use case for lite ICE agents?\nWhat did you have in mind?\n{{#gathering}} starts with \"For any host candidate gathered by an ICE agent as part of {{RFC8445}} section 5.1.1,\" it seems the place we need to revise to resolve this issue. I was thinking of allowing ICE lite implementation to skip the mDNS wrapping, since lite agents only use host candidates (8445, section 2.5). The lite implementation is however not required restricted to ICE agents that always have public IPs from my reading of that section, but rather an alternative to simplify the implementation when possible. So I am still not sure if it makes sense to say lite ICE can be exempted from wrapping; or instead indicating mDNS wrapping can be skipped if host candidates are equivalent to srflx candidates, and automatically the intended use of lite ICE satisfies this condition.\nRight, the public IPs exception probably covers this. also indicates that this is really for web browsers, which are not allowed to implement ICE Lite.\nStarted writing this up today, but realized it's not totally clear how this should work. The idea would be to use STUN to figure out if an IP is public or not, but... Should we wait to hand out the mDNS candidate until we determine if the IP is private (i.e., STUN finishes)? Should we not hand out the mDNS candidate if we get a srflx candidate that shows the IP is not private? 
If we hand out the mDNS candidate and then get a srflx candidate that matches the local IP, should we make that a host candidate? Once we learn an IP is not public, should we cache that for future ICE gathering sessions (i.e., not give out mDNS candidates)? The easiest thing would probably be to recommend handing out the mDNS and srflx candidates as usual, but allow implementations to skip mDNS for IPs they have learned are public. (Except for RFC1918 addresses, which would always be treated as private.)\nPartially addressed in , will probably include some additional clarifying text once it lands\nLGTM"} {"_id":"q-en-mdns-ice-candidates-b2c68e1ab35a4e50236c0c398cb1e07f1e960766db1442ccb06c555f33a01ae6","text":"I think this is good to go, will plan to land at EOD.\nDoes it obfuscate, conceal, or obscure local IP addresses? My vote is for conceal.\nConceal/concealment sgtm.\nAs noted in , we don't need to use the same term in every case, but the terms should have the same meaning. conceal and hide are fairly synonymous, but conceal and obfuscate have slightly different meanings.\nThis section is largely normative, so it might make more sense to put some parts from it immediately after Principle, namely the various adjustments to the technique that need to be considered (TURN, IPv6, stats). Things where we merely provide discussion (e.g., session monitoring) rather than normative guidance could remain in the privacy section.\nDue to their different sensitivity and lifetime.
The rest LGTM"} {"_id":"q-en-mdns-ice-candidates-3a35f318c5bb432005ae664b8584f88bac335d24d3e1e052e97c963c64f47a26","text":"On line 421: \"that this specification is designed to hide\". Could edit that as well.\nMy take is that we don't need to use the same term in every case, but the terms should have the same meaning. conceal and hide are fairly synonymous, but conceal and obfuscate have slightly different meanings.\nDoes it obfuscate, conceal, or obscure local IP addresses? My vote is for conceal.\nConceal/concealment sgtm.\nAs noted in , we don't need to use the same term in every case, but the terms should have the same meaning. conceal and hide are fairly synonymous, but conceal and obfuscate have slightly different meanings."} {"_id":"q-en-mls-architecture-9040eab0bcfc54943e4b5278919b7b68cf5188ec4500c74e052bb93a3c105ce1","text":"Thanks both, I'll have a look today.\nLooks good! Thanks Richard!\nThe architecture document mentions PSKs in several places, but does not discuss their security. Nor does it discuss the security of External Proposals and External Commits. Notably, the third paragraph of 5.1 Membership Changes says \"…the set of devices controlled by the user can only be altered by an authorized member of the group.\" The set of devices could be altered via an External Commit by a non-member. At the end of 5.2 Parallel Groups there is a vague mention of \"using the PSK mechanism to link healing properties among parallel groups.\", but no discussion of the security. Likewise Section 5.5 Recovery After State Loss talks about PSKs as the solution to a problem but the risks are not discussed. External Proposals and External Commits should be discussed in section 5.4 Access Control.\nNAME I fixed the specific problems you note in"} {"_id":"q-en-mls-architecture-5e6e2508993091f8fb45e62cf48d176c50f0b3a0b36109006a50940d58dd2215","text":"Reverts mlswg/mls-architecture This has not been merged for a reason, namely I don't want the dependency on an unpublished draft."} {"_id":"q-en-mls-architecture-2e1c0acd34463213ee73ec340d997e766844f6683e8fe34f602d971cc602398b","text":"Particularly in multi-device scenarios (where a user has more than one device), we should recommend that clients correlate the set of devices of each user across groups. This would make it much harder for a malicious AS to issue fake device credentials for a particular user because clients would expect the credential to appear in all groups of which the user is a member. If a device credential does not appear in all groups after a while, clients have an indication that the credential might have been created without the user's knowledge. This should however only be a recommendation since it might not be a desirable property in all scenarios. In practical terms, users might also need some time to add their new device to all of the groups, therefore a grace period might be needed. Taking the latter into account, correlating users' devices across groups is more of a detection mechanism than a prevention mechanism.\n\"In many uses of MLS, there are multiple MLS clients which represent a single user (for example a human user with a mobile and desktop version of an application). Often the same set of clients is represented in exactly the same list of groups.
In applications where this is the case, clients can compare the set of devices of each user across groups. This would make it much harder for a malicious AS to issue fake device credentials for a particular user because clients would expect the credential to appear in all groups of which the user is a member. If a device credential does not appear in all groups after some relatively short period of time, clients have an indication that the credential might have been created without the user's knowledge. As this may take some time, correlating users' devices across groups is more of a detection mechanism than a prevention mechanism.\"\nI lightly edited NAME prose and turned it into PR .\nMade a quick pass. I think we should stick with \"clients\" rather than \"devices\"."} {"_id":"q-en-mls-architecture-7096b529fc30fd4f537d44827ee1f0a2d7aa02a2ee477520d2eb5105e5ea460f","text":"The references at the end of the Security Considerations weren't in kramdown format. Fixed."} {"_id":"q-en-mls-architecture-b9ff2cd3ca3730cd5e83be228827819d64f0780e80867fc82c8035de6344e974","text":"For the authors' consideration: Section 3 A member with a valid credential authenticates its MLS messages by signing them with the s/its/it's/ mechanisms for a user to monitor and control which keys are associated to their identity. s/to their/with their/ Operational Requirements A extension can help maintain interoperability s/A extension/An extension/ Whether there should be a extension in groups. s/a extension/an extension/ Forward and Post-Compromise Security MLS partially defend against this problem by active member including freshness, however not s/defend/defends/ s/member/members/ Mandate a key updates from clients that are not otherwise sending messages s/a key/key/ Compromise of AEAD key material authentication is not affected in neither of these s/neither/either/ Compromise of the Group Secrets of a single group for one or more group epochs of the compromised party with no ability to an honest group to recover message secrecy. s/to an/for an/ Compromise by an active adversary with the ability to sign messages the attacker can perform all operations which are available to an legitimate client s/an legitmate/a legitimate/ Compromise of the authentication with access to a signature key Beware that in both oracle and private key access, an active adaptive attacker, can follow th s/attacker, can/attacker can/ Security consideration in the context of a full state compromise They also are providing the strong authentication guarantees to other clients s/providing the strong/providing strong/ hence we consider that their protection by additional security mechanism should be a priority. s/mechanism/mechanisms/ Overall there is no way to detect or prevent these compromise, as discussed s/compromise/compromises/ Privacy of delivery and push notifications For secure messaging systems, push notification are often sent real-time s/notification/notifications/ Authentication Service Compromise MLS client (which is under the control of the user) have often the ability s/have often/often have/\nApproved with nits. These changes look good to me."} {"_id":"q-en-mls-architecture-b5d875644a523bb596539f44bc8712ec216edc069d32559fbe43dc3c50dfdbe4","text":"I think the spirit of this change is good, but the last two sentences are confusing, talking about the client trying to join and an existing member trying to prevent them from seeing anything confidential after joining.
My initial reaction to this idea was always that post-hoc removing someone who should never have joined is a terrible architectural decision. Post-hoc removal either allows an undesirable member a window to see messages sent by some members, or it requires consistent policy across members. If there is consistent policy across members, the members should simply treat the Commit from the undesirable client as invalid. Rather than further condone post-hoc removal (which doesn't require any normative change to the MLS protocol), why not just delete mentions of post-hoc removal?\nCompromise - Deleted the sentence that talks about the security implications of post-hoc removal, left the mention of it simply being possible.\nFollowing on from discussion in Even once a GroupInfo has been published to a prospective new joiner, there are still a few opportunities to apply authorization policy to the join: The DS can block the external Commit from reaching the group members (requires an intelligent DS) The clients in the group can all refuse to apply the external Commit (requires consistent policy) Any client can remove the member after allowing them to join (might expose some group comms to the joiner)\nComment by NAME This seems rather risky? What could such a member do in the short time between getting added and getting kicked? Would they see message history? Group membership? How many devices a user has? When members were last online? Why support such a group type? Wouldn't it be better to not support this and have an automated \"special member\" being a bot that handles member addition?\nUntil someone else sends a message, a new joiner doesn't learn anything non-public. If you think about it, how would they? They're outside the group before they send an external join, and the external join just establishes a new shared secret with the group for future communications. More specifically: Message history - MLS's forward secrecy guarantees mean that new joiners don't get message history. Group membership - This is public anyway via the GroupInfo that an external joiner needs to join How many devices - To the degree this is exposed, it would be in the group membership Last online - MLS doesn't have any notion of this As for an \"add me bot\", it's a useful idea, but not applicable to a bunch of cases. The bot has to be a member of the group and has to be always-online. So now you have the group's secrets on a server somewhere, instead of just in the communicating endpoints, already an additional point of compromise. And it can't be the DS's server, since the whole point is to protect against the DS.\nThis statement in the draft is just plain wrong. In our implementation, we actually do validation of External joins at the application layer. I addressed this in (dating from WGLC) which says the following: \"External joins do not allow for more granular authorization checks to be done before the new member is added to the group, so if an application wishes to both allow external joins and enforce such checks, then either all the members of the group must have the ability to check and reject invalid External joins autonomously, or the application needs to do such checks when a member joins and remove them if those checks fail.\""} {"_id":"q-en-mls-architecture-a0b5f6adde1bd2637e21d7d557d0275d81dbc32666cf553201dbfc473a68c311","text":"NAME noted in URL that an important function of the AS is to ensure that clients come to consistent evaluations of credentials.
We should make that recommendation in this document, including the fact that consistency is needed when validity can change over time."} {"_id":"q-en-mls-architecture-5c50c339f78831f3032db82b4c5e601c863b535cbe351eb4cec0736fed78c5c8","text":"This replaced PR which has broken merge conflicts.\nI've reverted this. It was prematurely merged. I'll open a new PR once I have had time to examine this.\nThis was discussed in the interim today and is the same as the content that was in . Thanks, -rohan\nThat's fine, but I still want to go over it before it merges."} {"_id":"q-en-mls-protocol-10b6d0e2cf000008a968b90ea49d9c894f1791635080604730d8291d340ddd57","text":"This does not handle the removal of DH uses instead of KEM which will be done as part of Issue / PR ...\nI would be happier if this PR were moving the draft to use draft-barnes-cfrg-hpke, which is one of the changes I had in mind for -04. Do you think that's premature?\nThis text is wrong:\nThis could also be addressed by using draft-barnes-cfrg-hpke.\nThis has been fixed by\nEnable, e.g., P-256 with both AES-GCM and ChaChaPoly Re-use code points from TLS\nWe should discuss that today"} {"_id":"q-en-mls-protocol-217912dc5a0c639c0a0d9f9af2b3e7a3b79d4407958a01695a62b78334c95c22","text":"Fix for\nNAME This might not be completely ideal because of the group state, but it currently does not change the key schedule. I am merging this because we need it for correct interop, but feel free to get back to discuss a better way to do this in detail.\nThere are at least two bugs in the message protection section that make it unimplementable: It calls for , but the third argument to as defined is a , not an octet string. It refers to a function , which is defined in TLS, but not here. ISTM the simplest way to fix these problems would be to define and use it (1) as the basis for , (2) for deriving sender root secrets, and (3) for deriving keys and nonces.\nFixed by"} {"_id":"q-en-mls-protocol-2e5f6fe53f5428976b1e93d145ea70384ff66fd6db8831958174f68263cbd959","text":"Initial attempt to solve . This defines the KDF as discussed in and relocates the DH computation obligation and verification as part of the Derive-Key-Pair function definition.\nFixing my own review comments here, since NAME appears to be offline and we need to publish -04.\nThe PR from NAME introducing a KDF was merged before I had a chance to review it. There are a number of problems with it: Instead of calling for an abstract KDF, it should define one, as we have with the use of HKDF elsewhere We can't just say , since KDFs produce octet strings, not private keys. We need , not just . This doesn't match what I recall discussing on the list. I think what you want is actually as follows: Or in prose:\nThat was exactly the question I was asking on the list, i.e., if we need an explicit \"derive-keypair\" function. The idea with deriving the private key directly was to avoid a redundant derive-key operation. I'm totally fine re-doing the PR to represent either approach.\nMerging that PR too quickly is on me. For the parent derivation, we can just use what the formal model for TreeKEM does, which is roughly what Richard points out.
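For orientation, a sketch of the derivation under discussion, in the style the MLS draft later adopted (the labels and lengths here are assumptions, not the merged text):

    path_secret[n] = HKDF-Expand-Label(path_secret[n-1], "path", "", Hash.Length)
    node_secret[n] = HKDF-Expand-Label(path_secret[n], "node", "", Hash.Length)
    node_priv[n], node_pub[n] = Derive-Key-Pair(node_secret[n])

Here Derive-Key-Pair is the KEM-specific map from a uniform octet string to a key pair: for X25519 the octet string can be used directly as the private key, while other KEMs need to define their own mapping, which is exactly the point of contention in the comments that follow.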
The part I don't like about this is to define in the MLS specification, as it depends on the HPKE internal algorithm. In the TreeKEM formal specification, it is done the following way (because we know that the HPKE interface we use can take random bytes as a secret key): In the MLS specification, I suggest we avoid defining anything new but point to the HPKE spec instead. As it provides the KEM scheme, it should also provide ways to derive the from an octet string and the function that transforms an to a . I suggest we define, in the HPKE document, something like that we already need anyway and eventually that maps the random bytes to an . For X25519 we don't even need the second function but generically we probably need it. That would lead to defining the following in the MLS spec: NAME ?\nI don't think we can delegate this to HPKE. HPKE doesn't have a need for anything like Derive-Key-Pair. We do, so it's on us to define it. The point NAME raises about avoiding unnecessary operations is relevant here: If we're forking the hashes like this, then we can remove the hashing part from Derive-Key-Pair, so that it's really just a definition of how you convert from an octet string to a private key (and thus its public key).\nI disagree, it is not for MLS to define the and functions for all asymmetric schemes possibly used in HPKE. In MLS we should have to do only what TreeKEM does, aka, 2 KDFs, one outputting the parent's secret, one outputting the KEM encryption key, and that's it. I am willing to accept defining here but this is already odd.\nIf it's not our job, whose is it? It's not HPKE's job, because they don't have a need to convert byte strings to private keys. Some document needs to define this conversion in order for MLS to be implementable.\nI agree we are in trouble here :) But I still wish that if we can avoid it, we should.\nI agree that the KEM schemes themselves should define these things, like Curve25519 does. But it's like two lines of text. I'm not too bothered with it.\nI think I'm a bit lost here. How do we go forward? Regarding the use of an abstract KDF: I was mostly just adapting the existing text, which used an abstract hash function if I recall correctly. I didn't touch the texts that got more concrete because they seemed to be a little outdated anyway. That shouldn't be a problem though.\nNAME Ok, looking at this now, I am not sure what we decided in the end. I would be ok to define as you said, but as it is KEM specific, there is no good solution for P256 or for whatever PQ KEM that would have a specific way of generating keys from pseudo random bytes...\nNAME any suggestion here\nFixed by"} {"_id":"q-en-mls-protocol-9d768a35170a7f97c81c4c46d9e882f3c9c006a4e1348f8ed630b3b132a38d69","text":"Overall, this seems OK. Just using \"InitKey\" would be clear and less wordy.\nI tried this and that led people to confuse and
These keys are Client one-time use keys, so \"UserInitKey\" seems like a strange name.\nI'm fine with ClientInitKey, as long as a client is not only a physical endpoint. Changes in the architecture doc should make this clear soon."} {"_id":"q-en-mls-protocol-0fa37707eb5ef9b7a5e26e9127a8d2783ef07f3337ccc4a1a6257a363ed178a7","text":"Update change log for draft-07 No conflicting PRs so I did break many lines that were too long. Fix a few minor editorial issues after merging the tree-based application key schedule."} {"_id":"q-en-mls-protocol-bfbcce22213678ebe8db2f928b3178bf5affab4dff43708782ed70f48b817cce","text":"Depends on
Long-term inactive users undermine the FS and PCS properties of the protocol. Obviously, users can remove each other if they notice that a participant is inactive. We should consider whether we want to allow the server to do such a removal.\nDiscussion at interim 2019-01: Could do this as \"server-instructed\" vs. \"server done\", i.e., server instructs a client to do a remove But this causes some ambiguity w.r.t. the rest of the group The only difference between Remove and a server-initiated variant would be the signature Other use cases: User deletes account User is no longer authorized to be in group Application would need to set policy about whether / when server-initiated actions would be allowed\nI'm assigning this to draft-04 under the theory that the signature changes that will come about as a result of will make it straightforward to have an additional key for the server that can be used to sign Adds / Removes. If that doesn't turn out to be the case, this might get deferred.\nAfter discussion with NAME and NAME There will be a need to signal that a non-member key is being used, e.g., with some reserved values Do the participants in the group need to agree on the set of allowed non-member signers? If some members accept a signer and others don't, then you can get a partition -04 will focus on Remove, not Add, and punt on the agreement question; we assume the application maintains consistency of the view of authorized signers.\nWe should push some of this to the application layer in order to not introduce a new handshake message with problematic authenticity (agreement on the list of non-members who can sign handshake messages). The server could publish an \"intent to remove\" that will be honored by the first client to come online. The actual Remove HS message will be issued by a member of the group. It can additionally be attached to the server intent to remove, so that clients can convey more contextual information to users. Example: Server issues the intent to remove Alice from the group. Bob comes online first after that and sends a regular Remove HS message to remove Alice and links it to the server intent. Other members of the group can now display \"Alice was removed\" instead of \"Bob removed Alice\" to the user. In this example Bob is the first member to come online, but it could really be any other member.
This has the advantage that the protocol remains unaffected as such, while the desired behavior is still achieved.\nCurrently, in order for a new member to join a group, an existing member needs to add them to the group. Certain use cases work more naturally if a new joiner can initiate the add process. I think that broadly speaking we want a process here where someone outside the group can (1) request to be added and (2) send to the group, but cannot receive from the group until added. Some things to consider here: Anything that is revealed to a joiner that joins this way is effectively public. Should they be able to see, e.g., the roster? How should these interactions relate to the key schedule? Assuming there's some message, should it be included into the transcript at Add time? What happens if there are multiple parallel requests?\nAt the 2019-01 interim, we discussed a \"reversed Add\" flow, where: Group publishes an InitKey The new joiner sends a UIK to the group and establishes a shared secret with the group This initiates a period where the new joiner can send, but nobody in the group can Before anyone in the group can send, they have to send a Welcome for the new joiner ... which initiates an epoch where they are fully joined\nIn certain use cases, such as enterprise messaging, there may/could/might be a problem if \"Any member of the group can download a ClientInitKey for a new client and broadcast an Add message\". An enterprise may wish to enforce access restrictions for certain information such as ClientInitKeys. Just thinking about security related issues that may be most easily noticed by corp IT types. On the other hand this may be best dealt with outside the protocol scope.\nNAME I think this is fixed by correct? A user can propose to Add herself.\nPartly addressed by . To complete the story, we need a \"send-to-group-from-outside\" mechanism. Leaving this open until we have that.\nI don't think a \"send-to-group-from-outside\" mechanism will be possible, given that we try quite hard to prevent any key material from inside the group from ever being published.\nThe signing key of external actors will be contained in the leaf in the case it is a CIK."} {"_id":"q-en-mls-protocol-45cd311b9f960d27661b8a43352e8bd68e3a1b95e57229259ccbba42aa06f2ad","text":"This PR begins filling in the IANA Considerations section, starting with a Ciphersuites registry. This addresses the concerns in as discussed at the interim (2019-10), namely by reserving for vendor use a chunk of the code point space large enough to be selected at random without huge risk of collision (2^12 values). Depends on
Enable, e.g., P-256 with both AES-GCM and ChaChaPoly Re-use code points from TLS\nWe should discuss that today"} {"_id":"q-en-mls-protocol-87ac8082b0a886bbc4f70370ab515dc55db0cffb45d5ce667b2e33caac74be82","text":"In the latest HPKE draft (draft-irtf-cfrg-hpke-04), \"Initiator (I)\" was renamed to \"Sender (S)\"."} {"_id":"q-en-mls-protocol-f74fc1496661537e069e77d29cc66333c72852e72ad39f28032ab6d4909a1815","text":"NAME - I seem to recall you had some opinions on node hashing?\nRight now the LeafNodeHashInput struct uses the index of the node among the leaves, while the ParentNodeHashInput struct uses the index of the node among all nodes. This means that, for example, the index value will be the same for the second leaf and its parent ( in both cases). We should probably just use the node index in both cases.\nURL is a better solution"} {"_id":"q-en-mls-protocol-2685a8a390ba227c0ffe5afe2b6c35c14580f19d9bc6af4d4398d7918ed26877","text":"Just to recap the options as I understand them: a) Today: encrypted_sender_data = AEAD(sender_data_key, sender_data_nonce, (group_id, epoch), sender_data) encrypted_content = AEAD(app_key[i][j], app_nonce[i][j], (group_id, epoch, encrypted_sender_data), content) b) Sample sender data nonce from ciphertext (saves explicit nonce) encrypted_content = AEAD(app_key[i][j], app_nonce[i][j], (group_id, epoch), content) encrypted_sender_data = AEAD(sender_data_key, sample(encrypted_content), (group_id, epoch, encrypted_content), sender_data) c) Drop auth tag from content encryption (saves explicit nonce and content auth tag) encrypted_content = Enc(app_key[i][j], app_nonce[i][j], content) encrypted_sender_data = AEAD(sender_data_key, sample(encrypted_content), (group_id, epoch, encrypted_content), sender_data)\nEarlier, we had considered a \"masking\" approach for sender data, much like QUIC does for . This approach would save 28 bytes of overhead (nonce + tag), but was abandoned because of uncertainty around the security of the masking approach. Since then, Bellare et al. published , which includes what is effectively a proof of the security properties of the QUIC scheme. The QUIC scheme is essentially scheme HN1 of the paper, the main difference being that QUIC encrypts not the nonce directly, but a packet number which is used to compute the nonce. Our application would be similar, since we would encrypt metadata used to encrypt the key and the nonce.
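For concreteness, one way to instantiate the sampling in option (b), roughly the shape MLS later adopted for sender data protection; the helper names and lengths here are assumptions, not quotes from the draft:

    sample = encrypted_content[0..KDF.Nh-1]
    sender_data_key   = ExpandWithLabel(sender_data_secret, "key",   sample, AEAD.Nk)
    sender_data_nonce = ExpandWithLabel(sender_data_secret, "nonce", sample, AEAD.Nn)
    encrypted_sender_data = AEAD(sender_data_key, sender_data_nonce, (group_id, epoch), sender_data)

Because the key and nonce depend on a sample of the content ciphertext, no explicit sender-data nonce needs to be carried on the wire, which is where the overhead saving comes from.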
Given this new analysis, is it time to bring back masking?\nFixed by"} {"_id":"q-en-mls-protocol-3e7db6000430997640df68206d7d91b70c971aeb439ed2285e604a0a56c6fd7f","text":"The credentials we have defined at the moment only contain a single public key and no information regarding what signature scheme that keypair works with (even though the Spec actually says it should include the signature scheme as well). It would be nice, however, to support potential CredentialTypes that, similar to KeyPackages, include multiple supported signature schemes, each accompanied by a corresponding public key. The key that should be used in any given group is then determined by the CipherSuite of the group (which includes a Signature Scheme). When choosing a KeyPackage of a new member, it has to be one that contains a Credential which supports that Signature Scheme. Note that I'm not suggesting that this be the case in all Credentials, or even that we change the BasicCredential. I understand that it's possible to have multiple credentials per identity, but in some authentication settings it can be beneficial to have a 1-to-1 mapping between Credential and identity.\nKeyPackages don't support multiple signature schemes? In what scenario is it beneficial?\nAh, right. I think that was the case in the past, but you're right, they don't. Of course it's always possible to just bundle a bunch of certificates, each supporting a different signature scheme, and then treat that as your \"Multi-Scheme Credential\" in the grand scheme of things. However, if you end up rotating them a lot, where you have to sign them and potentially store the whole chain for a longer period of time, being able to support multiple signature schemes in only one credential means less overhead.\nCredentials are stored in each group's ratchet tree though, so you're keeping a bunch of unnecessary key material in the group's tree which everyone has to store\nThat's fair. And it's not something you have to do if you don't want to. I'm just proposing for MLS to support that type of Credential in general.\nTo support hybrid PQ/standard schemes at any point in the future, allowing for multiple schemes is reasonable (in addition to the above arguments for it). Most applications probably will not use the feature, especially to start with, but providing support for just the option is not going to cause any of the problems noted.\nPQ schemes can be used by switching the entire group to a new ciphersuite using those schemes. Requiring everybody in the group to store a bunch of unused key material isn't reasonable\nThe statement was not about PQ, but hybrid PQ/standard schemes. Those require use of two signature schemes, for which both types of keying material are used.\nThen you need a ciphersuite that specifies a hybrid scheme.\nIt would be ideal to have a ciphersuite specifying a single hybrid scheme, but even that does not imply use of a single credential, unfortunately. In fact, hybrid schemes may rely on two separate credentials, with the ciphersuite specifying the hybrid algorithm for combining them. Since the \"main\" method of building a solo hybrid credential is under patent, there is also additional motivation for applications to use two.\nI concur with Brendan here. To be consistent with our earlier decisions to (a) keep KeyPackages to one ciphersuite and (b) include the signature algorithm in the ciphersuite, we should keep credentials to one signature algorithm as well.
Propose closing this PR without merging.\nDiscussed at 2020-10-06 interim, TODO: Re-add SignatureScheme enum, re-using the TLS signature schemes Clarify that the main job of a Credential is to provide a public key for verifying messages How it provides the key might be different per credential type, e.g., in BasicCredential it's provided directly; in X509Credential you have to parse the leaf certificate\nLGTM with a couple nits."} {"_id":"q-en-mls-protocol-dd376704c9e3620aed62113f8646ad9ef5354795e87eb590d6227e417ec733cf","text":"cc NAME\nNAME is the build expected to fail?\nNo, but the problem seems unrelated to your change, so we are all good.\nLooks like it was a transient failure. Re-running resulted in success.\nNAME points out that the HPKE version is not pinned. This is probably needed for interop. Version -07 should come sometime this week, so that seems like a perfectly fine version to use. I can send a PR when that version is cut.\nFixed by"} {"_id":"q-en-mls-protocol-ffe0afb144f9088be7cae530b9d9f367cee4cbb6dc1f47a9dd92e9f10bec7d86","text":"I noticed a bunch of typos in the \"IANA Considerations\" section: the \"ciphersuites\" and \"credentials\" entries were probably copy-pasted from the \"extensions\" one, and some parts were not changed correctly. This PR fixes these typos.\nDiscussed 20210526 Interim: editorial will be taken up by editors.\nThanks!"} {"_id":"q-en-mls-protocol-cd897806c0441e1939faf6d5dd758b672099c3efa9d887d938e1ef835ddfd1e1","text":"In , NAME noted that the concatenation approach used to combine multiple PSKs is not well validated. At the 2021-10-04 interim, we discussed moving from this approach to a \"cascaded HKDF\" approach, with the idea of doing the same thing here as in the main key schedule. This PR implements that suggestion. One nice implication of the algorithm here is that when there are no PSKs, it produces the zero vector as a degenerate case. So you could implement it pretty cleanly; see the sketch below.\nThe spec currently doesn't distinguish much between an external and a member commit w.r.t. how proposals by reference are handled. In theory, the server could keep track of proposals and send the external committer all proposals that were sent up until the time they want to commit externally. However, there are a few difficulties in committing proposals by reference in an external commit. Membership tags: The external committer doesn't have the ability to verify membership tags on proposals. This is probably not a show-stopper, as proposals still need to be signed by a group member, but if nothing else, this should be noted in the spec. Leaf positioning: The spec was changed such that the committer is added into the tree without an Add proposal being present. If there are other Add (and Remove) proposals present, it is unclear in what order they should be processed. Should the committer be added when the path is processed? Or should it still be processed as if it were an Add? There are probably other things that can go wrong when committing referenced proposals externally, but those are the first that come to mind. The simplest solution would be to simply ignore proposals by reference, although this would break the current principle that a commit must always include all valid proposals that were sent during the epoch. The consequence would be that parties that sent proposals will have to re-send them if the epoch is ended by an external commit.
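Returning to the cascaded-HKDF remark above, a minimal runnable sketch of that cascade (Python, SHA-256; the label encoding is simplified relative to the wire format, and psk_label stands in for the serialized PSKLabel structure):

    import hmac, hashlib

    Nh = 32  # SHA-256 output length

    def extract(salt: bytes, ikm: bytes) -> bytes:
        # HKDF-Extract
        return hmac.new(salt, ikm, hashlib.sha256).digest()

    def expand_with_label(secret: bytes, label: bytes, context: bytes) -> bytes:
        # One-block HKDF-Expand; enough here since we only need Nh bytes.
        # (The real wire format length-prefixes the label and context.)
        return hmac.new(secret, label + context + b"\x01", hashlib.sha256).digest()

    def psk_secret(psks):
        # psks: list of (psk, psk_label) pairs; an empty list yields the zero vector.
        secret = bytes(Nh)
        for psk, label in psks:
            psk_extracted = extract(bytes(Nh), psk)
            psk_input = expand_with_label(psk_extracted, b"derived psk", label)
            secret = extract(psk_input, secret)
        return secret

Note how the no-PSK case falls out naturally: the loop body never runs and the all-zero starting value is returned unchanged.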
It should be noted that a Preconfigured sender will have no way to detect if their proposal was committed or not.\nI'm not sure your worries here are very real: Membership tags: I don't think the membership tag issue matters, because the existing members will reject the proposal (and thus the commit) if the membership tag is invalid; at worst the joiner gets themselves wedged into a group by themselves. And as you say, they're signed, and the keys can be tied back to the tree hash in the GroupInfo. Leaf positioning: We need to clear this up regardless of proposals-by-reference. It seems like basically having a synthetic Add at the beginning or end of the proposal list would work fine. Preconfigured proposer: This is a problem only when Commits are encrypted, so it's not a problem in the external commit case (since an external Commit cannot be encrypted). Nonetheless, the point that an external joiner can't fully verify a Proposal seems to argue pretty strongly for forbidding proposals by reference. I don't think this even violates the semantic of \"include all valid proposals received in the epoch\", since (1) that requirement is receiver oriented (it's not enough for the proposals to have been sent) and (2) the external joiner can't tell if the proposals are valid. Overall, I would lean slightly toward removing proposals by reference. And if we do that, it might even make sense to just define an ExternalCommit message that packages together the few things that are still allowed.\nI'm not sure I understand how the receiver-orientation of the \"all valid proposals must be included\" rule matters to its application to this case. If we remove the requirement for external commits to include proposals by reference and don't soften that rule to exclude external commits, receivers will consider all external commits invalid in the presence of valid pending proposals by reference. In any case, I'm also in favor of keeping it simple and restricting the External Commit to a few well-defined components that do not include any proposals by reference.\nDiscussion on working call: \"No proposals by reference\" kind of follows from \"all valid proposals\", since the joiner can't necessarily determine that proposals are valid Maybe allow attempts? Might not get feedback to retry, though. Whole point of external commit is to allow async. Agreement to prohibit proposals by reference in external Commit\nI believe there's a small mismatch between the text and the Key Schedule figure. The text says that if there are no PSK proposals, the should be a zero-length string, while the Key Schedule figure seems to indicate that it's a string of zeros of length .\nFixed in\nOne brief comment, otherwise looks good!"} {"_id":"q-en-mls-protocol-a686d3a057864533c952e88d84cef505c37960f81355af4a5e05f3b9b896de1d","text":"The definition of blank node needs to move ahead into ## Ratchet Tree Terminology as it is used in the section immediately following (## Views of a Ratchet Tree)."} {"_id":"q-en-mls-protocol-546ade3accadd97ada66250ca6d0160993d246d8a90338c99711a91dc95c3486","text":"This PR introduces a notion of a proposal type being \"path safe\", in the sense that it is safe for the path to be omitted. (Better phrasing welcome!) The field is required by default, and allowed to be omitted only if all the proposals in the Commit are path-safe.\nThanks for taking care of this fix! One suggestion: Instead of having a global list of which types are path safe and which ones aren't, we could make it a property of the proposal.
Maybe something like this: Regarding the name, the only thing I can think of is that we could invert the property and call it : In fact, I think I'd be slightly in favor of the latter proposal.\nDiscussion on virtual interim: OK to keep using type. TODO(NAME): \"path safe\" -> \"path required\". Otherwise clear to merge\nThe authoritative definition (in Section ) of when to include a is not the same as the summary below. For example, if an implementation adds a custom proposal type, according to the summary a path is required, whereas according to the definition above it, no path is required. Ultimately, it depends on the nature of the proposal and the desired security guarantees whether a path is required. However, I think the safer option is to require a path by default and maybe leave it to the definition of the respective custom proposal to explicitly specify if a path is not required."} {"_id":"q-en-mls-protocol-f71b4c05c2af9af0d9c8047e4d983c82f6148ab6b26c14e1b1b84fdf5e46fa98","text":"Based on , it should be made explicit that sub-group branching requires a fresh key package for each member. I think there is still a conflict of wording here that I would like feedback on, because of this line (which is what triggered me thinking there was a bug)\nThanks for the PR and thanks for digging into this. I think what the sentence is meant to say is that the receiver MUST check that the members of the new group are a subset of the members of the old group at that specific epoch. Since we're using new LeafNodes in the new group and we don't have a fixed notion of identity (at least not within MLS), we have to refer to the application/the AS to ensure that the identity expressed by the credential in each LeafNode in the subgroup has a corresponding identity (i.e., one that represents the same party) expressed by a credential in a LeafNode in the original group. Maybe something like: This leaves the possibility of a collision, where the identifiers in a given leaf in the new group would be valid updates to more than one member in the old group. This should probably be forbidden in general, although outside of this case it's not really a problem, because updates are always specific to one leaf. Alternatively, we could somehow include LeafNodeRefs that specifically indicate which member represents which other member. Or just leave it up to the application to sort this all out.
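To make the receiver-side subset check concrete, a hypothetical sketch in Python (the same_party predicate is the application/AS-supplied notion of "represents the same party" over credentials; nothing here is normative, and the leaf structure is assumed):

    # Check that every leaf in the branched group corresponds to some member
    # of the parent group at the branch epoch.
    def is_valid_branch(parent_leaves, branch_leaves, same_party) -> bool:
        return all(
            any(same_party(new.credential, old.credential) for old in parent_leaves)
            for new in branch_leaves
        )

As the discussion notes, a predicate like this can match one new leaf against several old members, so an application that needs uniqueness would additionally have to require an injective mapping.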
Personally, I'd prefer asking the application to check that the identities of the new group are a subset of the identities of the original group.\nI can see that being an issue, although I am somewhat worried about us not being opinionated in the same way about how to determine equality here, mainly because of interop-type things. We punt on how to validate the credentials as well in general, so I guess this is in line with prior decisions\nHey NAME and NAME check the change I just posted. NAME and I put that together; it's a bit tough to get a generic definition of equivalency.\nThere is currently a logical flaw in the subgroup branching flow. In order to branch, you need to compose a message . A Welcome message includes which is defined as follows: The problem with this is that the field of type cannot be made for existing members of the group because the leaf nodes are now types. is now only used to add new members to the group, so we can't reuse this for subgroup initialization.\nI'm not sure I understand correctly. Are you suggesting that when branching a group off of another group, one re-uses the leaves of the old group? If so, that's not how it's meant to work. The new group created in the process of subgroup branching should be independent from the \"parent\" group save for the PSK, which all members have to provide upon creation of the group. In fact one should never use the same key material in the leaves of multiple groups except when the supply of key packages that a party provides via the DS is exhausted (see Section 17.4).\nThat was part of my question. When there were key packages in the leaves, you theoretically could fork the group from a given point in time and reuse the current tree, but I wasn't sure if that was a proper solution or not. Sounds like it isn't, but either way, I think there should be clarification in the document about this. Let me write up a quick PR on it.\nNAME opened URL Would like your feedback there. We have conflicting wording here as well. What threw me off in the first place was the line indicating that LeafNodes should be the same in the sub-group, and if we are fetching new key packages they will not be the same.\nNAME Konrad, could you safely derive new keys for the members of the subgroup instead? If so, that would be an enormous increase in efficiency and (especially in the federated case) make group creation timing consistent enough to do dynamically.\nNAME Nice idea! But it's tricky, since we're dealing with public key pairs that the creator of the new group doesn't know the private keys for. If we were only using HPKE schemes that support updateable public key encryption, the creator of the new group could indeed update the key in the existing group and send the \"delta\" encrypted to each corresponding member (which would still represent an overhead, but we might be able to optimize this somehow). This is similar to the RTreeKEM mechanism that NAME et al. proposed at some point. To go all the way here, we'd have to do the same for the public key material in the credentials as well, which is not something we had considered before. It would be a new primitive, something like an updateable signature scheme? Probably something to consider for HPKE/MLS 2.0. We could also allow the re-use of LeafNodes for this particular case.
It would be the simpler solution and I don't think it would be catastrophic, since we have to assume that LeafNode (or KeyPackage) key material is re-used in the \"last resort\" scenario anyway, but this would definitely be something we should discuss on the mailing list, as it would break a pretty basic assumption we've made until now.\nJust catching up on this. NAME nice catch here. It seems like we have two options here: Require new KeyPackages (as in ) Invent some way to make a new group based on LeafNodes alone I probably agree that we should do (1) in the short term, but (2) also has some appeal -- it doesn't feel right to require new KeyPackages for folks you're already in touch with -- despite it probably being more spec text. So here's a sketch of (2) in the interest of completeness. Suppose we defined a message of the following form: ... which would pretty closely parallel Commit. You would send it in MLSMessage and require a in MLSMessageAuth. Where Commit instructs the recipient to create a new epoch from the current one within the same group, Branch would instruct the recipient to create a new epoch that is the first epoch of a new group. In both cases, you would form an epoch / GroupContext as specified, then use the to ratchet into the new epoch. I'm pretty sure that would work and have the expected security properties, though obviously formal analysis would be appreciated. The main awkwardness that occurs to me is that any members joining after the branch would have to provide new key packages. And obviously, it would require a fair bit more spec text. All that said, though, the exercise of working all that out has increased my impression that we should do (1) for now. If we want something like (2) later, it can be done pretty cleanly in an extension.\nFixed by\nWell put!"} {"_id":"q-en-mls-protocol-61fc75008638f820406d30a4656c4e5dbf157365d9ae23082337b34d1f2fe734","text":"I drew this to help explain the relevant patch in MLSpp to someone, and thought it might be helpful to other folks.\nExcellent! This is a very nice addition to explain the design introduced ."} {"_id":"q-en-mls-protocol-5f4ad8bbb0092fbd5e6ee3566f7a71413f6bf9a7aa26c499e67bd265ef19e3f9","text":"I'm not sure if this is actually necessary, but there usually seem to be issues in Merkle trees when the leaves are not explicitly distinct from the parents.\nI would find it nicer to describe things with a like in other parts of the protocol: It is equivalent to your modification. Stating things that way ensures there is no ambiguity: TreeHashInput is parseable, which is I think the property you want to ensure with this PR.\nUse enum instead and then ready for merge.\nInterim 2022-05-26: Nicer with an NAME to update to , then merge\nWhen computing the tree hash, should there be an indicator bit for whether the value being hashed represents a leaf or parent node?\nThis doesn't seem like a terrible idea. You could either add a field to LeafNodeHashInput and ParentNodeHashInput, or define an enum and do the same, or make a . For example, see the sketch below."} {"_id":"q-en-mls-protocol-9cad84b73b6aa96af483ead7c833b1d87338f8b67426804a65659a179e12e0b4","text":"Also covers the PSK part of\nInterim 2022-05-26: Ready to go once merge conflicts are resolved.\nThe collection of structs could be less chatty.
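Picking up the "For example" in the tree hash discussion above, a sketch in the TLS presentation language used elsewhere in the protocol; this is close to the shape the spec eventually converged on, but the names here are illustrative:

    enum {
        reserved(0),
        leaf(1),
        parent(2),
        (255)
    } NodeType;

    struct {
        NodeType node_type;
        select (TreeHashInput.node_type) {
            case leaf:   LeafNodeHashInput leaf_node;
            case parent: ParentNodeHashInput parent_node;
        };
    } TreeHashInput;

The explicit node_type tag is what makes TreeHashInput parseable and rules out leaf/parent hash-input ambiguity.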
Suggest PreSharedKeys is only used once, can be inlined in GroupSecrets Use GroupContext for the internals of GroupInfo\nNote: was fixed in In , we did: So only the following remain: Use GroupContext for the internals of GroupInfo\ncurrently described at time of use. In the \"Pre-Shared Keys\" section, we should just specify that it must always be a fresh value of length The ReInit provisions require , but those for branching do not. Presumably these should be the same? Currently, the processing rules for the PreSharedKey proposal say “MUST be external”. This seems overly restrictive; for example, it rules out the scenario in Figure 8. Suggest “MUST NOT be branch/reinit” instead."} {"_id":"q-en-mls-protocol-f400e351345f956b13b6bb3163ec050f9d6bc13afe0ddf5404d0a8ef3a7d2783","text":"The enum was missing the limit. The wasn't using a TLS environment.\nNAME shouldn't ResumptionPSKUsage include (255)? enum { reserved(0), application(1), reinit(2), branch(3), (255) } ResumptionPSKUsage;\nFixed in\nSeems reasonable to me."} {"_id":"q-en-mls-protocol-14f3bca7d6d6ae8e0c2cb3e6e10f95ef74734570cc3eee186f7ed2f8867ddb74","text":"Interim 2022-12-08: Distinguish \"time-invalid\" from \"invalid\" There is a slightly more general issue: When catching up, there might be messages sent by members whose credentials were valid when the message was sent, but have since expired Recommend accepting messages from members with time-invalid credentials when catching up\nI think a note on consistency would work a little better in the architecture document, especially because it applies in the general case as well as the time-varying case. I filed URL\nLatest change uses \"expired\" to refer generically to all time-based reasons for a transition. (So, excluding , as NAME suggests.)\nWe ran into this case, which does not seem to have a defined solution within the RFC and I'm wondering if this ambiguity should be resolved at the RFC level, or be left up to the application. Existing group with Alice and Bob. Bob had a valid certificate upon being added to the group by Alice. Bob's certificate expires . Alice does a commit to add Charlie to the group. Bob accepts the valid commit to add Charlie. Charlie follows 8.3 and validates the tree. In this process if certificate expiration is enforced, then Charlie would drop the Welcome, which would not be optimal. Solution 1: Leave this behavior up to the application, consider it part of the AS. Solution 2: Add an explicit mention that credentials that expire should be considered valid after expiration once they are a member of the group. Solution 3: A committer MUST validate that all existing credentials in the tree are still considered valid at the time a commit is made. If an existing credential is no longer considered valid, then the resulting commit MUST contain a Remove proposal for the leaf containing that credential.\nOf course it would be better to have the same credential validation algorithm, but at the protocol level, there is no reason to constrain all clients to that, applications can enforce a consistent policy across clients. So, in my mind this is not symmetric between senders and receivers and solutions 1 and 3 are not incompatible, however 2 is not acceptable IMO.
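To make the enum fix discussed above concrete, here is a hedged sketch of validating a ResumptionPSKUsage code and drawing a fresh psk_nonce; KDF_NH stands in for the cipher suite's KDF output length and is an assumption here:

```python
import enum
import os

class ResumptionPSKUsage(enum.IntEnum):
    RESERVED = 0
    APPLICATION = 1
    REINIT = 2
    BRANCH = 3
    # The (255) in the TLS syntax only bounds the enum to one octet.

KDF_NH = 32  # assumed: output length of a SHA-256-based KDF

def fresh_psk_nonce() -> bytes:
    # psk_nonce should be a fresh random value of the KDF's output length.
    return os.urandom(KDF_NH)

def parse_usage(value: int) -> ResumptionPSKUsage:
    usage = ResumptionPSKUsage(value)  # raises ValueError for unknown codes
    if usage is ResumptionPSKUsage.RESERVED:
        raise ValueError("reserved ResumptionPSKUsage value")
    return usage
```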
I would suggest the following: Alice MUST verify that she considers all credentials valid before adding Charlie and act if it is not the case (Solution 3) Charlie MUST check all the credentials upon joining and act according to its authentication policy* Charlie MUST NOT send any application message before she considers all credentials valid\nAre you suggesting that it should not be a requirement that each member of the group has the same validation algorithm? That would create forks in any case where individuals disagree on a particular credential's validity. We can of course punt to the application, but I feel like that should at least be cautioned against?\nI don't quite see how the validation algorithms are different. They are the same, just executed at different points in time. Or am I missing something here? In any case it would be unfortunate if Charlie couldn't send any messages until someone committed to their proposal. So I would propose to change point 2 of the algorithm suggested by NAME to the following: Charlie MUST check all the credentials upon joining. If any group members have expired credentials, Charlie MUST issue a commit to remove the affected members.\nI am saying that it is not a requirement. Most applications will want to use the same algorithm, but the protocol doesn't need to make that mandatory, it is just a convenience for applications. The protocol is designed so that each client can make its own choice. It would not create forks: in, for example, the case where Alice decides to remove Bob, Bob is removed for everyone. I would be OK with the change proposed by Konrad but we need to be aware that this is a restriction on what clients can decide with respect to authentication (especially in a context where there is no AS).\nI think it's also important to remember that there may be a delay between when a user is sent a Welcome and when they process it. So when a user processes a Welcome, the first epoch they see after joining may be quite old and have expired credentials for that reason, and this epoch may not even be the most recent epoch anymore. So I would lean towards solution 2 actually, of \"members that are joining a group just need to ignore expired credentials.\" And if the application wants to have a policy of not sending new messages to a group if there are expired credentials present, then that's fine but not something we need to require in the spec. Edit: Or maybe we do want to specify that, but I'd be clear that it's a check done before sending new messages, not part of processing the Welcome.\nAlthough I started the dialog here around certificates and expiration, I think maybe it is useful to ignore the idea of \"expired\" credentials for now, because I agree with the comment by NAME that we should try to avoid being opinionated in that regard. I think what I am trying to get across is that regardless of what credential system you use, the protocol can only operate without forks if the validation of that credential is consistent across all members while also taking into account the asynchronous nature of the protocol. If each member of the group has the same state for epoch N, they should all independently determine that new credentials presented as part of a commit are valid or invalid regardless of when they process that commit. I'm viewing a credential being invalid as a trigger for rejecting a commit, which maybe in itself is a bias. Not sure if the above belongs in the protocol, the AS definition, or nowhere at all :-).
If everyone thinks this is too restrictive of a statement I'm fine with that. Maybe a better solution to this issue is that there could be a separately defined standard way of dealing with certificates within MLS that can be more opinionated than the base protocol and introduce concepts discussed above like MUST remove of expired certs on commit, use an external packet timestamp for consistent expiration checking on the receive side, etc. I think a lot of applications would benefit from that and NAME NAME and I have been doing a lot of experimentation in that area.\nAfter a couple long chats with NAME and NAME here's where I stand on this. A guiding thought I have is this: IMO creds aren't just for getting in to a group. They should also be necessary for continued participation. After all MLS is designed for sessions that can last years. That means, I'm uneasy with an MLS design that allows me to join a group (epoch even) where someone has an invalid (e.g. expired) cred. Worse yet, I don't want MLS to let me send application messages to users with invalid creds at the time of my sending. With that in mind, how about this approach which tries to leave as much freedom to an app/AS while still giving guidance to an app/AS that wants to ensure valid creds are needed for ongoing participation in a group and, optionally, that forks are being avoided as much as possible. MLS clients MUST validate the creds at all leaves in a ratchet tree (via the AS) in the following cases: a. Sending a commit (validation done on new epoch's tree after applying commit). b. Receiving a commit (validation done on new epoch's tree after applying commit). c. When joining a group. d. When client wants to send an application message. Based on this MLS now gives 3 guarantees. Call an application/AS \"consistent\" if it ensures that for any given epoch and cred in its ratchet tree, the result of any client validating the cred is the same (i.e. regardless of the time the validation is performed and which rule of a. b. or c. triggered it). (Edit: Just to further clarify \"consistency\" doesnt mean validating the same cred but when taken from different ratchet trees should have the same result. It only concerns validations pertaining to the same epoch but performed by different clients at different times.) 1) Regardless of the app/AS's consistency and any insider attacks, clients only ever join epochs in which all members have valid creds. 2) Regardless of the app/AS's consistency and any insider attacks, clients will never send application messages to receiver with invalid creds (at the time of sending). 3) If the app/AS is consistent, all parties follow the protocol and all packets are delivered by the DS then no forks will occur. My justification for this approach is that it strikes the following balance: On the one hand, we now give guidance to app/AS devs for how they can enforce (1-3) if thats what they want. On the other hand, we don't actually tie an app/AS's hands at all (e.g. compared to mandating almost no checks ever). That's because MLS says nothing about how cred validation should be implemented. Hard-coding always returning true is an MLS spec compliant AS. So if an app/AS doesn't want the constraints imposed by the above rules it is still free to hard-code those validations as returning true; i.e. it can skip the checks if it really needs to. 
In summary: the idea is basically to tell app/AS devs when/which are the right checks to perform to get good security & availability but also to leave it up to them to make those checks meaningful. Finally, regarding \"consistency\": in my mind creds are validated relative to a context (e.g. a time-stamp for checking expiration, a root CA cert, a blockchain or even particular block for checking wallets and/or balances, etc.) One way to get consistency is for the app/AS to ensure that all clients use the same context when checking the creds in a ratchet tree regardless of when they perform their checks.\nIn my mind, applications are only ever in one of two states: A.) I'm reading backlogged messages, or B.) I've read all the messages and I'm waiting for new ones. When you're in state B, you can send messages and also check time-related things. When you're in state A, you have no idea when the messages you're reading were truly sent. We don't currently provide any way to communicate a \"context\" against which time-related things should be validated in state B. So I don't see how we can implement b or c from this list?\nImplementing b. and c. in some spec compliant way is actually trivial because the spec doesnt require \"consistency\" (in the sense that I defined that term in my previous post.) The only thing the spec says about consistency is that if you want guarantee (3.) then you need consistent cred validation. So e.g. always returning true when validating in cases b. and c. while doing something more interesting for cases (a.) would be fine; i.e. spec compliant. You just wouldn't necessarily get guarantee (3.) is all. But I figure what you meant is how to implement (a.) b. and c. to be consistent, right? And that'd be a good question that any app trying to avoid forks would have to think about. (Notice, the RFC already creates this concern for such apps with out my proposed changes because to avoid forks now you already need commit sender and receivers to agree on their validation.) In our opinion, we landed on leaving it up to the app/AS how it guarantees consistent validations because we think there are a bunch of different but reasonable ways this could be done. Yet, which of those is best (even among just the couple we could come with on the fly) depends on things like the app's security assumptions, communication models and access to third-party services, not to mention the details of how the AS works. To start off with, it wasn't even totally clear to us that the \"context\"-based syntax really applies to all reasonable AS designs. (Thats how we ended up on the notion of \"consistency\" as that is the more abstract and minimal guarantee we really needed to avoid forks. Context is just one way we thought of to get consistency.) But say the AS does conform to \"context\"-based syntax. Maybe the app/AS ensure consistency by only ensuring that the (possibly different!) contexts being used by clients still produce the same result. Further, even if we are more opinionated and assume an app/AS will ensure an identical context is used in all validation cases this still leaves room for many variations. Who chooses the context used for a given epoch? Maybe the commitor? Maybe the AS? Maybe the DS (which we already have to trust it to avoid forks)? Maybe a group-context extension? Also, how is the context distributed? Out-of band? By the AS? By the DS along with commit/welcome packets? Along with group contexts? Each made sense to us in the right setting... 
One thing we toyed with for a while was the requirement that the committer authenticates the particular context C they used when creating a commit packet (e.g. by using C as associated data or a PSK or something). But, not only is that already pretty opinionated about how consistency should be enforced, it also wasn't clear to us what the actual point is. TBH I was for it at first out of instinct (and ignoring the being opinionated part for the moment), but NAME did a good job of convincing me that I couldn't actually justify it beyond \"it feels right\". What kind of, otherwise not possible, attack is this trying to defend against? In the end, the point is there were enough plausible but diverging options for how to get consistency that we didn't feel comfortable enforcing anything that constrained an app/AS's options. But it's not due to lack of ways to solve for consistency. We just didn't want to be opinionated in a way we couldn't STRONGLY justify.\nThanks NAME I'm not sure I agree with checking every time I send an application message as TLS sets a precedent for not checking in that case. I do agree though that I view each epoch as a session renegotiation of sorts, which means that you reevaluate whether credentials you previously thought were valid are still valid according to whatever rules are in place. I believe this conversation winds up in a place where it's a very small statement in the main doc that explains how to avoid forks, and then an entirely separate document for each credential type (starting with certificates) that aims to set guidelines on how one could achieve various goals like not allowing expired certs etc. I'm all for helping write that document as Wickr will be using the x509 system in production quite a bit.\nSolution 4: Bob has to do an UpdatePath before his Credential expires with a LeafNode with a valid Credential (certificate). If he doesn't, the other members should Remove him from the group. Same goes if Bob's Credential is revoked.\nNAME I think the core of the discussion at this point is more around the fact that the base protocol does not have any concept of \"expiration\" of credentials, so their behavior is undefined. Either we put an opinionated statement around it in the core protocol, or just punt to applications / another RFC. Thoughts on that?\nHi NAME , My opinion is that whatever credentials you have in an MLS group should be valid (including not expired, not revoked, not too early, etc.). Therefore it would logically follow: It is the responsibility of each member to make sure their LeafNode credentials are valid and consistent with the group policy at all times. Any member of the group that detects another member is out of policy or invalid can remove the offending member. To the extent possible, a DS that detects that a member is out of compliance with group policy or no longer has valid credentials, can try to remove the member (ex: via an external proposal). The definition of \"valid\" could be done per-credential type or per profile.\nSide note: under the general rubric of \"validity changes over time\", revocation is also an issue! Overall, my thinking aligns with what NAME says here. Saying that the credentials presented by group members should be valid at all times is a concise, clear standard. It is arguably implied by the current requirement to validate credentials on joining (if you also assume that the group should be joinable at any time). The more difficult question is how it is implemented in practical reality.
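A hedged sketch of the committer-side sweep that several comments above converge on: validate all leaf credentials against a context the group agrees on, and fold members that fail into the commit as removals. The Leaf type, the removal tuples, and the timestamp-based policy are hypothetical application-level stand-ins, not spec API:

```python
from dataclasses import dataclass

@dataclass
class Leaf:
    index: int
    credential: dict  # stand-in for an MLS Credential

def invalid_leaves(tree, validate, context):
    return [leaf for leaf in tree if not validate(leaf.credential, context)]

def pre_commit_removals(tree, validate, context):
    # Solution-3 flavor: members with invalid credentials are swept into
    # the same commit as Remove proposals.
    return [("remove", leaf.index) for leaf in invalid_leaves(tree, validate, context)]

# Example policy: expiry checked against a context timestamp that all
# members agree on for the epoch, which gives the "consistency" property.
validate = lambda cred, ctx: cred["not_after"] >= ctx["timestamp"]
tree = [Leaf(0, {"not_after": 200}), Leaf(1, {"not_after": 50})]
print(pre_commit_removals(tree, validate, {"timestamp": 100}))  # [('remove', 1)]
```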
I don't think we can really write hard requirements around this. The only remediation is Remove, and we don't have other situations where we say a client MUST send a specific proposal. In part because this might conflict with application-level policies about who is allowed to send Proposals/Commits. And it seems like there are valid situations where availability might trump authenticity (cf. browsers' click-through on expired certs). Overall, this seems like more of an operational question than a core protocol question here. In terms of how to reflect all this in documents: The details of how to manage credential freshness seem better suited for an implementation guidance document. (I believe NAME is gathering ideas for one?) In the MLS protocol spec, we could add some high-level guidance, something like the following: I think that would fit well either in the Credentials section (between \"Credential Validation\" and \"Uniquely Identifying Clients\") or in the security considerations.\nWe could use the sender_generation field of the MLS message to signal what was the global app generation of the sender. Functionally, this allows receivers to know if they received all previous application messages from the sender. In the current design, if S sent M0, M1, M2 you can know if you missed M1 when receiving M2, but you cannot know if you missed M2. There are many ways of doing this: global sender counter, counter between two operations of the sender...\nWe seemed in agreement at the interim because it prevents application message suppression attacks\nLooks great thanks NAME I like the separate treatment of expired vs. revoked. Looks good to me now!"} {"_id":"q-en-mls-protocol-1571b6331f7a5bad4d6c49d6e8d1008e3851ace38cbd2fa4ad4deff6e752ceee","text":"URL This definition of a handshake message is being defined in terms of a Proposal or Commit object, neither of which is defined in this section. Furthermore, PublicMessage or PrivateMessage aren’t defined either at this point in the text. To improve clarity, consider defining these key terms. The use of \"respectively\" in the text is suggesting that PrivateMessages don’t have integrity protection, which is not accurate. (see the Cryptographic Dependencies section of the HPKE specification for more information). Is this Section 4 of RFC9180? If so, why not say that? What would be the circumstance where the application_id extension would be treated with equal standing as something from the AS? Why can’t this be a \"MUST\"? Unsaid is what to do if the proposal list is not valid. Please clarify. Is this normative behavior required, or could it be left up to the application? The mixing of updates/removes on the same node and the SHOULD guidance already means that this \"recovery from an invalid node\" may not be consistent across clients/DS-es. Same observation on recovering from ReInit with any other proposals. Is there any additional retry mechanism? Let’s say the sender of the proposal doesn’t see it reflected in the epoch after it sent the proposal. And then not in the next epoch. Does it try resending indefinitely? This setup seems a little simplistic.
If the application data is encrypted to begin with, how does the adversary know the question being posed by Alice?"} {"_id":"q-en-multipath-b9e1a7ff797912c99ffa325a23dbb53d329cd6af282b1fb9cd5a96ccb2d56e05","text":"The draft states that Shouldn't the result be 6b2611489cba2b63a9e873e2 instead of 6b2611489cba2b63a9a873e2?\nYes, I submitted PR for this issue."} {"_id":"q-en-multipath-a8044c8155124f61777a19313e8431e1800b567146b3e741f91dbbfb9ed9a4c4","text":"From PR from NAME\nLet's do another editorial pass and possibly some re-org first and then address this issue. I think we all already agreed that we only need one of the two. So we only need to decide which name we keep.\nI prefer to use the name \"Path ID\", as it is more intuitive in the context of multipath.\nThe conflict was solved. LGTM. Sounds good."} {"_id":"q-en-multipath-769ebbecde33b9d2595f2f03c88d8d61273b849946789d0efcfc7b9cba0659c2","text":"Let's give the facts to the implementers, without providing overly specific guidance either. and may also address .\nJust to not forget this, but this is mostly editorial. Probably present the ACKMP frame first, and then PATHABANDON and PATHSTATUS Drop reference to I-D.liu-multipath-quic as the PATHSTATUS frame is now integrated in the draft. Based on and , and If we agree on , we could also drop the text in PATH_ABANDON stating that on any path, not only the path on which the referenced Destination Connection ID is used.\nWe probably need to add some text in the scheduling considerations where control frames (PATHSTATUS, PATHABANDON, but also MAX_DATA,...) should be sent on a good working path, ideally with low latency.\nI would say the same applies to ACK_MP\nFor ACK we recommend (SHOULD) to send it on the same path because of RTT calculations, and I believe the scheduling text discusses that already.\nThinking about this some more. I believe we should say NOTHING. The point of the standard is to specify expected behavior when a node receives protocol messages from its peer. The basic statement is that most frames, including control frames, can be sent on any path -- the only exception being PATH_CHALLENGE frames and PATH_RESPONSE frames. So my preference would be: 1) say nothing, because the purpose of standards is to specify requirements for interop, not provide guidance to implementors. 2) if we do say something, just say that these frames can be sent on any valid path, based on preferences of the application and the implementation.\nAs for ACKMP frames, the text says \"ACKMP frame (defined in ) SHOULD be sent on the same path as identified by the Path Identifier. However, an ACKMP frame can be returned via a different path, based on different strategies of sending ACKMP frames.\" That's an OK compromise, leaving open the possibility for implementations to use any strategy they want.\nAlternatively, spell out the considerations to make when picking a path? Does it matter, and if so, why? This would be useful to know when new frames come along in the future and need to think for themselves how multipath might affect them.\nSure, but at this stage we have precious little experience. For example, I have a user request to specify affinity between a stream and a path. I can certainly provide them an API to do that, but this is basically research. And then, affinity applies to stream frames, but whether it should be applied to control frames is debatable.
I am concerned that the guidance that we provide will be so generic as not to be useful, while still requiring long debates and delaying the publication of the draft. I think guidance is best provided in research papers, and possibly in textbooks.\nI agree we should avoid the potential time sink. I wasn't suggesting guidance but rather a fairly succinct statement like \"The path that frames are sent on is a local choice, which could affect the behavior or performance of the QUIC transport or delivery of application data. There is no guidance provided by this document on strategies for path selection for frames beyond the control frames for the multipath extension itself.\" And perhaps gathering consensus on that now avoids people asking the question in later review rounds.\nNAME I agree. Your proposal is basically equivalent to my suggestion that \"if we do say something, just say that these frames can be sent on any valid path, based on preferences of the application and the implementation.\" Your text is good.\nSorry, I did read your comment and immediately paged it out. Credit to you for proposing that option earlier\nLooks good, but the description of congestion is imprecise. Either drop it or fix it."} {"_id":"q-en-multipath-7c4347223a835a0bceb7c907b154f8a7c914ba9d48ee27243d580ca8c823b5dc","text":"Each line is limited to 72 characters.\nI verified that this PR produces the draft that we expect. I did one small change: remove spaces at the end of lines.\nThere are many instances in the markdown source where a whole paragraph is entered in a single line of text. This makes the reviewing harder, and messes up the \"what changed\" view when reviewing PRs.\nJust to be on the same page, how should we format the text? I see two possible ways. Hard-constrain the number of characters per line to a given limit (e.g., 80) Make one sentence = one line I have a slight preference for option 2, but I can of course live with the other or a mix of both.\nI would prefer something like 72 characters, because the last 8 positions are used for numbering the cards in your deck."} {"_id":"q-en-multipath-f9ff98016559e5fe7a5d2ad008a38a93694077a245a8c0346bc1585a10a819c7","text":"My understanding is that if I am a receiver that wants to proactively cause a sender to stop sending, I should use the explicit way of sending a path abandon frame. The idle timeout handles the case when a receiver is unaware of a path change that leads to communication failure. If that is the purpose, as long as the receiver can receive packets in the sender-to-receiver direction and acknowledge on another path, it looks to me that we should still allow the sender to use that path. But I feel if we do that, we are diverging from the bidirectional nature of a path described in the draft, which warrants more discussion.\nI agree with NAME that if you want to close a path, you should use the abandon frame. This sentence covers the case where a path was idle for a while and therefore should be closed silently as it might not work anymore anyway. However, if you have received packets on the path or have an indication that packets you've sent on the path were received by the other end, the path is clearly not idle and working.
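For the control-frame scheduling question above, a minimal sketch of the "any valid path, local preference" option; the Path fields and the lowest-RTT heuristic are assumptions for illustration, not normative guidance:

```python
from dataclasses import dataclass

@dataclass
class Path:
    path_id: int
    validated: bool
    active: bool
    smoothed_rtt: float  # seconds

def pick_control_frame_path(paths):
    # PATH_CHALLENGE/PATH_RESPONSE are pinned to their path; everything
    # else may use any validated path. Here we simply prefer low latency.
    candidates = [p for p in paths if p.validated and p.active]
    if not candidates:
        raise RuntimeError("no usable path for control frames")
    return min(candidates, key=lambda p: p.smoothed_rtt)

paths = [Path(0, True, True, 0.120), Path(1, True, True, 0.035)]
assert pick_control_frame_path(paths).path_id == 1
```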
I also agree with NAME that that means you only know for sure that the path is still working in one direction and that might need further discussion.\nThis was discussed in issue and PR .\nRegarding the maxidletimeout, RFC9000 says \"To avoid excessively small idle timeout periods, endpoints MUST increase the idle timeout period to be at least three times the current Probe Timeout (PTO).\" As the PTO is per path in multipath QUIC, when maxidletimeout is used to close a connection it makes sense to set maxidletimeout = 3 * max(PTO1, PTO2). But when it comes to closing a path, I am not sure if the timeout for path1 should also depend on the PTO of path 2?\nI think there are two idle timeouts: the \"connection idle timeout\" and the \"path idle timeout\". For the \"connection idle timeout\", I agree that it should be at least 3 * PTO_max. Triggering this timeout closes the connection. For the \"path idle timeout\", this might be a way to have a stronger guarantee to stop using paths (closing them) after some inactivity than the text proposed in . Still, if we define such a mechanism, we need to know whether all paths should have the same path timeout value or not (in the latter case, a transport parameter might not be the best way).\nI would avoid having a different timer for a path than for a connection. That would make the protocol more complex without a lot of benefits. If a host wants to use a shorter timeout per path than the connection timeout, it can simply define it and use a PATH_ABANDON frame to indicate that the path is abandoned. However, we need to be clearer in 3.2.3 to indicate that a path can be considered valid if either packets are received on that path or packets sent on that path are acknowledged (possibly on another path)\nPresented at IETF-113 but no time for discussion.\nI have submitted PR to resolve this issue according to the above discussions.\nPR was merged but forgot to close issue...\nIn 3.2.3, we have the following paragraph: When more than one path is available, servers shall monitor the arrival of non-probing packets on the available paths. Servers SHOULD stop sending traffic on paths where no non-probing packet was received in the last 3 path RTTs, but MAY ignore that rule if it would disqualify all available paths. To avoid idle timeout of a path, endpoints can send ack-eliciting packets such as packets containing PING frames Section 19.2 of [QUIC-TRANSPORT] on that path to keep it alive. Sending periodic PING frames also helps prevent middlebox timeout, as discussed in Section 10.1.2 of [QUIC-TRANSPORT]. This paragraph implicitly indicates that we consider a path to be active if we send and receive packets on this path. This definition works well for MPTCP where each data packet triggers an ACK on the same path. In MPQUIC, a server could have two paths that it uses to send data. However, the client might decide to always send the ACKs on only one path (e.g. the lowest delay one). In this case, the two paths are clearly active, but the server could consider the higher delay path to be inactive since it does not receive packets over this path. It could send PING frames to trigger data packets, but then it would need to send one PING every 3 RTTs, which seems excessive. I would suggest rewriting the text as follows (I removed Server, because I think the recommendation should be generic): When more than one path is available, hosts shall monitor both the arrival of non-probing packets and the acknowledgements for the packets sent over each path.
Hosts SHOULD stop sending traffic on paths where either: (a) no non-probing packet was received in the last 3 path RTTs or (b) no non-probing packet sent over this path was acknowledged during the last 3 path RTTs, but MAY ignore that rule if it would disqualify all available paths. To avoid idle timeout of a path, endpoints can send ack-eliciting packets such as packets containing PING frames Section 19.2 of [QUIC-TRANSPORT] on that path to keep it alive. Sending periodic PING frames also helps prevent middlebox timeout, as discussed in Section 10.1.2 of [QUIC-TRANSPORT].\nThe word \"either\" means that STOP sending becomes true if either (a) is true or (b) is true. If I understand correctly, the actual meaning is to use the word \"both\" here, instead. So stop sending if both (a) no non-probing packets received and (b) no packet was acknowledged.\nYes, I think this is a good clarification and yes \"either\" might be confusing here.\nI commented on the PR. I don't like the idea that receivers would have to stop acknowledging packets in order to signal preference to not use a path. We need to think about that a little more.\nSo we have a choice: either say that nodes should regularly send non-probing packets over the path that they want to keep using, or if we don't want that, accept the idea that acks on other paths show continuity, and require \"abandon path\" to terminate a path that is otherwise working, acknowledges packets, etc.\nYes, I think the second option is what we are aiming for with the new text. Stopping ack'ing packets to stop someone sending on a path is wrong and is something you should never do. If you receive a packet successfully you should ack it (also because that would otherwise confuse congestion control). But we have the abandon frame exactly for that case. However, it also seems wrong to me to require sending acks on the same path as the data is received. So if you decide to always only send acks on one path and all payload data flows into one direction only over multiple paths, the current text would require you to send potentially unnecessary ping packets over each non-ack'ing path or you would end up closing those paths, which also is wrong as everything seems to work fine. So the question really is: do we want to require pings in this scenario (payload traffic over multiple paths but acks only on one path) in order to ensure that all paths are usable bidirectionally? I think that's unnecessary overhead and believe Olivier's proposed addition is the correct thing to do.\nI can look at what I implemented in picoquic: If there is no traffic in either direction for the idle period, the path is automatically abandoned. If traffic is sent and acked, or traffic is received, the path is not considered idle. This is pretty much what Olivier proposed. If there is a \"timer loss\", and if there is an alternate, the path is put in a low-priority list. The node will send ping and try to get an ack. If no ack is received after N trials, the path is abandoned. If \"abandon path\" frame is received, the path is abandoned.\nI marked this issue as editorial as we have an editorial PR for it. However, I opened a new issue for the per-path idle timeout discussion (). Do we need another issue to clarify path closure or sending of keep-alive traffic?
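A small sketch of the activity rule as clarified above (stop sending only when both conditions hold) together with the RFC 9000-style "at least 3 x PTO" floor applied across per-path PTOs; the numbers and function shapes are illustrative only:

```python
def path_is_idle(now, last_non_probing_rx, last_tx_acked, path_rtt):
    # Stop sending on a path only if BOTH (a) no non-probing packet was
    # received AND (b) nothing sent on it was acknowledged, within
    # 3 path RTTs.
    window = 3 * path_rtt
    return (now - last_non_probing_rx > window) and (now - last_tx_acked > window)

def effective_idle_timeout(max_idle_timeout, per_path_ptos):
    # With per-path PTOs, applying the RFC 9000 "at least 3 x PTO" floor
    # at the connection level means using the largest per-path PTO.
    return max(max_idle_timeout, 3 * max(per_path_ptos))

assert path_is_idle(10.0, 8.5, 9.9, 0.2) is False  # a recent ack keeps the path alive
assert effective_idle_timeout(0.5, [0.3, 1.0]) == 3.0
```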
However this might be covered in other parts of the draft already...?\nThe modified text looks good. Looks good to me. The text looks good to me."} {"_id":"q-en-oblivious-http-f321a53063316f6029843a4456216be4c46c680fff0aea36c9f518af61cdac6f","text":"Let's be clearer about what it means to proxy here. Also, since it is the same text, talk about TLS assumptions more directly. This will interact poorly with . Sorry about that NAME\nFrom Vienna chat. A proxy is not a generic HTTP intermediary, and therefore is not subject to the same rules.\nFrom NAME Generic proxies would forward unknown HTTP headers between clients and targets. We should be very clear that oblivious proxies are not generic, and should not be doing things like forwarding unknown headers arbitrarily between client and target, or target and client.\nrequests. This includes the Via field, the Forwarded field {{?FORWARDED=RFC7239}}, and any similar information. A client does not depend on the proxy using an authenticated and encrypted connection to the oblivious request resource, only that information about the client not be attached to forwarded requests. Obviously, you need encryption on one or the other side of the proxy, right? Probably this document should require it on the inbound side, but I don't see the text that requires it. Did I just miss it?\nYeah, it's only necessary on the inbound side, per the analysis, though the document assumes it's on both sides."} {"_id":"q-en-oblivious-http-8fd0b0631b62f71a16d19e4c8503a070be8e0260ab51ff69867365ea64fb235e","text":"NAME noted that it was important to note that the Gateway needs to be trusted if gateway != target. As a practical matter, that means that you probably want gateway == target.\nHi, Since the gateways are seeing request/response pairs, there is a potential here for a side-channel attack at the gateway itself or at a relay (if I understood the draft correctly - if not, please clarify my misunderstanding here). Are there mitigations planned for this type of issue that you could expand on in the Security Considerations section perhaps?\nDoes the new text in address this concern?\nI don't think so, because the claim is the gateway does not have intimate knowledge of the contents of the request/response. If that assertion is true, the gateway shouldn't be able to ascertain information about the actual contents from those pairs (directly or indirectly, via a side-channel attack). If the burden is on the application developer to ensure that request/response pairs traversing the gateway are resilient to side-channel attacks on those pairings, then it would be good to explicitly call that out in this spec (perhaps in the security considerations section?) Thoughts?\nNAME the gateway necessarily sees all request and response pairs, and will therefore always learn information about each transaction. This is fundamental to the design. Whether or not the gateway uses that information for malicious purposes, e.g., by providing different responses to different requests for the same target resource, depends on the trust model. And the change in addresses that.
I don't think further text is needed here.\n\"Finally, a relay can also generate responses, though it is assumed to not be able to examine the content of a request (other than to observe the choice of key identifier, KDF, and AEAD), so it is also assumed that it cannot generate an Encapsulated Response.\" I feel that a relay could certainly perform a side-channel attack here as well using R/R pairings.\nCan you please be more specific here? What type of attack could the relay do to violate client privacy that is not already documented in the draft?\nTypically in HTTPS scenarios, it is more difficult to perform side-channel attacks as a flow observer, especially if multiplexing of requests and responses is performed. In this design, if I understand it correctly, the requests and responses are being broken out into distinct exchanges that the relay can see and therefore it makes it easier for a relay to perform these types of attacks that would be more difficult had the relay not been used in the first place. If my understanding of the protocol is incorrect here, please feel free to point that out.\nYou seem to be describing fingerprinting attacks, wherein the observer uses features extracted from otherwise encrypted traffic to try and learn information about the underlying (unencrypted) data. Yes, the relay does have a more fine-grained view of each exchange compared to a network observer in this protocol since it can see explicit message boundaries, and that might impact the efficacy of these attacks. This is partly already addressed in the document: We could extend this to note the relay's view of message boundaries. I don't think any additional text is needed for the gateway.\nWhatever you feel is best. I think some language around the message-level risks this creates vs non-relay scenarios might be helpful. Does the client intentionally break things up in a way that crosses the underlying application message boundaries (e.g. more like packetization) with a relay? I have more questions about the gateway functionality that we can discuss separately.\nThis means that the client really needs to trust that the gateway doesn't muck with it. As a practical matter, this incentivizes clients to choose a gateway that is effectively the same entity as the target. They don't have to, but the amount of trust required of the gateway is higher if gateway and target aren't operated by the same entity. We touch on this, but we could be clearer about it."} {"_id":"q-en-oblivious-http-cec08dc75dbdfa9940e8aaaac18bab72ce7a9dee4b76b1b8e217001d251ec31d","text":"I was just going to add something about correlating requests based on their content, but then it seemed like the text was duplicated some. Here is my attempt at making this better. I also expanded the notion of \"configuration\" to include the identity of the relay. That made it a little tricky - we really do want diversity of relays - but it makes more sense to talk about minimum anonymity set size rather than to imply that there can be a single configuration, because that is unrealistic.\nHi, The specification talks about State. Could I suggest that it explicitly mention 'or other user-identifiable payloads.' Because things like JSON documents (e.g. a JWT) and SAML assertions are not 'cookies' but can carry information that might be user-identifiable. Thoughts?\nI think we've given short shrift to the way in which the content of requests can be used to link them.
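One way a client might blunt the content-length-based linking raised above is to pad serialized requests to bucket sizes before encapsulation; the bucket choice below is arbitrary and the whole thing is only a sketch:

```python
BUCKETS = [256, 1024, 4096]  # arbitrary illustrative sizes

def pad_to_bucket(message: bytes) -> bytes:
    for size in BUCKETS:
        if len(message) <= size:
            # Binary HTTP tolerates trailing zero-byte padding.
            return message + b"\x00" * (size - len(message))
    return message  # oversized: sent as-is, still linkable by length

assert len(pad_to_bucket(b"x" * 300)) == 1024
```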
Perhaps some new text for Section 8...\nNAME that text looks good to me. Wanna toss up a PR?"} {"_id":"q-en-oblivious-http-358fa79ffbe6b77e6946689b1e69edcca3ab443bc0c769bf132dd3f69d0b4071","text":"Most of the text was correct in that we talk about making a request of a resource. Where the text refers to making requests of a server, those usages seem to be correct.\nSection 5 details HTTP usage. It seems it's technically correct but the section closes with a sequence of paragraphs that jump back and forth between statements about how an Oblivious Gateway Resource treats requests or responses. Perhaps these could be consolidated to present requests first consistently. I.e. swap the final two paragraphs around\nmnot recently made a PR (now merged) that replaced \"requests of a server\" with \"requests to an origin server\" and that feels both grammatically and technically more correct. However, there are still a few cases peppered through the doc that use \"requests of a server\". I'd kindly ask the editors to revisit these and consider if they could be made more consistent with the above"} {"_id":"q-en-oblivious-http-e3d60f055877e613b56bf7d665a750f22fe5c106be158460aab858c900055590","text":"I presume the idea here is to add jitter to the system (different delays for different requests) rather than compound delays on a single request? Even if not, this paragraph could probably benefit from some language fixups. There's a missing word and \"might not be what some clients want\" could benefit from a rephrasing to make it stronger.\nclearer, thanks"} {"_id":"q-en-oblivious-http-24e01fc4958f80b8f50d33be4201b89f64bfbd3f19367dd53090291e4383ce55","text":"This starts by moving the good text from to the \"Errors\" section, where I realized that it duplicated some of that language. It was more complete, so I merged the two. I then added some language about what happens to errors prior to decapsulation. This doesn't define an error signaling format. Though that might be useful, it is also a lot more work and I'm leery of scope creep in this document. My view: if you get an error, you have screwed up somewhere. You probably want to have a human look at what is going on. The privacy consequences of automated responses are difficult to work out properly. Hence this proposed change.\nThis is an improvement but still insufficient with respect to configuration mismatch signaling.\nLet's start with this one and I'll look at addressing the open issue in a follow-up.\nThanks, I think this is clear enough now."} {"_id":"q-en-oblivious-http-03a0f6ef74f128b17909d6d98d7056691b85d4c5f6a64c0db386e5650d44b298","text":"From NAME Interesting that this is the first time OHTTP appears in the text, excluding pseudocode and mediatype - please expand (or define somewhere else, for example in terminology)."} {"_id":"q-en-oblivious-http-44d1df1c595c5dce7cb4a1c81be048127e4e9a14c3c23f5f2d68abfd7b908c3b","text":"Switches to an 8-bit key identifier here as well. This format now doesn't need varints. Yay. Implemented this in my ohttp library and it's fine.\nSplitting this from . It seems likely that a single KEM key will support multiple KDF or AEAD options (NSS will soon support 3 of the former, and 2 of the latter, maybe 3 of each). Having a configuration that includes a single set of (KEM, key, and key identifier) with a list of KDF+AEAD pairs, akin to ECH, would make it easier to support that sort of deployment. Without that, key identifiers need to carry information about the KDF and AEAD.
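A hedged sketch of the configuration shape proposed above: one KEM key under an 8-bit key identifier plus an ECH-style list of (KDF, AEAD) pairs. The field widths and the fixed X25519 key length are assumptions for illustration, not the final encoding:

```python
import struct

def parse_key_config(buf: bytes):
    key_id = buf[0]                                  # 8-bit key identifier
    (kem_id,) = struct.unpack_from("!H", buf, 1)     # 16-bit KEM ID
    npk = 32                                         # assumed: X25519 public key length
    public_key = buf[3:3 + npk]
    (alg_len,) = struct.unpack_from("!H", buf, 3 + npk)
    algs, off = [], 5 + npk
    for _ in range(alg_len // 4):                    # each pair is two 16-bit IDs
        kdf_id, aead_id = struct.unpack_from("!HH", buf, off)
        algs.append((kdf_id, aead_id))
        off += 4
    return key_id, kem_id, public_key, algs

def request_header(key_id, kem_id, kdf_id, aead_id) -> bytes:
    # Carrying the chosen KDF/AEAD in the encapsulation header lets that
    # choice be authenticated by feeding the header into HPKE as info/AAD.
    return struct.pack("!BHHH", key_id, kem_id, kdf_id, aead_id)
```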
To support this, the format needs to be adjusted to carry the KDF and AEAD identifier. In doing so, we probably want to encode the choice of KDF and AEAD in the key schedule or AAD accordingly.\nShould the key ID length be explicitly authenticated, e.g., by including it in the AAD, or should the AEAD implicitly authenticate things? Unclear!\nShould we even bother with an AAD at all? The best reason is that the keyID determines which key the server uses, which is information that factors into decisions. (A related question might be why doesn't HPKE take variables as input so that things like key identifiers can be integrated into the key schedule.)\nMy take: yes. I need to think through the implications of no additional authentication at all, and in the meantime, I'd be more comfortable with simply authenticating everything possible. I think it was discussed, but it makes the interface a mess. And since applications can accomplish the same thing by shoving what they want authenticated into the AAD, it was voted down.\nYeah, you and I both know that putting stuff in the AAD isn't a real substitute for having it in the key schedule. I completely understand how this would mess with the purity of the design, but TLS managed by wrapping its labels better (which also addresses a known problem with HKDF not being injective).\nFixed by virtue of making the key ID fixed length in ."} {"_id":"q-en-oblivious-http-c96efb698c231a86e2fa28d8d5e69e4a1c9ab59088426c948334ac644fc9483d","text":"This came out of , where we realized that the relay padding is useful, but needs to employ different methods than client or gateway. The relay can't use binary HTTP padding; it has to insert stuff at the HTTP layer. Include citations for padding in those documents. Note that these citations are slightly misleading. Most relays won't have access at the level necessary to add that padding. More likely, the relay will need to do as NAME originally suggested, namely adding fields of an appropriate size. But we don't have good citations for that, so I'm cheaping out. (Note that RFC 9112, Section 7.1.1 talks about using chunk extensions for padding, but I'm going to pretend that doesn't exist; it's even less usable than the HTTP/2 and HTTP/3 options.)\nTake it or leave it!"} {"_id":"q-en-oblivious-http-210fb5ce5a85357bf074c2a110e6a7387044272e67e4365e8726ab5239c8f098","text":"Comment by NAME In How does a client know this? Preconfiguration? Are we talking ms? sec? minutes? Any advice to implementers?\nIt can't be less than 2s, because the granularity of the field doesn't permit detecting anything finer than 1s. I've done what I can.\nThis still does not seem to address: Maybe just cut off the “unless” part if there is no signal for this? Paul\nNAME also included URL which I think should address your concern:
Do we want to explicitly mention cookies?"} {"_id":"q-en-oblivious-http-b95e8f194a071498c04e1cf9adbb983866e2c06a17f12aa58beaad93053b2542","text":"This makes the choice of a 1:1 mapping clear: avoid being used as an open proxy through explicit configuration of your ACLs. cc NAME\nThe proxy has a fixed mapping. It's resource maps to just one request resource. That's a deliberate choice that we should probably be much more up front about.\nYes, I think this makes sense, but it took me a while to realize it when implementing. Having a section on mapping state on the proxy would be useful.\nVittorio says: This relates in particular to the potential for generic services with open destination (not a capability in the proposal) to be used for nefarious purposes (spamming, bot net command and control, hiding illegal activity, etc...)."} {"_id":"q-en-ops-drafts-bb3733e5d0d2b385746a1a91b690ab397fb62f8e534ddb7ec68c01833dc75e1f","text":"(which didn't require any non-editorial changes; the review is done now)\nAlso call out invariants (esp. in header structure and handshake) explicitly: v2 may/will break functionality outside the invariants.\nDocument is currently in in with final invariants; do one last pass to make sure everything documented in Section 17 as expose information appears here (which is basically the same check)."} {"_id":"q-en-ops-drafts-919b0ae5b322ec5d078983148d001e80cd48b0367abb1d26c0e4146816b519b4","text":"This document should mention unidirectional streams Some words about the meaning of an abort signal vs. a clean close would be useful, both ingress and egress. Maximum stream bytes = 2^62-1. Saying \"there are special streams for QUIC\" is inaccurate (it's just one stream now, conceptually), and I would not bother to describe the CRYPTO frames this way.\nAlso: stream aborts are always unidirectional. what can/should applications assume about stream IDs? are they accessible, and does the app have any control over them?"} {"_id":"q-en-ops-drafts-f398ae7b13a301b291c1c9b33d5022d2d54ba2c364b5c066afec10b634d19410","text":"However, we also use the terms Server/Client Hello datagram to cover all QUIC Packets of the first two datagrams sent. This term is not used in the transport draft (anymore) and I find it confusing given the TLS terminology. Should we maybe use a different term?"} {"_id":"q-en-ops-drafts-f8431bf39da861fd9b5419b1f2a93631d7384c52ad51faaf2d8514ce3f3c0b53","text":"addresses\nIn \"Connection ID and Rebinding\": \"Even when encrypted, a scheme could embed the unencrypted length of the Connection ID in the Connection ID itself, instead of remembering it, e.g. by using the first few bits to indicate a certain size of a well-known set of possible sizes with multiple values that indicate the same size but are selected randomly.\" I almost went and filed an issue against quic-lb to do this instead of just encoding a plaintext length like it does currently. But is this a good recommended practice? ISTM that long headers would provide a bit of a Rosetta Stone for this mapping, making it not particularly effective for the complexity.\nThe idea was not to conceal the length but to avoid likability by identifying linked connection based on the existence of a certain length field. This might look slightly more random but I guess you also have to change your mapping more time to time. So not sure. 
I'm fine to remove that half sentence and maybe add a sentence about likability instead...?\nYes, I would prefer deleting the quoted text, as I think it's a solution that adds complexity for little apparent benefit.\nOkay, will do that change after your PR has landed."} {"_id":"q-en-ops-drafts-750bb2b477faffd7190e56ba9d5b4b398e8681416f4ccc249e0c28f2fc7e529a","text":"Re SNI and ALPN: we can certainly point at the work in progress in TLS that will encrypt this information, but IMO the document should describe the current state of the world. I've just added something pointing to the SNI encryption document; while it seems that the tunneling technique described there could pretty clearly be applied to ALPN fronting as well, is there a reference for encrypting ALPN we can use here that doesn't leave details as an exercise for the reader?\nI would actually prefer to describe the wire image of TLS and associated protocols in a separate document and just point to it because that's actually not only applicable for quic\nI don't have a reference for encrypting ALPN. We don't have one. Maybe I should write one... On the plus side, it's something that benefits from the same techniques as SNI (and less likely to cause deployment issues, though that is only because we haven't ossified around it as much). As for the general point about now vs. later. One of the things we've tried to do is be very clear about what the invariants are, or what promises we are making. It's not the same thing as true ossification defense, but if we make it clear what we are committing to, maybe we won't have people relying on stuff we might want to change."} {"_id":"q-en-ops-drafts-9a1c8d1533468e0cd3a5017c25725a5cef562f06caed30937876a7c4e4eb21a6","text":"The point of is a little odd, because it seems to imply that ALL operations require access to the information that is integrity- or confidentiality-protected. That's clearly not the case, I think, as it is merely pointing out that if you want to modify data, then you have to be an endpoint. I think that this might be better phrased as: Note: This drops the last sentence of the original paragraph. If proxying is out of scope, then it doesn't help to talk about it so much. The point about cooperation should do.\nCreated a PR based on your proposed text but added some of the information about proxies to make it more clear what kind of proxies we are talking about."} {"_id":"q-en-ops-drafts-f6d0a2a2b02a2e0190f17035edfb88373a9935d6bd4423f926498485da135cf5","text":"This is a proposal by Gorry to address issue\nThis works for me.\nShould TAPS be cited as an informational ref to an API that is being discussed by the IETF, and which could apply to QUIC - at least to define the primitives needed in an API, and has quite some insight into protocol racing/fallback as described in applicability?\nI'd be happy to do so ;-) however we avoided in the end to talk to much about specific of the interfaces or needed nobs. So not sure it fits or where to fit it.... where would you want to say something?\nI think the WG should consider writing a short section on this. It's important enough that we have a WG developing relevant specs for how to build APIs and the need for this type of fallback/racing.\nI'm not sure the QUIC WG has enough experience to say anything on the matter. The fallback discussion is about how (Application Mapping + QUIC) could fallback to (alternative application mapping + TCP). 
For HTTP/3 this isn't a straight fallback - if QUIC fails then a different HTTP version over TCP is required. This has consequences for applications that use HTTP/3 as a substrate (MASQUE, WebTransport) which rely on features only it provides like unreliable DATAGRAM. I think the Applicability spec might need to say more here. What I'm trying to highlight is that there isn't a straight shootout between TCP and QUIC. If TAPS frames things like that then I don't know that it helps Applicability.\ntaps allows you to use the same interface for multiple transports and also provides you a way to state requirements (hard or soft) to restricts the selection of possible transports. In that sense for some application TCP and QUIC might be not make a real difference as long as all features requested are support, e.g reliability and in-order delivery. Still not sure what/if we want to say something in applicability statement...?\nNAME we were trying to not focus on http in this document. But I do agree that the fact that there is a different version needed for http to fallback could still be interesting. How about adding as sentence like the add the end of the first paragraph in section 2: OLD In the case of HTTP, this fallback is TLS 1.3 over TCP. NEW In the case of HTTP, this fallback is TLS 1.3 over TCP. Note that for HTTP a fallback to TCP also requires the use of a different version of HTTP, as HTTP/3 has been optimised for use with QUIC, e.g., by removing multiplexing from the HTTP layer and used the provided mechanism within QUIC. We could even add something more like: \"Other applications that are optimised for QUIC might need similar adaption in order to be able to function on TCP.\" However, not sure that adds much... What do you think?\nI'm still not sure that helps. And I'm trying not to derail Gorry's OP. The main grounding I have in my understanding is HTTP. But I appreciate the Applicability document is general purpose, so apologies. Many HTTP applications operate at the semantic level. HTTP semantics are version-independent and get profiled into versions that have a wire-syntax, which depends on different properties of a transport. This could be an optimised use of QUIC features, or it could be an outright requirement for those features. There isn't an interoperable notion of \"I want to do HTTP/1.1 over a reliable in-order transport, TAPS selected QUIC for me, great I'll use that\" nor \"I want to do HTTP/3 over a reliable, multiplexed, in-order transport, TAPS selected SCTP for me\". The transport fallback behaviour for HTTP semantics is quite straightforward. Pick a different HTTP version. This could lend itself to a race like scenario. However, there are applications that use HTTP as a substrate and things like MASQUE and WebTransport place requirements on the use of features only available in HTTP/3 - I doubt anyone will try to backport features of QUIC to TCP in order to satisfy the requirements of layers above.\nI was thinking of something like: /The IETF TAPS specifications [URL] describe a system in which multiple protocols can be provided below a common API. and describe some of the implications of fall back, which specifically precludes fallback to insecure protocols or to weaker versions of secure protocols. / Perhaps inserted after this: /These applications must operate, perhaps with impaired functionality, in the absence of features provided by QUIC not present in the fallback protocol./ ... 
and before mentioning fallback to TLS over TCP.\nNAME you are right that QUIC provides features that TCP doesn't provide, and if you use those features you cannot just straight-away replace one with the other. However, a simple one-to-one replacement is not implied when talking about falling back. Rather than trying to back-port features to TCP, what you need to do is implement that feature in your application instead (as H2 does for multiplexing). But this is also true if you try to use TCP and fall back to UDP but need reliability; then you have to implement it in the application layer. I was trying to be brief in my proposed text and maybe the word \"optimise\" is not the right one. We can of course say more. Do you want to give it a try and propose some text if you think it would be helpful to say more?\nThanks. I'll have a think on some text. The important thing is that H2 wasn't designed as a fallback for H3, it is its own thing. There are few features that an application desires that it couldn't also achieve by just using HTTP/1.1 and multiple connections for multiplexing.\nMay I ask what these features are? E.g., the ability to send unreliable datagrams should not matter in the best-effort Internet (reliable transfer, when not needed, is just slower). In RFC 8923, we made an effort to identify exactly what MUST be exposed to an application because there's no such fallback. This covers TCP, MPTCP, UDP, UDP-Lite, SCTP and LEDBAT; alas, it excludes QUIC, since this work pre-dates its finalization (by far, despite the date: it spent a long time in the RFC Editor queue). So now I'm curious what QUIC adds to this list.\nMainly multiplexing, I would say, but that's probably covered by SCTP.\nThe main feature that QUIC provides to the HTTP substrate is that it is a secure transport that can work on the Internet and is easily integrated with WebPKI. These are meta-features in a way. Having parity of the deployment and security models allows more straightforward rationalisation when designing applications and e.g. Web APIs on top of this substrate.\nThanks! About muxing: is a certain prioritization behavior guaranteed (not just a wish) when muxing with QUIC? SCTP could do that but we don't currently give such guarantees in the TAPS interface. About security: right... that's what I expected. TAPS isn't going to fall back to a less secure version of what you wish for... so I guess \"mostly encrypted header\" isn't a part of any API but as you say it's a meta-feature - it's implicit in the choice of QUIC, and when that's desired, falling back wouldn't happen; a TAPS system should always only give you QUIC. I think what's missing is a QUIC API that makes these wishes explicit...\nPersonally I'd say, for an HTTP application, that QUIC packet header protection is less of a concern. More that QUIC provides transport security on par with TCP+TLS. Often an application can use the same TLS and crypto libraries for both. Fallback from QUIC to TCP can be a downgrade. But an application that does this can (and should) be aware of the security properties of the transport it is using. With HTTP, imagine a client that connects to URL and receives . This advertises the availability of QUIC and HTTP/3. If that transport connection fails for some reason, then there is not a viable fallback to TCP+TLS. 
This is an unlikely scenario but the specs allow for it to happen.\nI would also have thought that encrypting data is more important - but surely having fall-back \"under the hood\" and just being aware of what you got (this can be queried in TAPS) is nicer than having to take care of it in the application. Fall-backs can get complex: e.g., when multi-homing, considering IPv6 ... a TAPS system can have a rich set to choose from and implement it all efficiently while shielding applications from this trouble - the application is still in control of things and can express limitations, as well as query what it got in case it did give the system some freedom. This HTTP case of preventing a fall-back... well, it is what it is :) No fall-back in this case, clearly.\nI remain to be convinced that the TAPS API will address the way that applications (I am familiar with) intend to use and interact with QUIC.\nI think we're drifting towards a discussion of whether a mapping document should be written for TAPS to QUIC... which is a different topic. My observation was only that the TAPS architecture is something the IETF is working upon, and it has documents that describe how to make an API with functions and racing/fallback in a very different way to sockets. The guidance about what is an acceptable \"alternative\" connection appears common, at least enough to say there is a similarity and cite it.\nThe suggestion for a short informational reference seems good to me.\nThere is already PR\nWe merged PR . Is there anything we want to add or can this issue be closed?\nThe merged PR addresses all of my concerns. I'll let NAME be the judge whether his original issue has been addressed.\nThat looked good to me.\nThanks all!\no/ lgtm"} {"_id":"q-en-ops-drafts-43b521c52ea7bc1b2403aed7dc6d3abd2dffbaa1c86010ee1af056cf517a181b","text":"Thanks to Mike for noticing this.\nThis is true, but there is nothing inherent to the design that prevents RESET_STREAM (and STOP_SENDING) from using a different code space from CONNECTION_CLOSE. Though we have the policy in HTTP/3 that any stream error can become a connection error, the same isn't necessarily true of other protocols. Maybe we can just avoid mentioning this possibility...\nI don't think I have a strong opinion on removing or keeping this sentence. Would it be any better if we say: \"Application error codes are recommended to be defined from a single space that applies to all three frame types.\"? Or is there any other guidance we can or want to give to application designers about the use of error codes?"} {"_id":"q-en-ops-drafts-1850e9eb0ef232a0d9872d4655e606af7e57319502b9d6d53de285f0d670c106","text":"proposed by NAME\nPlan is to update the PR to point to the IANA registry instead.\nAdded a ref to the IANA registration section in the transport draft. 
Should be ready to go now!"} {"_id":"q-en-ops-drafts-4bf3739be0a0e8d9a8edc0d94a67ebdd366c49744a5e3d7df506b80eccb7e123","text":"This mainly wiggles the existing text around and tries to fully qualify transport vs. application to help explain the ways an implementation might do things, which is independent of how an application protocol might be designed (for example, H3 doesn't require incremental processing but most people come to a realization that it is the best way to do things).\nThe new text is good and clear, but it doesn't really explain what happens when credit is released at the receiver. Could we say a little more: it takes 1/2 RTT for credit to reach the sender, which causes new data to be released; for a large RTT, this means that credit has to be sufficient to avoid blocking or be sent promptly, otherwise transfers will stall for periods of an RTT. (I know TCP does this stuff for the app and grows the window, but QUIC I suspect needs something to happen to release the credit.) I know people closer to the Apps can write this sentence better than me; from my perspective I just see the effects when an app blocks or releases credit in big chunks... over a large path RTT and the network performance suffers.\nThanks Gorry. I think I agree. How about I tweak the \"Timely updates can improve performance\" paragraph to say \"Timing of updates can affect performance\". I can add a mention that the extension of credits on the data receiver side is enacted by emitting MAX_DATA/MAX_STREAM_DATA frames, which take at least a 1/2 RTT to reach the data sender side. I think the useful additional applicability statement to be made based on the above is that the data consumer read rate may be disjoint from the delivery rate and the responsiveness of the signalling channel back. For a highly asymmetric link, the download rate can be application-limited well below the theoretical bandwidth simply because an application is spending too great a share of its time reading packets and not giving enough time to emit QUIC packets. Does that sound ok, or am I missing your ask?\nThis is definitely heading in the direction I was hoping; it helps avoid people developing on a LAN and then discovering the effects on other paths later. Please add something.\nI've reworked the paragraph with my attempt to address NAME comment. I've introduced the terms uplink and downlink, which I'm not totally happy with. If someone has a better suggestion I'm all ears.\nIssue: Sect 4.3. Flow Control - we should have a subsection also encouraging timely credits. This ID discusses flow control deadlock, which I think is important. I suggest we should also note that timely release of flow control credit is also important (QUIC transport has some text to point to). This need is evident when the path has a large RTT, and there can be a large BDP, where the whole operation of QUIC can be dominated by the way in which the application manages the flow credit. This might not be obvious, because the equivalent rwnd update in TCP is handled by the transport rather than the application.\nThe point is the audience for this draft is applications that use a QUIC library. I would assume that the actual credit handling is performed in the transport/in QUIC directly, usually without an interface for the application to directly impact it. Do you propose the application should have more direct impact on flow credit handling? 
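A minimal sketch of the receiver-side credit timing being discussed (illustrative only; send_max_data is a hypothetical hook for queueing a MAX_DATA frame in an outgoing packet):

```python
# Illustrative receiver-side flow-control bookkeeping. The point is the
# timing: a MAX_DATA update needs at least 1/2 RTT to reach the sender,
# so credit should be extended well before it is exhausted.
class FlowControlReceiver:
    def __init__(self, window: int, send_max_data):
        self.window = window        # credit granted beyond consumed bytes
        self.max_data = window      # currently advertised limit
        self.consumed = 0           # bytes the application has read
        self.send_max_data = send_max_data  # hypothetical frame hook

    def on_app_read(self, nbytes: int) -> None:
        self.consumed += nbytes
        # Extend credit once half the window is used; waiting until the
        # limit is reached would stall the sender for around an RTT.
        if self.max_data - self.consumed < self.window // 2:
            self.max_data = self.consumed + self.window
            self.send_max_data(self.max_data)
```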
I guess we can add a pointer to the right section in the transport draft but not sure we should/can say more here...?\nI agree this seems mostly like a QUIC transport/library issue and I wouldn't expect the application to have much control here. I can imagine applications setting a max value?\nI generally agree with Mirja and Ian. However, on a closer inspection of this section I think it would benefit from a crisper definition of what it means to read data vs. process data. It's a reasonable application mapping implementation design to read data out of the transport in order to release flow control but still buffer it (or just drop it on the floor) in application space before processing it as application data.\nNAME can you propose text?\nI'm content that flow control is something the libraries need to get correct - the TCP people spent energy to get this automated, although QUIC's multi-streaming makes this trickier. Lucas's text suggestion would improve things by making it clearer how the API is intended to be used.\nNAME ?\nI will add some text (just need to find the time)."} {"_id":"q-en-ops-drafts-589a0957ccb1732fea34bd1d8f57b83c0d536ebda480ffc01ac08e34daa19aea","text":"Based on the discussion in webtrans at IETF-110, I guess we could be more explicit that QUIC streams do not guarantee to be received in any particular order."} {"_id":"q-en-ops-drafts-3a81ccd37a0c0c435da595eb5aae493f73e55ebcd1def89d284e60cde2e196d8","text":"It could be worth noting two general further things about DSCPs: (a) \"When multiplexing multiple flows over a QUIC connection, the DSCP value selected should be the one associated with the highest priority requested for all multiplexed flows.\" (b) \"If a packet enters a network segment that does not support the DSCP value, this could result in the connection not receiving the network treatment it expects. The DSCP value in this packet could also be remarked as the packet travels along the network path.\"\nA sample PR: \"More aspects of setting DSCPs\""} {"_id":"q-en-ops-drafts-2e1f9de0bdd748a893daeea94d005a9b12e2bb1f1e569e3e9cb7249466df6c08","text":"Separates for manageability and this PR for applicability\nI actually find this sentence weak: \"Therefore it is not recommended to use different DiffServ Code Points (DSCPs) {{?RFC2475}} for ... Considering that QUIC v1 as defined has a single congestion controller and recovery handler and doesn't treat different DSCP values as separate transport flows, I would think that this should be more strongly formulated. However, I could see that one could provide a bit more context to this DSCP and single QUIC connection vs. multiple QUIC streams discussion by referencing Section 5.1 of URL and being explicit about the point that QUIC v1 is similar to SCTP in regards to the issues raised.\nI think the new text does work."} {"_id":"q-en-ops-drafts-a48cfe9367530312e7cf0aee9273d7ef1b10e038fd53dfe6e924e0f208042500","text":"Suggesting a rewording to clarify various issues and correct some mistakes.\nPreviously the scope of this text was agreed, but what appeared seems to me to have various NiTs: • Saying \"every other packet is acknowledged\" is a recommendation in RFC 9000. I suggest citing also. 
• The text seems to conflate endpoint processing optimisation and network path performance optimisation: /constraint/ wasn't the correct word, but even /constrained/ was a value judgment on design tradeoffs for any mobile/RF link - and many bandwidth-limited cabled links, which could be avoided by simply saying what is intended and saying either /which can impact performance of some types of network/ or /which can impact performance across some types of network/ ... depending on your viewpoint as an operator or application. • I think we can do better than URL, suggesting removing some and leaving might. This might be resolved in a PR ."} {"_id":"q-en-ops-drafts-051284e58e22ca53e878bd1799f2bd37611c9a00ffc5ccd641e6a1eb79570da1","text":"One sentence added to address remaining bits for\nNAME I didn't replace \"five-tuple\" with \"connection\" in this sentence in this PR: \"QUIC's stream multiplexing feature allows applications to run multiple streams over a single connection, without head-of-line blocking between streams, associated at a point in time with a single five-tuple.\" because it already has connection in the sentence; so we could only remove the last half sentence, but I actually think there is a value in leaving this in because some applications may actually care about the 5-tuple.\nWFM. We should probably do an editorial pass to make sure we're using terms like \"5-tuple\" and \"connection\" and \"path\" properly everywhere anyway.\nSee also the section on Streams in the transport draft: \"Stream offsets allow for the octets on a stream to be placed in order. An endpoint MUST be capable of delivering data received on a stream in order. Implementations MAY choose to offer the ability to deliver data out of order. There is no means of ensuring ordering between octets on different streams.\"\nSee PR\nlgtm modulo editorial. circleci doesn't like this but can fix after it lands."} {"_id":"q-en-oscore-groupcomm-74adb429149a690b38ad9cc75f83eedad8ef4cbf042fad4433d745d4ab4a7446","text":"NAME Good point about being more explicit. I actually think it is valuable to reference an existing encoding of public keys providing also key type and curve, like CRED_X in URL Also, there is a discussion about a format for public keys in COSE, which, if it is specified, will probably be referenced by EDHOC instead of CRED_X. It would be good if there is a common format used in all these instances. For the time being I propose that we reference CRED_X with an empty text string subject name, and keep track of the COSE activity.\nWe should consider replacing the NIST reference with SECG [1] to harmonize with EDHOC [2]. [1] URL [2] URL (edit) ... but the commits in [3] mainly refer to the NIST spec, so I propose we keep it as is for the time being. [2] URL\nIt looks good to me; we can give more details on one point. For the two HKDF invocations in Section 2.3.1, we can define more explicitly how the public keys are expressed when concatenated to form IKM-Sender and IKM-Recipient. Given a public key, that can simply be: For OKP signature keys, the compressed 'x' coordinate, i.e. what would be expressed in the key parameter -2 of a corresponding COSE Key. For EC2 signature keys, the 'x' coordinate followed by the 'y' coordinate in this order, i.e. what would be expressed in the key parameters -2 and -3 of a corresponding COSE Key. This should be sufficient, i.e. it shouldn't be needed to have a more structured encoding of public keys providing also key type and curve, e.g. 
like done for CRED_X in URL"} {"_id":"q-en-oscore-57bffe0f05cac5a14b4e84a11d807af1d39a14a4250492104228192f8c9c36f2","text":"Back to one processing description Observe is now primarily inner Simplified processing Explicit cancellation by intermediary not supported"} {"_id":"q-en-oscore-3dfa536703e97ec8fe6c2fe9bfeef92e160532686cc694247d93d6475efa2969","text":"I don't know why your commit got included.\nI think I do: I modified the commit in order to add the description (to close issues). Now there are 2 identical commits: one is the one above (30475e3) that does not close issues, the other is the one in the master branch (7d4cbc2).\nThanks for your contribution Jim"} {"_id":"q-en-perc-wg-42b3cc92d43a023727a4277cb7d69ce8f8dab1c6889acc0b219f64b2ebc96d83","text":"Replaced Media with Key Distributor in one important place. Replaced the DTLS tunnel with TLS according to the latest tunnel draft. A couple more wording fixes."} {"_id":"q-en-perc-wg-142327eae773b6ea0c5d424a18696b25150f99d9c4832e8406de30313981f7c8","text":"This is much more compatible with extensibility patterns in TLS stacks, and in a way that is very forward-compatible with the forthcoming DTLS 1.3, while still being implementable in pre-1.3 stacks. DTLS 1.3 has a notion of post-handshake handshake messages, like and , which are acknowledged with an message. The framework described here simply sends as yet another post-handshake message. Depends on\nr? NAME\nI have a bunch of comments but really I could live with this like you have it now."} {"_id":"q-en-perc-wg-339d4e9fc518c8ef82d8c947883adfccf3a4c61ff8a52d5a88c9a06913b67556","text":"We're talking about the key wrap function here. Of course there's a requirement to remove padding, but it's explicitly clear in the AES Key Wrap RFCs. Seems like this text might be wrong now.\nThinking about this a bit, I think I will redo this to just make it so decrypt has to remove the pad - I think this is old historic text that makes no sense given we use key wrap\nSection 2.3 The decryption function returns a plaintext value P that is at least M bytes long, or returns an indication that the decryption operation failed because the ciphertext was invalid (i.e. it was not generated by the encryption of plaintext with the key K). These functions have the property that D(K, E(K, P)) = (P concatenated with optional padding) for all values of K and P. Each cipher also has a limit T on the number of times that it can be used with any fixed key value. The EKTKey MUST NOT be used more than T times.\nI think the goal was to avoid having a length indicator in the encrypted data, as the stuff outside this knows the size (and thus how much pad to remove). Thoughts on what we should say here in the draft? NAME NAME\nShould we clarify how the code knows the length of the padding on the decryption side?\nI'm unsure what the question is. I reviewed that section of the current draft and it sounds clear. Perhaps the confusion is over the padding? The AES Key Wrap logic defines how to insert and remove padding. You can see how that's implemented here: URL There is a \"with padding\" function that inserts required padding per the RFC. So, what's the question?\nNAME .. I think the question is on clarifying when the padding gets added and removed. Is it outside enc/dec or inside? We might just need to clarify that.\nIt's defined in the RFC. 
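For what it's worth, the "with padding" variant of AES Key Wrap really does handle padding internally on both sides; a minimal sketch using the Python cryptography package (example key values only):

```python
from cryptography.hazmat.primitives.keywrap import (
    aes_key_wrap_with_padding,
    aes_key_unwrap_with_padding,
)

kek = bytes(16)                 # key-encryption key (example value only)
plaintext_key = b"\x01" * 10    # not a multiple of 8, so padding applies

wrapped = aes_key_wrap_with_padding(kek, plaintext_key)
# Unwrap verifies the integrity value and strips the padding itself, so
# the surrounding spec text does not need to respecify pad removal.
assert aes_key_unwrap_with_padding(kek, wrapped) == plaintext_key
```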
I don't think we should be trying to explain it again here."} {"_id":"q-en-qlog-f924ea51ea533ca4850162bd58f64df1869422c54732c3f34d302dce483a8f93","text":"NAME asked how to best approach logging from a server's perspective, given that things like version negotiation and stateless retry are not inherently tied to a single connection. We should add some informative guidance on how to best approach this to the spec, depending on how much state you're willing to keep around. Some options: low state: keep a separate qlog file for the entire server. This logs vneg, retry, etc. Then, when a connection is truly accepted, start a new .qlog for the individual connection, containing all events thereafter. The URL can then also contain an event signalling the acceptance of a new connection for later cross-linking between the files. low state: keep a single huge qlog file for everything, using the \"group_id\" field to allow later de-multiplexing into separate connections (I believe quant does this atm). stateful: if you already track vneg/retry and link them up with the final connection, you can output them in the per-connection qlog file as well. Maybe also briefly talk about some of the trade-offs in each option. Also talk about how to approach server-level events like server_listening and packet_dropped in separate scenarios.\nI lean on the side of minimally and clearly identifying the challenges, maybe signposting some features, and leaving it up to implementers to figure out how to address this. They can always push back later in the specification lifecycle if something is deemed missing."} {"_id":"q-en-quic-v2-554fc2d8b29759fe6a69cacb5f81ee1faee7c422352d0c0496e5c39682f9571c","text":"Also, add structure to the \"changes from v1\" section.\nIt would be a simple matter to switch around the packet type codes in long headers, e.g. 0x0 = Retry, 0x1 = Handshake, 0x2 = 0-RTT, 0x3 = Initial, so no one ossifies on 0x0 = Initial.\nI'm ambivalent on this. It's fairly trivial to have the long header parsed in version-specific fashion, so it's easy to implement, but I'm not sure how much value it has. It also makes reading packets slightly harder."} {"_id":"q-en-quic-v2-2ff77b0e937e85671e28075fb789fa90744875794f0f5cbcb14e84fca0895fe0","text":"The correct term from the VN document would be \"original version\", but just removing the word initial avoids the confusion\nFewer words means fewer nits"} {"_id":"q-en-quicwg-base-drafts-bdaa428f75602673a7c8d269c27d55d24c300c14c375786e654248ecfa77c421","text":"In we agreed on this. Considering that the decision affects handling of multiple frames, I think it's worth clarifying the general rule in the draft. For discussions regarding specific frames, please refer: MAX_STREAMS, MAX_DATA, MAX_STREAM_DATA - RESET_STREAM, STOP_SENDING - ACK -\nIf you want to be explicit, perhaps also mention that stream ranges can be retransmitted with different boundaries or partially, if this isn't already covered by some existing text.\nNAME While that's possible, I think we should better handle retransmission rules of specific frames in separate PRs. The reason I suggest having this catch-all sentence is because to me it seems that the concept is sometimes being lost.\nI agree that I'd like to discourage this a bit more than this PR does.\nNAME NAME Your comments make sense to me. I'll update the PR.\nMoved to 13.2, expanding the text to encourage reassembling frames containing up-to-date information. 
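To illustrate the rule being encouraged here (names are illustrative, not from any implementation): when a packet is declared lost, an endpoint re-sends the information, not the stale frame image:

```python
# Illustrative-only sketch: on loss, regenerate limit-carrying frames
# from current state instead of retransmitting stale values.
def on_frames_lost(lost_frames, conn):
    for frame in lost_frames:
        if frame.type == "MAX_DATA":
            # Re-send the *current* limit, not the value the lost
            # packet happened to carry.
            conn.queue_frame(("MAX_DATA", conn.current_max_data()))
        elif frame.type == "STREAM":
            # Stream bytes may be re-bundled with different boundaries,
            # as long as identical bytes map to identical offsets.
            conn.queue_stream_data(frame.stream_id, frame.offset, frame.data)
```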
PTAL.\nThis is implicitly assumed, and, as NAME points out in , not doing so can cause undesired outcomes. There should be normative text in the transport draft requiring this behavior.\nSo the situation is that if one has so far ACKed: 1,2,3,6,7. Then if 4,5 show up at the receiver and it is going to ACK those, it must at least send an ACK that contains 4,5 and 7, thus likely acking 4-7? It should not be allowed to send an ACK saying 4,5 in the above situation, as that doesn't indicate the receiver's total progress, i.e. all the way to 7? I think requiring Largest Acknowledged to always indicate the highest solves some issues and likely simplifies a lot of implementations. But it may result in larger ACK frames, as one will have to include the full current state between the Largest Acknowledged and however far into the past PNs you need to go. In severe re-ordering cases that could be a lot. Will this cause the ACK to be too large to include? Especially maybe in connections with a large delay-bandwidth product going full steam encountering a serious hiccup in the network. For example, if an ACK is required to ack packet 4000 + 1E9 and you have 500 gaps that have arrived in the last RTT/4 and really should be included, the result is an ACK frame that is at least 1000 bytes large, possibly bigger.\nThere are a few things this precludes, and we should understand those before doing this. 1) A very limited implementation (i.e. hardware) can't always send only one ACK block without giving up on reordered packets. I don't have such an implementation, but at some point I remember discussing this idea with people who seemed interested in it. 2) One cannot do multipath with QUIC v1, and it means the multipath design will have to use a PN space per path, as described in Christian's draft: URL I think that's likely the right choice for other reasons, but I'm hesitant to make that decision now. 3) An RTT sample can only be taken based on the largest acked. Therefore, if there is a large amount of reordering (i.e. 1 RTT), you don't get any RTT samples for an RTT. If we care about this, we can probably solve it a different way in the recovery draft, but that likely means using ack delay sometimes and not others. In regards to Martin's comment, I thought we already had text that said ACKs with lower packet numbers than the largest ACK packet number should be ignored, since they're old?\nNAME regarding your question. \"In regards to Martin's comment, I thought we already had text that said ACKs with lower packet numbers than the largest ACK packet number should be ignored, since they're old?\" At least when it comes to ECN, the editors have been working hard to reformulate, which has resulted in that there is nothing that applies this from the PN increasing for the ACKs. So to solve the ECN part, I think (re-)introducing that aspect into the text is the simplest fix for ECN. But there clearly are other aspects here.\nNAME The problem doesn't go away if you ignore acks that are lower than largest_acked. The problem is that the receiver sent the acks in the wrong order because packets were received in the wrong order, but ECN counts are going up. But you raise good points. This is a corner case. I think that with the current spec, under these conditions, ECN gets turned off since validation fails. I think that's a reasonable outcome, and I'm fine with it. 
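A sketch of the receiver-side trade-off under discussion: ranges can be capped so the ACK frame fits in one packet, dropping the oldest ranges while preserving Largest Acknowledged (illustrative only):

```python
# Illustrative sketch: coalesce received packet numbers into ACK ranges,
# newest first, then cap the count so the frame fits in one packet.
def build_ack_ranges(received_pns, max_ranges):
    ranges = []  # (low, high) pairs, ordered from largest PN downwards
    for pn in sorted(received_pns, reverse=True):
        if ranges and ranges[-1][0] == pn + 1:
            ranges[-1] = (pn, ranges[-1][1])   # extend the current range
        else:
            ranges.append((pn, pn))
    # Dropping the tail discards only the oldest information; the
    # Largest Acknowledged field is unaffected.
    return ranges[:max_ranges]
```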
NAME\nECN Validation as I currently understand it will not fail if sender-to-receiver reordering happens, as long as the ACKs from the receiver to sender are not reordered. So to my understanding, what is missing is a requirement for the sender to not perform verification for out-of-order ACKs. It will not fail if the Largest Acknowledged is not increasing, as the comparison is done on the number of newly ACKed packets versus the counts change. The current sentence in Section 13.3.2 (-17): \"Counts MUST NOT be verified if the ACK frame does not increase the largest received packet number at the endpoint.\" I interpret this to refer to the PN the ACK references (sender-to-receiver direction), rather than the PN of the receiver-to-sender direction. Can NAME or NAME fix that?\nI think is enough to address the original problem, so I'd like to close this issue. I think and are enough. NAME sound ok?\nYes, the ECN-related issue is resolved. What I don't know is whether it fully resolves the implementation requirement that a sender MUST be capable of handling an ACK frame with a Largest Acknowledged that is smaller than the highest received. From my perspective there are good and valid arguments why this may be required, however I think consensus needs to be established on that.\nNAME I am not sure if I'm following the context, but my understanding is that a sender MUST be capable of that, because we allow an endpoint to simply retransmit the payload of a packet that is being deemed lost; see .\nJust a stray thought: is this requirement really strictly needed? It would complicate transmitters that split work into multiple processing units which might not be fully synced on the latest ACK'ed.\nNAME what NAME says. Also, a sender needs to be able to simply deal with reordering on the ack path. Did you mean something else?\nMy concern was primarily about the clearness of this behavior in the specification text. In transport I find the following passages which indicate this but are not explicit. : To limit receiver state or the size of ACK frames, a receiver MAY limit the number of ACK Ranges it sends. : Largest Acknowledged: A variable-length integer representing the largest packet number the peer is acknowledging; this is usually the largest packet number that the peer has received prior to generating the ACK frame. I think there are strong hints, with the above \"usually the largest\" indicating that it is not always. I would have made this explicit in 19.3, but then I prefer being more explicit than not. I think it is a possible interop issue. Anywhere else this is more explicit? (I have read the github master version) NAME I think there are strong points for allowing an ACK frame to acknowledge only a part of the range between highest received and confirmed to have been ACKed. The two main reasons I see: high-rate flows suffering a packet loss burst so that the number of ACK ranges becomes too large to fit a single QUIC packet. Low-complexity receivers ACKing a single range of received packets and doing this for reordered packets.\nNAME As suggested by NAME in , might be the correct place. I am advocating for describing the design principle in the PR, but we might (or might not) be interested in expanding on how an ACK could look.\nSo this results in certain implications: an ACK could arrive that addresses older state, thus acknowledging some PNs that were previously unacknowledged. 
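What the thread converges on can be summarised in a small guard (illustrative pseudocode; field names are not from any implementation, and process_ack is a hypothetical entry point into loss detection):

```python
# Illustrative guard: skip ECN count verification for ACK frames that
# arrive in packets older than one whose ACK was already processed.
def on_ack_frame(conn, ack_frame, carrying_pn):
    verify_ecn = carrying_pn > conn.largest_ack_carrying_pn
    if verify_ecn:
        conn.largest_ack_carrying_pn = carrying_pn
    # The ACK itself is still processed; only ECN validation is skipped,
    # so a reordered ACK cannot disable ECN unnecessarily.
    process_ack(conn, ack_frame, verify_ecn)
```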
As 13.2 is about retransmissions, I think there are other design principle aspects of ACKs that probably should be covered, but then earlier in 13.1.\nWe're in the process of making our implementation accept smaller largest_acked values, so I wonder if we're any closer to consensus (one way or the other) on this issue?\nI think this is mostly addressed and what's left is OBE. There's text in the draft now that says: \"Processing counts out of order can result in verification failure. An endpoint SHOULD NOT perform this verification if the ACK frame is received in a packet with packet number lower than a previously received ACK frame. Verifying based on ACK frames that arrive out of order can result in disabling ECN unnecessarily.\" This would be adequate, but there's other text (thanks to ) that says old ACKs can be resent. I think we simply add a note about that in the para above, which should be adequate.\nDiscussed in London: NAME to prepare a PR for those remaining bits\nNAME any progress here?\nI think the conclusion here is close with no further action, but it'd be good for NAME to confirm.\nI've added , which I believe addresses what's left of this issue.\nNote that we agreed that proposal-ready would be used when there were approvals for PRs (or at least apparent agreement, as determined by editors).\nNAME : Yeah, my bad.\nThis gets fixed by , so marking this issue as well.\nFixed by ."} {"_id":"q-en-quicwg-base-drafts-3c2a751085dc998b9d2ce5be3fcc249352fb0ba7543d2688837f910f89bdda0b","text":"(Note that the issue identifies two issues, but one was already fixed in .)\nIn , NAME correctly observed that I messed up , but he closed his PR and I'm not able to reopen it without more git wizardry than I'm interested in practicing today. Easier just to fix it in my own branch.\nClients don't send GOAWAY, and when a client sends one, the server treats it as HTTP_UNEXPECTED_FRAME. By contrast, servers don't send MAX_PUSH_ID, but the client treats receiving one as HTTP_MALFORMED_FRAME. Similarly, if GOAWAY is sent on a non-control stream it's HTTP_UNEXPECTED_FRAME, but if MAX_PUSH_ID is sent on a non-control stream, it's HTTP_WRONG_STREAM. I think the answers here are HTTP_UNEXPECTED_FRAME and HTTP_WRONG_STREAM respectively, but we should be consistent.\nI think you're correct. There are some situations where newer errors would be more appropriate."} {"_id":"q-en-quicwg-base-drafts-3cbc90d334344cb0c2a13bd676f2eb8cd8237bbecec02927f2244670029a8ae0","text":"I find this pretty hard to read, but assuming I am reading it correctly, it basically says that if you read any packet or write any packet that has a frame other than ACK or padding, then you reset the timer. The text about \"other packet has not been acknowledged or lost\" doesn't make sense unless you have some very complicated timer scheme, because the packet won't be acknowledged or lost until after it is sent, at which point I have already reset the timer.\nYeah, this is opaque. I think that the logic is something like: • when receiving a packet, restart the idle timer • when sending an ACK-eliciting packet, if you haven't sent other ACK-eliciting packets since the idle timer was started, restart the idle timer That is, if you send 10 packets, only the first of those restarts the timer. This clearly needs to be reworded.\nI'd suggest one change to the second bullet, Martin. \"when sending an ACK-eliciting packet. 
if you haven't sent other ACK-eliciting packets since the idle timer was restarted due to a received packet, restart the idle timer\" And yes, that text is not clear.\nI like the look of the wording change in ac59e11. Is there something blocking it from being applied?\nNothing. I just neglected to open the pull request. That happens sometimes. Lots of changes in flight...\nI think this looks good, but NAME has more familiarity with this code than I do, so I'm adding him as well."} {"_id":"q-en-quicwg-base-drafts-ef4a3e84aa7558333f920e9b0923eee8eb6091f8e6af7a59ccf0089b3560825f","text":"In the recovery draft, RTT is briefly spelled out in section 5 but is used in several earlier sections. SRTT and PTO are not explained at all, it seems. TLP is spelled out. RTO has a section of its own but is referenced before that. The recovery doc cannot be a tutorial on transmission theory, but it would be helpful to list the key acronyms and possibly reference an RFC with more details. There are already sections summarising key variables in pseudocode. There are references at the end, but the reader would have to go through them all.\nYes, thanks for the note. I'm planning to do a cleanup of that doc soon.\nI fixed/defined RTT earlier in ."} {"_id":"q-en-quicwg-base-drafts-c12ed84ed5610d7777f48452de5ef78a61b253dc3fe1d4c8fff5056dc7ee35d4","text":"While we allow the consumer to start using new CIDs anytime, there are consequences of consuming them excessively, as discussed in starting from this comment. The text clarifies that excessive consumption is not what the issuer needs to support.\nNAME For HTTP3, it is natural to assume that the connections would not survive for a very long time. In such a case, capping the number of total connection IDs is a practical countermeasure against excessive issuance of CIDs. Implementations need not be as complicated as capping the frequency.\nI don't want to put this in the draft. It tells endpoints not to change connection IDs based on some unspecified (scary!) constraints that may or may not be present in a peer. I realize that there might be a limited supply in some implementations, but artificial constraints on the rate of use are not the way I would address the problem. Refusing or slowing issuance of new connection IDs if someone retires too many is a better approach. In other words, implementations should deal with this on their own.\nNAME If that's the concern, do you think clarifying that the issuer can \"refuse or slow issuance of new connection IDs\" in section 5.1.1 would be fine? The problem I have with the current text is that section 5.1.1 only says that the issuer SHOULD issue new CIDs as the consumer retires them. I think we need to clarify that doing that without limitation could be dangerous.\nNAME Moved the clarification to the issuer side. PTAL.\nOne low-level observation: CIDs need to be looked up in a hash table or similar. For a busy server multiplexing many connections, it wants this table in fast cache, also to reject invalid CIDs fast. You may be able to handle 5000 connections efficiently, but if you are required to handle 8x5000 this can degrade performance, especially on rejection. You can counter this by cryptographically mapping multiple CIDs to a single internal CID, but that is not the simplest approach. Therefore, choosing a minimum number of CIDs should be done with care. Would 2 or 3 not be sufficient for most use cases?\nNAME > Unfortunately, this is a design change, and a new one. 
just to be clear, are you referring to my comment on a limit of eight, or something earlier?\nNAME I think we should have a separate discussion around a sensible default. Part of the idea here was to make sure that we were very unlikely to run out, even around lost/retransmitted packets on different paths. Choosing a very low number brings with it other potential issues, which probably should hold up this change?\nNAME yes, I realised this relates to max_connection_ids.\nI'm going with editorial then.\nWhile I can see why you'd limit the frequency of issuing new connection IDs, I don't think you should limit the number of total connection IDs per connection.\nUnfortunately, this is a design change, and a new one. From an abundance of caution, we'll need an issue for this, and chair assessment of consensus.\nI'm not convinced this is a design change -- there was previously a SHOULD, and this adds additional text (including a MAY) around when you might choose to violate the SHOULD. However, no mandatory behavior is added, changed, or removed.\nA few nits, but this looks good to me. That said, this issue will need consensus."} {"_id":"q-en-quicwg-base-drafts-990675fc91f1c8c8b44006481ddc561b3d57b3e7364fb22bf377c058d5b3c723","text":"I suspect these are uncontroversial.\nAll three Mart[ie]ns now agree, no further objections allowed.\nLGTM, with the one change altered."} {"_id":"q-en-quicwg-base-drafts-f06d55dc2a101bd8b844852868c145f3b92c4fe98ba23e09080bf59e69051c71","text":"Another attempt to group related concepts in this section, which hopefully reads better. (This is technically design, because previously-required server behavior is now only recommended.)\nWhy do I have to explicitly cancel after GOAWAY? Given that GOAWAY already tells the client this, what's the point?\nGOAWAY tells the client that those requests aren't going to be serviced, but the streams will continue to exist. If the cleanup period is significant, having the streams open is a bad idea. However, attaching SHOULD to this is unnecessary; I think that we might instead just note that clients are free to do as they please with streams that won't be answered.\nDiscussed in Tokyo; this doesn't need to be required server behavior, but recommended (or RECOMMENDED) as a method to clean up state. Should have a reminder cross-reference to request rejection.\nLgtm now"} {"_id":"q-en-quicwg-base-drafts-d439329a9e20b3db9571ba4cf2d6a0761a8b10189dd18c5ed2bb2d6008df6856","text":"I believe UNEXPECTED_FRAME is the preferred type.\nI actually prefer the wording in the duplicate push section and wonder if we should port that to PUSH_PROMISE too"} {"_id":"q-en-quicwg-base-drafts-397ac08677d02b86c7918ebbcaede8a301d486ab30e0c8ce169bf9581f55880a","text":"Builds on .\nInitial packets need to be in 1200 byte datagrams for DoS amplification reasons. We have an exception for the case where an ACK for a Handshake packet has been received from the peer. In that case, we remove the requirement to pad. That condition is extraordinarily obscure, especially if we are suggesting that endpoints not send any more Initial packets once they have Handshake keys. 
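The two datagram-size rules in play here, sketched for illustration (the 1200-byte minimum and the 3x pre-validation limit are from the transport draft; the helper names are not):

```python
MIN_INITIAL_DATAGRAM = 1200      # client Initials are padded to this
AMPLIFICATION_FACTOR = 3         # pre-validation server send budget

def pad_client_initial(datagram: bytes) -> bytes:
    # PADDING frames are zero-valued bytes, so ljust() is sufficient.
    return datagram.ljust(MIN_INITIAL_DATAGRAM, b"\x00")

def server_send_budget(bytes_received: int, bytes_sent: int,
                       address_validated: bool) -> float:
    # Until the client's address is validated, a server may send no
    # more than three times the bytes it has received from it.
    if address_validated:
        return float("inf")
    return AMPLIFICATION_FACTOR * bytes_received - bytes_sent
```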
Removing this text is the right answer.\n+1 to the proposed resolution."} {"_id":"q-en-quicwg-base-drafts-16d278bf23957690cca64332ff4a627e7296d0f24fea631fb36dcef87136f91d","text":"I realise this may be a bit out of context for this PR, but I can't find any clear definition of when the handshake is complete other than this being magically spawned from the inner workings of TLS.\nPrevious discussions have established the CFIN is HoL blocking, partially because even though it's not strictly necessary, if client auth is in use, it's critical to receive the client's entire Handshake flight before the server processes 1-RTT packets. I don't think the current TLS text is sufficiently clear (and normative) about that. Section 4.1.1 has: \"Important: Until the handshake is reported as complete, the connection and key exchange are not properly authenticated at the server. Even though 1-RTT keys are available to a server after receiving the first handshake messages from a client, the server cannot consider the client to be authenticated until it receives and validates the client's Finished message. The requirement for the server to wait for the client Finished message creates a dependency on that message being delivered. A client can avoid the potential for head-of-line blocking that this implies by sending a copy of the CRYPTO frame that carries the Finished message in multiple packets. This enables immediate server processing for those packets.\" URL\nI agree, perhaps this section needs to say something along the lines of:\nHow is this not a duplicate of ?\nNevermind, I can fix this easily."} {"_id":"q-en-quicwg-base-drafts-56b23823a110c3649c7861cdefb0a278ac86570e0e9dbe531b3ee5b7dd625170","text":"This was always intended (and also written in the description of FRAME_ENCODING_ERROR), but didn't show up in the text somehow.\nI agree with NAME that the MUSTs that you shuffled around are statements of fact, not interoperability requirements, but the addition is fine."} {"_id":"q-en-quicwg-base-drafts-f70e37a2d4a4d6a234352b8e39093092271f6428862061d0a445880fcb2a7264","text":"…are: require unlinkability by stating: \"SHOULD NOT expose linkability\" suggest use of encryption when embedding expiration time in the token\nThanks NAME !\nMy understanding is that tokens provided by NEW_TOKEN frames should not leak information that allows observers to correlate the newly established connection with the previous connection. Such information includes the time when the token was issued, RTT, server name (when SNI is not used or Encrypted SNI is used), or even the size of the token. However, the draft seems to be vague about the requirement. I think it might be worth clarifying the principle in the draft. \"a token SHOULD include an expiration time\" (section 8.1.2) - I think we should change this to \"a token SHOULD be associated with an expiration time\", considering the fact that we are talking about a \"stateful\" design (stateless design is mentioned briefly in one of the following paragraphs). In a stateful design, an opaque identifier is the only thing that a server should be allowed to send. \"In a stateless design, a server can use encrypted and authenticated tokens to pass information to clients\" - I think we might prefer using \"SHOULD\" rather than \"can\".\nI think the current text is fine: \"include\" translates to \"put into the token and encrypt\" in my mind. Reading that a token is \"associated\" with an expiration time would give me pause.\nWould you mind clarifying where the \"and encrypt\" comes from? 
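A sketch of the "put into the token and encrypt" reading, using an AEAD so the embedded expiry (and anything else) is opaque to observers; a fresh random nonce also keeps repeated tokens unlinkable. Key management is elided and the layout is purely illustrative:

```python
import os
import struct
import time

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def issue_token(key: bytes, lifetime: float = 3600.0) -> bytes:
    nonce = os.urandom(12)  # random nonce: no two tokens look alike
    expiry = struct.pack("!d", time.time() + lifetime)
    return nonce + AESGCM(key).encrypt(nonce, expiry, b"")

def validate_token(key: bytes, token: bytes) -> bool:
    nonce, ciphertext = token[:12], token[12:]
    try:
        (expiry,) = struct.unpack(
            "!d", AESGCM(key).decrypt(nonce, ciphertext, b""))
    except Exception:
        return False  # forged, corrupted, or truncated token
    return time.time() < expiry
```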
My point is that that is the way it should be; however, the text does not seem to clarify that.\nI don't recall how I arrived at it -- perhaps some common sense. Now that I have implemented it, it seems only natural. Perhaps we should simply add \"and encrypt\" to the text instead of changing to \"associated with,\" which is less clear?\nI'm happy to know that we share the common sense :+1: As stated in the opening comment, my \"editorial\" preference goes to stating the principle that any information in a NEW_TOKEN token (other than the opaque lookup key used in a stateful design) SHOULD be encrypted. If that is to be met, I think adding \"and encrypt\" would be fine.\nI agree with NAME -- I think stating this explicitly as a SHOULD is sensible.\n(For the chairs, and triage purposes). This suggests a normative requirement that I don't think that we need, or that is already implied by existing text. There would be new normative text. I believe that is still editorial on the basis that the requirement is already implied, but will defer to your judgment.\nFWIW, what I am shooting for is a change like URL\nNAME and others, is kazuhoNAME kind of the change you were envisioning?\nI was asked to move here. What I miss is the point that the server SHOULD NOT construct the same token multiple times, because this leads to additional scenarios where tracking by a network observer becomes feasible. The referenced PR does not fix this problem.\nNAME I think we are in agreement that unlinkability between tokens is required. IMO the phrase \"SHOULD NOT expose linkability\" in the proposed text captures the concept, though I might agree that \"i.e., ...\" needs improvement.\nNAME NAME change would be adequate, and I agree with NAME that this is effectively editorial. Do the chairs want to mark this issue as ?\nNAME Yes, I agree with you on the phrase \"SHOULD NOT expose linkability\". To address my concern, I suggest including \"This includes that the server constructs a different token each time it issues tokens to clients from the same source address.\" after line 1560 of #kazuho/base-draftsNAME\nMarking this as editorial\nLooks good, modulo the minor suggestions."} {"_id":"q-en-quicwg-base-drafts-e08e62d67e74c26323a54cc3e01b8b16fd441addb39a362f1d35dd5da6a0ac80","text":"draft-ietf-quic-tls currently mentions AEAD_AES_256_CCM, however there is no way to use that AEAD in TLS 1.3 - TLS_AES_256_CCM_SHA256 does not appear in the . If I understand this correctly, the presence of AEAD_AES_256_CCM in the document was an editorial oversight.\nThanks. This was extremely confusing: you meant CCM, but wrote GCM in tons of places."} {"_id":"q-en-quicwg-base-drafts-9debc51186134a4fc233fa1228ac82fb69049a267539045a43d3e43f3e3f769b","text":"I must have missed this when making the PR to remove the version negotiation text. Thanks NAME !\nI wasn't aware that there was an issue. Sorry for that!\nNo worries, thanks for doing this!\nWe can probably remove big chunks of section \"8.1. Protocol and Version Negotiation\" since we don't do version negotiation anymore in v1.\nFixed with . Thanks NAME"} {"_id":"q-en-quicwg-base-drafts-d7d2eaed0c9d16c2bb9d9e1dc74a6ad9615afc4f918a359514fb0db53bf5106d","text":"As of , the client MUST use the same handshake message.\nNAME That's quite complicated, and I'm not sure if we've chosen the right design here. I'd like to keep this PR editorial though. 
I opened to discuss this.\nIn URL, NAME points out that after a Retry, all 0-RTT packets sent in the first flight are still valid, although they will probably have been dropped by the server (unless there's significant reordering). That means that a client SHOULD retransmit all 0-RTT packets in the second flight. We now have two different scenarios: In the case of a HelloRetryRequest, the client is prohibited from sending 0-RTT, and has to cancel retransmissions for all outstanding 0-RTT packets. In the case of Retry, the 0-RTT packets from the first flight are declared lost (without any effect on the cwnd though) and retransmitted. Considering that Retry adds a roundtrip anyway (and thereby limits the usefulness of 0-RTT), and that we only expect it to be used by a server that's under attack (which makes it likely to refuse 0-RTT later in the handshake), does it really make sense to send 0-RTT in that case?\nNAME I am not sure if I agree with the observation. The transport draft states that a server MAY send Retry packets in response to Initial and 0-RTT packets. A server can either discard or buffer 0-RTT packets that it receives (). Notice that the server is allowed to \"buffer\" 0-RTT packets. I am not sure if I like this ambiguity (because I think most if not all of us will be dropping 0-RTT packets that carry the original DCID when sending a Retry), but IMO the draft is clear.\nNAME thanks for pointing that out. My point is that we should change the draft (i.e. this is a design issue, not an editorial issue). The reason a server (or a LB) performs a Retry is that it either has reasons to suspect that the client isn't reachable at the 2-tuple it sent the packet from, or that it is under load, and either uses the Retry to redirect traffic to a different backend server, or just to buy itself some time. Under both circumstances, a server wouldn't want to allocate additional state for buffering 0-RTT packets. I'm therefore suggesting to change the draft such that: A server that performs a Retry MUST discard 0-RTT packets. A client MUST NOT use 0-RTT after a Retry was performed. Does that make sense to you?\nYeah, I can envision a server that considers buffering some number of packets to be cheaper than doing a TLS handshake. For such servers, delaying the TLS handshake until the client proves its existence but buffering the 0-RTT packets in the meantime would make sense. That said, I'd be fine with this, assuming that the discarded 0-RTT packets are the ones that carry the original DCID. I would prefer not having this requirement. Once the client is known to exist, a server needs to communicate with the client. Use of 0-RTT helps reduce the server load, because when used, the average lifetime of connections becomes 1-RTT less. That leads us to having fewer concurrent connections.\nWhen our implementation does Retry, it's because we've hit a memory pressure limit and we want to protect ourselves from attack. If a client supplies us with a valid token, we will accept 0-RTT still, but we won't buffer it from the previous Initial (without a token). So, if the client gets a Retry and still wants to do 0-RTT, they need to retransmit it, with the new CID, in a new packet (with a new packet number). I don't see why that design should be prevented.\nOk, I see that there's value in doing 0-RTT after a Retry, and furthermore, there seem to be good reasons (prevention of tampering by middleboxes) to disallow 0-RTT after a HelloRetryRequest in TLS 1.3, as NAME pointed out OOB. 
Since we can assume that most (all?) server implementations will drop 0-RTT packets when sending a Retry, the only reasonable thing to do for a client is to treat all 0-RTT packets as lost and retransmit them. Considering that, it would make sense to require servers to drop all 0-RTT packets from the first flight, as NAME suggests.\nNAME Thank you for suggesting a path forward. I think my weak preference goes to something slightly different. I think we are in agreement that most if not all servers would not buffer the 0-RTT packets carrying the original DCIDs when they send Retrys. It makes sense to acknowledge that fact in the specification and suggest clients retransmit 0-RTT data they have already sent. What I am not sure about is whether there is a reason to forbid servers from buffering such 0-RTT packets. From the client's point of view, sending 0-RTT data once per connection is an optimization. Retransmitting 0-RTT data when the server sends a Retry is further optimization. If we think that way, servers that buffer 0-RTT packets when sending a Retry have an increased chance of utilizing 0-RTT data. Compared to that, there is no reason to mandate servers to drop 0-RTT packets. To summarize, I think all we need to clarify is that servers responding with Retrys are likely to discard 0-RTT packets that carry the original DCIDs and therefore that retransmitting 0-RTT data makes sense. But nothing more.\nWhat if the client takes action on the Retry, and the old 0-RTT state lingers - wouldn't that make servers more vulnerable to DDoS attacks from packet sniffers?\nThis also bears on . If the DCID doesn't change, then the server won't be able to differentiate the first flight of 0-RTT packets, which you're proposing to drop, from the second flight of 0-RTT packets which it accepts. The current text says that the server MAY buffer and process them later; you're saying that servers probably won't. I think that's fine, but some might -- hence, I favor NAME suggestion to retain the permission to buffer but the caution to the client that the packets were quite possibly lost/dropped. In terms of actual spec change, I think that means this text in : ...becomes a SHOULD instead of a MAY.\nIt seems like you have sorted this out. I have no problem with using \"SHOULD\" over \"MAY\" here. Note that the reason we have this distinction is the logical separation of the mechanisms of Retry and 0-RTT acceptance. Retry strictly precedes the connection attempt. No point in blocking 0-RTT if you have to Retry. HelloRetryRequest is different in that certain characteristics it might cause to reset affect whether the 0-RTT works. Retry is required to use the same ClientHello, so there is no need to have the two interact; you can treat a Retry almost as a loss event for the first Initial and everything works fine.\nSo, the effect of not changing the handshake message is that all previously-sent 0-RTT data is still valid, but has potentially/probably been lost rather than queued. Clients obviously MUST NOT reuse the packet number for different data, but in their not-resetting, they also can't change the data that they've already sent in those packets. They'll need to retransmit it, because otherwise there's a potential DoS attack by delaying/replaying a 0-RTT packet by one RTT against a server that does Retries.\nThis PR looks good to me as an editorial improvement. 
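"Treat a Retry almost as a loss event" can be sketched as follows (client-side, with hypothetical method names):

```python
# Hypothetical client-side handling: everything sent so far (Initial and
# 0-RTT) is requeued as if lost, but without any congestion penalty,
# since nothing was actually lost to congestion.
def on_retry(conn, retry_token: bytes, new_scid: bytes) -> None:
    conn.dcid = new_scid          # subsequent packets use the new DCID
    conn.token = retry_token      # echoed in the next Initial
    for packet in conn.sent_packets.drain():
        conn.requeue_frames(packet.frames)   # resent under new packet numbers
```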
Regarding NAME point, the transport draft explicitly states that a server sending a Retry can buffer 0-RTT packets that carried the original DCID, and process them after receiving a response to the Retry. To put it differently, I think we are clear on the semantics and that there are ways to implement this without having security concerns. That said, I understand that NAME has opened a design issue () to discuss if that is the design we want to have.\nLGTM"} {"_id":"q-en-quicwg-base-drafts-eb5cdc664e016390a11c855a5fcbc2bb97b952e06536574f187093b39f5c89d0","text":"This\nThe transport draft talks about QUIC being resilient to NAT rebindings. However, if a QUIC server is behind an L3 load balancer which simply routes based on 5-tuple, then connections to this server will (likely) not survive rebinding. I couldn't find any language which addressed whether such a deployment was \"OK\" or not. I think that since this load balancer does not support NAT binding resilience, it is implicitly \"bad\" according to the draft, but others might disagree. In any case, I think there should be text to address this. Note, if the server advertised a preferred_address which routed around the load balancer, and if clients were required to use this address, then that could obviously work. But preferred_address support is a SHOULD, not a MUST.\nWe did discuss this issue. Use of empty connection IDs and demultiplexing based on source addresses. The conclusion there was that this was OK: people could do that, as long as it was clear that what they were getting was no different than TCP. That is, we made it clear that if you don't use the identifiers that you control (your addresses, the connection ID), then you accept that you can't handle migration at all and connections will drop (well, unless you do something like trial decryption, which has some obvious scaling issues). The outcome of that discussion was captured in . Assuming that Ryan finds this answer satisfactory (and I haven't missed anything), then I suggest we just close this as a duplicate of .\nI don't think this captures Ryan's concern: when the server is using a non-zero CID and also is behind 4-tuple routing. I think he's looking for a section of text to coherently describe what you have to do here: send either disable_migration or preferred_address; or forward packets between servers; or either don't use a common stateless_reset key, or put the client address/port in the reset token. This is sort of ops-drafty, but there are real requirements on servers to behave in a secure way. I think someone with full command of the transport draft would figure this out, but it is not clearly stated anywhere.\nI agree with Martin Duke and the language he suggests makes sense to me.\nThis does feel a bit ops-drafty to me, but I'd be happy to review a PR.\nNAME notes that: We need to normatively reference 7540 for this capability.\nSee also discussion thread starting at URL Hi, RFC 7540 allows crumbling cookies for compression efficiency with HPACK. However, neither the quic-http nor quic-qpack drafts mention cookies or crumbling. Is it considered safe for a QPACK implementation to crumble cookies before compression and concatenate them after decompression? If so, should the draft include clarification on this? Thanks, Bence"} {"_id":"q-en-quicwg-base-drafts-42c7f21da9890ba7f12eaf77a24b11bc85981c7ba959231b296a890147967a78","text":"This just writes down what was proposed in .\nLooks fine, but what about some guidance on stream vs connection errors? 
And also the principle of nearly always choosing a hard protocol violation error, or a more granular error with the same effect?\nStream vs. connection is something that application protocols (like HTTP) need to worry about. The transport only concerns itself with connection errors. The idea that you pick the most helpful error is not something that we need to codify (in my view).\nIsn't that worth mentioning? Or do we already have such text?\nThis is already discussed in Section 11, which makes me wonder if it would make more sense for the text in this PR to be placed there too.\nThis I do not disagree with. I was merely suggesting that we explain why, as a general rule, a hard error is preferred as a design principle, not the nature of the error message, which I do think should have a large degree of freedom.\nMaybe there is a followup needed for the HTTP draft. If the principles are generally the same, we can copy the text and then add some more to cover the idea that we make most errors fatal to the connection rather than the stream.\nI like Lucas's suggestion of adding this text to the existing and already fairly complete text in Section 11.\nI moved the second paragraph up to the error handling section, but the text about how codes are selected really belongs with the codes themselves.\nWe don't really have any real principles that we agree on for deciding what error codes we are describing. Proposal: (1) if the error carries distinct semantics (like stream rejection in HTTP), then it gets a new error code; (2) if the error is frequent or particularly significant, then it gets a new error code; (3) otherwise, target a more generic error code that identifies the broad area of the protocol; (4) finally, if there is no more specific applicable error code, use PROTOCOL_VIOLATION.\nis an example of an issue discussed where it would have been useful to have principles."} {"_id":"q-en-quicwg-base-drafts-a61b94d229cf00a768023aee966e5f390de6754d99c538eee4fb914931bbfb10","text":"As discussed, this is a) hard to do in all cases, b) impossible to do in many, and c) not that useful.\ngood\nWhen multiple connections share the same network path or multiple connection IDs are used for the same path, using the spin bit is complicated by the difficulty in correlating forward and reverse flows. There is currently text that uses RFC 6919 language in the draft recommending coordination of the spin to avoid this problem. However, as , this coordination seems difficult and therefore unlikely. It also creates a strong signal that indicates that all the connections on the path are terminated at the same endpoint on both ends. Maybe we don't want to be generating that signal. Also, because this coordination can't be guaranteed, the on-path observer is forced to build code for correlating/segregating measurements by connection ID in the case where there is no coordination. Maybe it's best just to remove this recommendation entirely.\n+1 to removing the recommendation, or changing the statement to just a caution (e.g., if you are to coalesce connections that spin, then you'd better be aware of ...). FWIW I am not sure if I agree with this observation. IIUC, the connections sharing a 5-tuple will be terminated by the same endpoint on both ends; it is impossible for a middlebox to coalesce multiple QUIC connections. 
It is impossible because CIDs can change mid-connection, and a middlebox cannot determine where the packets containing the new CID must be sent.\nThere definitely needs to be a degree of coordination between the middlebox and the endpoints, but that doesn't mean that you can't use something like the quic-lb stuff on either end (or both ends).\nYes, I think it is better to remove that recommendation, but perhaps replace it with a note to observers."} {"_id":"q-en-quicwg-base-drafts-6bb501778b60b28a53f3c5d133f6f0460bb011593eaecf996d143660954898ae","text":"Use more uniform and more specific language in description of instructions, like \"follows\" instead of \"includes\" or implicit mention of a field. Add references to integer and string encodings where missing. Prefer internal references to external ones (also in Section 1.2). Explicitly allow a header block to not have any instructions after the mandatory prefix. Refer to Literal Header Field With Name Reference section in Literal Header Field Without Name Reference instead of verbatim repeating text on 'N' bit. Reshuffle sentences, insert references instead, remove sentences where references suffice.\nMostly good improvements; a few things to fix up further."} {"_id":"q-en-quicwg-base-drafts-6ae837c0cee281f1b985db913bc5da95c058a7626027f330e055a30e4d193118","text":"Is that not implied by the text here?\nKind of, but this text talks about the sender's perspective. Furthermore, checking for duplicates is inherently racy if you're using tokens for new connections while the old connection is still alive.\nYou know, I think that I put some proposed fixes for this in c016639e39c with rather than this branch. Moved it over here. NAME you approved , so I hope that moving that commit over here works for you.\nSec 13.2 of QUIC Transport explains what to do for the loss of a packet for every frame type except NEW_TOKEN. IMO it's nice but not critical to repair this loss, but I care less about how we resolve it than that we write it up.\nSomewhat related question: Does this section fit better in the recovery draft?\nI don't think so. This talks about the repair semantics of these frames, not the process of detecting loss. Recovery doesn't care about the distinction between PING and STREAM when it comes to repair.\nLGTM, but do we need some text saying that when receiving a NEW_TOKEN frame, you MUST make sure that it's not a duplicate? Otherwise, a client might just save the token twice, and effectively reuse the token."} {"_id":"q-en-quicwg-base-drafts-1606a556476e356806d2b90b1d1ff257d8ee11e7c17aff9790cbcbc47a265195","text":"The initial request was simply to add sub-headings. I still feel like Security Considerations is a bit of a hodge-podge, and would welcome concrete suggestions or PRs for tightening it up.\nCan you add a sub-heading (##) for each of these considerations, like in the transport draft? Originally posted by NAME in URL"} {"_id":"q-en-quicwg-base-drafts-170078558c54dcfaaf330a5e4cb533048f58a051bfeae4013b29a95022d8ea06","text":"I assume the intention was to forbid the same setting identifier occurring multiple times in a SETTINGS frame, regardless of the values. However, since a parameter is defined as an identifier-value pair, the current text does not forbid duplicate identifiers if the values are different.\nAn alternative below. 
But this is good either way. Looks fine to me, modulo the line length."} {"_id":"q-en-quicwg-base-drafts-0afb6212e6b7f62a6fbb77a07953f5c76d8ce1422d79c20ae10f07500617dd28","text":"The nonce can just be opaque bytes. The block counter is tricky, as noted in the issue. The \"obvious\" choice is little-endian, as that is consistent with the philosophy of the designer (as I understand it), but that is not anything more than a guess. Absent strong evidence that big-endian is a better choice, I'm going to err on the side of not making a substantive change here. But I think that we need a consensus call to support that viewpoint.\nHere's what 5.4.4 says. However, RFC 8439 says: This is a little confusing, but I think the correct way to read this is that the ChaCha function takes the nonce as a 96-bit opaque quantity, and the fact that it's little-endian is an internal detail irrelevant to our purposes, so we should just replace the second sentence with \"The remaining 12 bytes are used as the 96-bit ChaCha nonce\". This is reinforced by the fact that the test vectors have the nonce as a byte block. The situation with the counter is more complicated because we are told it is both a counter and 32 bits (which might make you think it's an integer parameter), but it is also treated as a little-endian integer, which makes one think it's a bitstring. And then the test vectors treat it as a value, rather than as a bitstring. If you look at code, it typically thinks of this as an integer; see, for instance: URL So, I think the right answer here is to think of ChaCha as having the following API: Note that because the value on the wire is in big-endian format, this would invert the nonce, but I think it's the right answer anyway. Otherwise, you're having to pass in the counter in the wrong format on big-endian platforms.\nAs a completely general note, I don't think it is right to change the endianness of crypto wire formats. AES-GCM has a completely backwards encoding that is neither little nor big endian (byte- and bit-swapped, so no platform wins); this remains so on the wire. Especially for HW processing this is significant. But little endian is also more performant on nearly all relevant platforms today, so there is no need to go overboard forcing big endian if it can be avoided without insulting RFCs.\nYeah, this should simplify that text some.\nI like the proposed change, the text had somewhat confused me when implementing this in a previous life.\nI don't think that the reinterpretation of the little-endian thing is safe, as per NAME's comment. The takes a byte buffer as input rather than a . That is to allow for different block counter sizes. I'll make the editorial change, but leave the suggestion there alone.\nI have a question out to the editor of the PKCS spec. It appears that there are no solid rules in the spec to support any particular position here. I'm going with my best guess at WWDJBD here, but I'm happy to make a change if there is a strong case made for something else.\nNAME NAME can we flip this to please? I don't think that we need to change the substance here, but I want to confirm the little-endian/big-endian choice here. I hope to have more information to share on the question soon, but that might not be definitive. We might just have to make a call ourselves. 
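For concreteness, here is a minimal sketch of the byte handling being debated, under the interpretation that the first 4 bytes of the header protection sample form a little-endian block counter and the remaining 12 bytes are an opaque nonce; this illustrates one reading of the text, not a definitive ruling on the open question.

```python
import struct

def chacha20_hp_inputs(sample: bytes):
    # sample: the 16-byte ciphertext sample taken for header protection.
    assert len(sample) == 16
    # First 4 bytes: the block counter, read as a little-endian integer.
    counter = struct.unpack("<I", sample[:4])[0]
    # Remaining 12 bytes: passed to ChaCha20 as the opaque 96-bit nonce,
    # with no endianness conversion performed by the caller.
    nonce = sample[4:16]
    return counter, nonce
```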
Absent better information, I'm going to suggest that we continue with little-endian, but I think this is far from an obvious conclusion.\nThe API I'm using for ChaCha20 takes a const uint8_t nonce[12], and I currently pass in a pointer (offset by 4 bytes) to the sample as the nonce, without doing any endianness conversions. I'm in favor of clarifying the language for this input. (Given that this has the design label instead of the editorial label, my design opinion is to keep with the simple thing of passing the bytes directly from the sample to the nonce input of ChaCha20 and not do any endianness conversions. I'm assuming that other crypto libraries have similar APIs though.)\nThe API I have takes . PKCS takes pointers just like the API NAME refers to (but it fails to specify the endianness, which is a critical oversight, compounded by the fact that the size of the counter can vary). I suspect that - as NAME says - the right idea is to note that ChaCha20 will interpret the nonce as three little-endian integers and that the counter might be expressed as a byte sequence that is then (probably) interpreted as a little-endian integer.\nI understand and appreciate this being marked as “design”, though I consider the text that landed in master as editorial, as it (in my view) ended up adding practical advice while retaining the definition we have had.\nYes, but as a design issue, we want to confirm that \"no action\" is the correct outcome. It's cheap enough to do so.\nI like the updated text. It reads well as it gives the practical advice first, then amends that with the formal definition."} {"_id":"q-en-quicwg-base-drafts-651eb63082a61fa46341734cc08b590f940d955c6b5fb8a0d6d018e1e91a22dd","text":"Invariants text was unclear on where the Destination Connection ID field was and how big it could be.\nSection 4.2 of the Invariants is not explicit that the Destination CID follows the first byte: \"A QUIC packet with a short header includes a Destination Connection ID. The short header does not include the Connection ID Lengths, Source Connection ID, or Version fields.\" As I see it, it just says that the destination CID is part of the short header, not where to find it.\nAlso, the length of the DCID in the short header is unclear. With the long header's restriction on both source and destination CID, I don't think the CID is of arbitrary length as stated in Section 4.3. I think it is 0 to 18 bytes long, as agreed by the connection.\nYes, DCID follows the first octet, and we should say that. As for the length, 0/4-18 is a restriction that we agreed applies only to long headers. You can - if the QUIC version allows it - negotiate other lengths for connection IDs using other methods.\nOk, the spec is in some sense clear that this can be of arbitrary length. However, that this is not required to be consistent with the long header is far from obvious and probably should be explicitly remarked. A realization I didn't have before about this design is that a given destination IP address and UDP port that deals with multiple QUIC connections will be forced to use one common DCID length across all QUIC connections using that port, otherwise it will have to use prefixes for DCIDs to be able to determine how long the actual DCID field is to enable decoding the header protection. 
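To illustrate the point just made: a receiver can only locate the rest of a short header if it already knows the DCID length it issued for that port. A minimal sketch of such parsing, with illustrative names:

```python
def parse_short_header(packet: bytes, local_dcid_len: int):
    # Short header (invariants): a first byte, then the DCID with no
    # explicit length field. The receiver must know local_dcid_len out
    # of band (e.g., one common DCID length per UDP port).
    first_byte = packet[0]
    assert first_byte & 0x80 == 0, "not a short header"
    dcid = packet[1 : 1 + local_dcid_len]
    remainder = packet[1 + local_dcid_len :]  # protected fields follow
    return dcid, remainder
```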
It might have been a mistake to not include a DCID length field.\nThe endpoint can decide on its own DCID length encoding, so that is fine.\nOne editorial suggestion, but LG"} {"_id":"q-en-quicwg-base-drafts-8c8484a600c934974a5094227eeee22fa6866c4336730aa0aa60476b0ae8bc36","text":"This changes HTTP_UNEXPECTED_FRAME, which is used only once, to HTTP_FRAME_UNEXPECTED, which is used multiple times.\nDuring the London interim, it was noted that error codes refer to things being \"malformed,\" \"bad,\" \"invalid,\" and \"unexpected.\" Pick one and use it consistently, or establish precise definitions for each. This applies to both the current document and .\nThe three words are not equivalent: while both \"malformed\" and \"invalid\" are \"bad,\" \"malformed\" is not the same as \"invalid.\"\nMinor correction here, \"bad\" is only used in . The only instance of \"bad\" in the current document is a substring of the example \"0x1abadaba\". I'll fix the PR before it lands, whatever order we end up doing things in.\nThe different value-judgement terms are corrected by the flurry of settings PRs that NAME already did. HTTP_FRAME_ERROR has replaced malformed, and UNEXPECTED_FRAME and WRONG_STREAM are reasonably descriptive (though see , ). We do still have some variation between HTTP_THING_DESCRIPTOR, HTTP_THING_ERROR, and HTTP_DESCRIPTOR_THING (e.g. HTTP_UNEXPECTED_FRAME versus HTTP_REQUEST_REJECTED) that might be worth reconciling. Is it worth turning into HTTP_FRAME_UNEXPECTED?\nResolved by and something prior to it.\noops, thanks"} {"_id":"q-en-quicwg-base-drafts-02caa24f2e5aafcdac85492415c17098cb880e5caaeaebcad8ac24bcc1506fdf","text":"Fixing\nchanged the rules regarding what kind of packet an endpoint is allowed to send after receiving a non-ack-eliciting packet. The intention of that text used to be to prevent peers from getting stuck in an ACK-loop, but now the text says: Among other things, this means that once an endpoint receives a non-ack-eliciting packet, it's not allowed to send a packet containing only PADDING. This would be valuable for defeating traffic analysis. While it would be possible to hack your way out of this by sending a PING + PADDING packet, I'm not sure if this was the intention of .\nTo me, the key phrase here is 'in response to'. If you want to send PADDING for some other reason and the congestion controller allows it, then you can send it.\nThis feels editorial...\nI don't intend to do anything about this one. As Ian notes, you can use other reasons to justify sending of anything, just don't justify the sending on the basis of having received a non-ack-eliciting packet or you will get into the sort of trouble that doesn't end.\nI think the text is clear that sending PADDING is allowed at any time. I'm closing this with no action.\nquic-transport draft 13.2.1: An endpoint MUST NOT send a packet containing only an ACK frame in response to a non-ACK-eliciting packet (one containing only ACK and/or PADDING frames), NAME points out that this allows one to send an ACK+PADDING packet, which seems to violate the spirit of the rule. This loophole breaks . I believe this may be Editorial if we all know what we meant.\nYes, it was intended that ACK should not elicit an ACK+PADDING packet.\nI agree with this change, but I disagree that this is a satisfactory fix for .\nThis issue should have been\nLGTM. 
Though all the \"non-ACK-eliciting packets\" is a bit of a mouthful."} {"_id":"q-en-quicwg-base-drafts-fdb9eed3abac2683d63323191fd673910b2939f42da9659ad203d89151bc4437","text":"Came from the discussion of\nNAME : I realized after we did this PR yesterday that this does not. It reduces burstiness in general, which is still a good thing, but it doesn't change the slow start increase. I'll send a PR or ask Vidhi if she wants to.\nWhen an ACK arrives that acknowledges N packets (where N could be very large), QUIC will increase the congestion window by more than 1 MSS per congestion window during Congestion Avoidance, causing a bursty send. This could also affect the slow start.

    if (congestionwindow < ssthresh):
        // Slow start.
        congestionwindow += URL
    else:
        // Congestion avoidance.
        congestionwindow += kMaxDatagramSize * URL / congestionwindow

One could argue that QUIC ACKs every other packet, but that ACK packet could get lost, or the ACK for the ACK packet could get lost. The draft is probably assuming packet pacing, but not all implementations support that. I discussed this with Jana and he thinks that we should add something in pseudocode or some specific text in the draft.\nPR added more specific text on avoiding bursts into the network, FYI.\nPlease note that PR addresses the issue of app-limited or pacing-limited cases so as to not increase cwnd when it is underutilized. My concern is cwnd being inflated due to an ACK frame containing ACKs for a large number of packets.\nNAME : The problem here is that we don't talk about burstiness that can occur when pacing is not in place, and a stretch ACK is received. PRR handles this, but we don't do PRR in the draft.\nDo we need to spend lots of time on Reno as everybody appears to be doing Cubic or BBR?\nCubic has this problem. BBR cannot be assumed.\nExcept Cubic opens the window based on time spent, not just ACKs received\nI don't get the scenario. It seems that NAME is worried about receiving an ACK for a large number of packets. This is not a congestion window problem, because it happens even if the congestion window size remains constant. The root cause is that the ACK of N packets now reduces the number of packets in flight by N. So this is strictly about pacing, not about congestion control.\nNAME yes, and it can and still does burst because of stretch acks\nNot just stretch ACKs -- deliberate ACK compression by the network happens too. Very few alternatives to pacing.\nURL describes\nDiscussed in Cupertino. NAME to prepare a PR that hopefully can incorporate some ideas from RFC3465 to add safety features for non-pacing stacks.\nBut 3465 does not solve the burst problem. It limits the growth of the window, but does not do pacing of any kind.\nI think we can change to something like this\nNAME I think we need to do 2 MSS clamping per ACK received and not per packet acked. I am thinking that we can remove OnPacketAckedCC and instead do OnAckReceivedCC. Pseudocode change something like this:\nI think we can keep but just move the cwnd update to as you propose.\nYup, that's an important missing fix, though it needs to be applied only when not pacing. Additionally, helps with burstiness in general. NAME : I would make a variable, and apply the min here when not pacing.\nThe proposed change has the effect of reducing the effective window if ACKs are compressed. Say you have a window of 12 packets, ACKs are delayed, and a single ACK acknowledges 12 packets. 
Now you have a window of 2 packets, your transmission rate is 6 times slower than before, and it will take you 10 RTT to recover. Is that really what you want?\nNAME This is what ABC (RFC 3465) does for TCP. ACK compression has side-effects, and the way to deal with it is, as you say, to pace packets out. This reduction in performance occurs only when the endpoint does not pace, and it is the only way to limit bursts in that case.\nAlso, ACK compression does not necessarily imply ACK pruning. I have traces in which 1 RTT worth of ACKs is compressed and delivered back to back. In that case, the proposed algorithm will do nothing: you will get 2 MSS added for each ACK, back to back within less than 1 ms.\nFYI. FreeBSD implements rfc3465 (enabled by default) URL Linux removed its tcp_abc implementation but says it does this in a different way: URL Considering FreeBSD doesn't have pacing by default, I think the currently proposed algorithm is not worse (or more aggressive) than what the TCP implementations do.\nNAME I am working on the PR for pseudocode changes.\nI agree that when not pacing, using an ABC limit is appropriate. FWIW Windows TCP was using a limit of 4 and not 2 for a long time and now uses 8, which is pretty close to an IW10 burst in the slow start phase.\nFor the WG: The proposed PR attempts to include a mechanism to limit CWND increase during slow start when not pacing. I'll note that by itself, it does not guarantee we follow the existing MUST: \"Implementations MUST either use pacing or limit such bursts to the initial congestion window, which is recommended to be the minimum of 10 * max_datagram_size and max(2 * max_datagram_size, 14720), where max_datagram_size is the current maximum size of a datagram for the connection, not including UDP or IP overhead.\" I don't think we should add this to the pseudocode, because we say SHOULD pace packets and the pseudocode currently only covers SHOULDs, not MAYs. Adding all MAYs to the pseudocode is too complex. I don't have a concern about adding a reference to the existing RFC as a mechanism that MAY help limit bursts during slow start. I'd like a decision from the WG on whether to include this change and if so, some criteria on what should be included going forward.\nI agree with the comment that ACK compression or stretch ACKs result in bursts. I think the way to change that is to implement a maximum burst size or use pacing. The bursts do not appear to be a CC function to me.\nBringing conversation in from the list: With regards to , one of the things we found in testing was that it could (a) still allow an implementation to be vulnerable to too much bursting (the original problem), but more importantly (b) it locked receivers into an ack pattern of acking every other packet. My main concern is that this causes problems if receivers want to experiment with and change acking strategies going forward in a way that otherwise would be fine. One way to look at the problem described by this issue is that the existing text has a minor underspecification, in which it has the notion of a burst limit but says nothing about the frequency of such bursts. Two \"bursts\" of 10 MSS that are sent within a couple of nanoseconds are really one burst that's too big. Another way of fixing this is to remove even the specification of the 10 MSS limit if we really don't want too much text about the non-pacing cases. 
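To make the clamping proposal in this thread concrete, here is a minimal sketch of a per-ACK window update in the style of the recovery draft's pseudocode; the constant names and the limit of 2 are illustrative (RFC 3465 uses L=2 during slow start), and the clamp is only meant for senders that do not pace.

```python
kMaxDatagramSize = 1200
kAbcLimit = 2  # per-ACK credit limit, mirroring RFC 3465's L=2

def on_ack_received_cc(acked_bytes, congestion_window, ssthresh, pacing):
    if not pacing:
        # A stretch ACK acknowledging many packets at once may only open
        # the window by a bounded amount, limiting the resulting burst.
        acked_bytes = min(acked_bytes, kAbcLimit * kMaxDatagramSize)
    if congestion_window < ssthresh:
        congestion_window += acked_bytes  # slow start
    else:
        # congestion avoidance
        congestion_window += kMaxDatagramSize * acked_bytes // congestion_window
    return congestion_window
```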
Essentially, what this comes down to is: pace, or even if you don't pace, you need to not send bursts above some frequency; and if you don't set some pacing timers, you'll likely underutilize the link.\nI'm going to put this out there: the answer is MUST pace. You might formalize this by saying the number of packets sent in any interval cannot exceed by more than the initial cwnd (i.e., the 10 MSS). That allows for bursting, but requires a period of quiet either preceding or following any burst.\nNAME Yes, it does largely reduce to \"implementations MUST pace\" (even if the pacing algorithm isn't very nuanced, in which case you'll perform poorly). The formalization of the rate, one option for which was specified in , I think is the heart of what should be done here.\nThere are two routes that we can take: (1) specify clearly what 'delay' refers to in the below statement, which is done in ; OR (2) change MUST pace to SHOULD, remove the 10 MSS burst size limit, and provide a general recommendation if an implementation doesn't pace. Something like:\nI'm sympathetic to saying MUST pace, and I like the formalization that NAME suggests. I've heard some pushback on saying MUST pace in the past, but it might be worth giving it a shot again. NAME NAME NAME : What do you think?\nThat said... I will note that NAME's formalization allows for a burst at the end of an RTT. Consider for example when a single ACK is received that acknowledges an entire window. That would be legal by this definition. I think that's ok.\nDiscussed in ZRH. Proposed resolution is to define and REQUIRE pacing.\nNAME I think requiring pacing makes good sense, with some formalization of what that means.\nTo clarify the status here, my understanding is that the proposal is to close this design issue with no action. The discussions have identified editorial improvements that will be addressed independently.\nLet's continue the editorial discussion on .\nClosing this issue and continuing discussion on\nThis needs to run through the consensus call procedure, so we'll keep it open for another week.\nLet's get this in -- it addresses some burstiness anyway. LGTM"} {"_id":"q-en-quicwg-base-drafts-b1455ede8347f90daf53b6a7e5923f04c156e61012c78ba5a47e4cede9f4db81","text":"If the Retry token is known to be invalid by the server, then the server can close the connection with INVALID_TOKEN instead of waiting for a timeout. Came out of discussion of Issue Related to\nYou need to update the \"Transport Error Codes\" section too.\nThanks NAME I did that in a subsequent update. Refresh and tell me if it looks correct.\nNAME you're missing the section that gives some text to each error code .\nNAME do you have a suggestion about what the client should do? Clearly it could start a new connection, but it wouldn't want to do that an unlimited number of times, and it may depend upon whether a TCP fallback is available. Given that, it wasn't clear I should recommend something. And I agree making them distinguishable should be a MUST, as stated in\nI think that the best thing we can say is \"the connection attempt failed\". The first version of the PR strongly suggested that making another connection attempt was the right thing to do, but that gets into lots of difficult questions about identifying why. Given that this is an error that only happens if a Retry is spoofed, then I think that all we can say is: !\nWhen the server receives a token that is corrupted, it may want to close the connection quickly. 
This issue suggests adding a new error code to indicate the token is corrupted, INVALID_TOKEN.\nThis is - I believe - a design issue. The PR seems to have approval; we just need to run the appropriate process.\nWe currently say that these tokens SHOULD be distinguishable by a server on the basis that an error in a NEW_TOKEN token is recoverable, but an error in a Retry token is fatal. I don't see why this can't be MUST. If we make it \"MUST\", then we can rely on this property in the design.\nAgreed, that'd be a useful property to rely on.\nDoes that property need to hold if there are bit flips in the token? If you’re encrypting the token, this would be easy to ensure: put the distinguishing bit under crypto cover. If you’re using a map lookup, you’ll probably have to work a little bit harder.\nPlease ignore my last post, I mixed up Retry packets and Initials. Tokens in the client’s Initial are part of the AAD, so we don’t have to worry about bit flips.\nDepending on how the server distinguishes them, a change of method / confusion about what server you're sending it to could make you appear to be presenting a Retry token instead of a regular token. This error code enables the client to recover and retry quickly, but there's no language about what the client should do when it receives the code. Rereading the changes, I have the following editorial suggestion."} {"_id":"q-en-quicwg-base-drafts-7dc2d66bcd63e53cfbba19381557a64f976219e6996f74bc13cba5d0bae74148","text":"This was inspired by the comment by afrind at URL For each representation, uniformly use the word \"representation\" in the first sentence; use the word \"identifies\" if it's a verbatim entry; \"encodes\" otherwise; identify relative versus post-base indexing in the first sentence; move drawing right after first sentence; use consistent language describing binary format."} {"_id":"q-en-quicwg-base-drafts-4c7395a421428f4b374a3483f146b8d08b3f76c0a02ad7157b013da6ef0000bd","text":"In , we say that one of the required operations on a connection is \"enable 0-RTT\". NAME : The current text implies that implementations are required to support 0-RTT, which is not the case. It would be more correct to say that applications must be able to prevent early data from being offered on a connection, even if the implementation supports it.\nAlternately, the server section prefaces certain bullets with \"If Early Data is supported, ....\" That formulation might be used here as well. Also, this section uses \"0-RTT\" in the client piece but \"Early Data\" in the server piece. Should be consistent."} {"_id":"q-en-quicwg-base-drafts-ff8468e1c609b252700b2f091977e9c747422ec199383beb18f47b85c87197fa","text":"It is permissible for a client to do 0-RTT resumption with a NST from one connection and a token from another. This clarifies that a client only needs to associate a token (from a NEW_TOKEN frame) with the server it was from and no additional state, and that a server can't require that a token be from the same connection as the NST in use.\nThe token is used for deciding whether the server believes the packet it's receiving actually comes from the address it says it's coming from. I could believe a server might trust a client only if it's doing a 1-RTT handshake, but not trust it if it's doing a 0-RTT handshake, so might encode that bit in the token. I don't think the server should be putting more specific bits like \"this is the NST that should be used\" or \"here are the SETTINGS from the previous connection\" in the token. 
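As an illustration of "put the distinguishing bit under crypto cover", here is a hypothetical token layout; the field layout is an assumption for the sketch, not a prescribed format, and the aead object is assumed to expose an AESGCM-style encrypt(nonce, data, aad)/decrypt(nonce, data, aad) interface.

```python
import os

TOKEN_RETRY, TOKEN_NEW_TOKEN = 0x01, 0x02  # hypothetical type tags

def make_token(aead, token_type, client_ip, issued_at):
    # The type byte sits inside the AEAD plaintext, so corruption cannot
    # silently turn a NEW_TOKEN token into a Retry token (or vice versa).
    plaintext = bytes([token_type]) + client_ip + issued_at.to_bytes(8, "big")
    nonce = os.urandom(12)
    return nonce + aead.encrypt(nonce, plaintext, b"")

def check_token(aead, token, expected_type):
    try:
        plaintext = aead.decrypt(token[:12], token[12:], b"")
    except Exception:
        return None  # for a Retry token, close with INVALID_TOKEN
    return plaintext[1:] if plaintext[0] == expected_type else None
```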
If the server doesn't like a token it receives with a 0-RTT handshake, it can send a Retry packet but still continue with the 0-RTT crypto handshake (even though it's now 1-RTT from a transport perspective). I don't see the need for there to be fate sharing between accepting a token and accepting 0-RTT - a server can reject one without rejecting the other. I've created to express the problem statement as an issue.\nIf you're going to continue with 0-RTT, but not trust the token, why not just limit the amount sent by the amplification factor? The receipt of a Handshake packet constitutes path validation. Also, a Retry doesn't allow you to acknowledge the client's Initial or any 0-RTT packets.\nFor a 0-RTT resumption in H3, a client needs 4 pieces of state previously received from a server: (1) a NewSessionTicket, (2) Transport Parameters, (3) SETTINGS, and (4) a token (from a NEW_TOKEN frame). The last isn't strictly necessary, but without it the server is likely to burn a round trip with a Retry packet to prove the client controls the source address. On a given connection, there is only one server SETTINGS and Transport Parameters, though there may be multiple NewSessionTickets and tokens. The first 3 pieces of state are necessarily required to have come from the same connection. A NST (from any connection) is needed since without it resumption can't happen. Transport Parameters and SETTINGS must come from somewhere for 0-RTT to be useful; the obvious and currently specified place is the SETTINGS and Transport Parameters that were sent on the same connection the NST came from. There's no such need that the 4th piece of state come from the same connection as the others. The token is used to prove that the client controls the address the packet is purportedly coming from. This can be done separately from any crypto state (in the NST), application state (SETTINGS), or the rest of the transport state (Transport Parameters). For a client to store the state needed for 0-RTT resumption, one possible implementation is to create cache entries containing an NST, TP, and SETTINGS. It's easy to do that where the TP and SETTINGS get duplicated N times, assuming N NSTs were received on a connection. If M tokens were received on that connection, and tokens need to be added to those cache entries, either the M tokens need to be paired to the N NSTs somehow, or all M tokens are in all N cache entries. This introduces noticeable complexity to a client's session cache implementation to correlate unrelated things (a sketch of a decoupled cache follows below). It should be clarified that the client doesn't need to associate the token with the rest of the state needed for 0-RTT. has suggested wording for that clarification.\nAs pointed out by NAME in URL, the proposed change essentially forbids a server from storing SETTINGS in the token. I think I'm persuaded by NAME's argument above that we'd better forbid such coordination, as it would be a burden to certain client implementations. OTOH, I think that untying the relationship between session tickets and tokens does open a new issue, see .\nI have some editorial comments, but the design looks fine to me. 
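The decoupled client cache described above might look like the following, where tokens are keyed only by server and never paired with a specific ticket; all names are illustrative.

```python
from collections import defaultdict, deque

class ServerState:
    def __init__(self):
        self.sessions = deque()  # (ticket, transport_params, settings)
        self.tokens = deque()    # address-validation tokens, any connection

cache = defaultdict(ServerState)

def on_session_ticket(origin, ticket, transport_params, settings):
    cache[origin].sessions.append((ticket, transport_params, settings))

def on_new_token(origin, token):
    cache[origin].tokens.append(token)

def state_for_resumption(origin):
    state = cache[origin]
    session = state.sessions.popleft() if state.sessions else None
    token = state.tokens.popleft() if state.tokens else None
    return session, token  # any stored token may accompany any ticket
```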
While we might embed some information related to 0-RTT configuration in QUIC tokens, they would be decryptable and usable, so we do not think there would be a problem for us (thanks to NAME for checking). A couple of editorial suggestions."} {"_id":"q-en-quicwg-base-drafts-15ddd71e216a34974e9ab57960579ecb7b306c902135b50a9ca4e54029ca6a5f","text":", requires transport stacks to support abrupt closure of a stream even after the stream has been cleanly closed. This reflects the possibility of transitioning from \"Data Sent\" (stream has ended) to \"Reset Sent\" by sending a RESET_STREAM. Based on the state diagram in the transport doc, if an implementation was already in the \"Data Recvd\" state, it wouldn't actually send a RESET_STREAM, as it's already in a terminal state; a reset call would be a no-op or an error of some kind. However, NAME points out that in a handle-based API, this essentially means that handles for closed streams have to be recognizable as such for the lifetime of the connection. This might be burdensome for some implementations. Should we reword the requirement here to say that it's permitted if the stream hasn't yet reached the \"Data Recvd\" state?\nThis was clearly just a miss on the wording. I don't think that the intent was ever to require that streams be kept around indefinitely. Sure, some stacks might be able to send RESET_STREAM at a whim, but it's not necessary"} {"_id":"q-en-quicwg-base-drafts-5e5293d1d4477e9ecc4c702a349dea02ac9a44fd9db808c802bc60586ec6afc9","text":"I didn't notice this change in NAME's PR , but I think the older text is more correct. I added a bit more detail about why it cannot be treated as an acknowledgement. URL\nNAME I updated the text to reflect both the lack of processing and packet number."} {"_id":"q-en-quicwg-base-drafts-4f027ff651f26f8241c6a05e4337c4bfa84f5609999e54da3b1a2fb2864fbb7f","text":"I was trying to come up with text for this but this PR looks great.\nI believe there's an extra \"is\".\nCan you write a PR? Removing either 'is' doesn't read well to me, but I suspect a sentence rewrite could be an improvement. i.e.: \"A loss epoch ends when a packet sent after loss is first declared is acknowledged.\"?\nI agree a rewrite would be better."} {"_id":"q-en-quicwg-base-drafts-b83e18fc43c057e6a230f9a6ae908d30ada402a58456aab7711eee8319a2b962","text":"Lest some readers get the impression that each request consumes a single QUIC stream and each response consumes a single QUIC stream."} {"_id":"q-en-quicwg-base-drafts-64dd189a51c0d40dc290361a49e73409daced9f4a2261d88f86956b849a6adc6","text":"From Gorry's review.\nGorry suggested that the QUIC recovery draft should specifically recommend against packet reordering thresholds less than 3 because it will not be sufficiently tolerant of real-world reordering. Pull request already created.\nFWIW, I think this is likely design because it adds a SHOULD NOT, but it also doesn't substantively change the existing recommendation."} {"_id":"q-en-quicwg-base-drafts-713f3c70d7284e43faef8e67f26c6ec37fdf3700c59a2784bbd98f1044480234","text":"This turned out to be fairly simple, but I think that it's worth writing down. I don't have concrete recommendations, but I could add them if people thought that they help. See the commented-out bit.\nThanks both for the reviews, I've taken some of the suggestions and moved this to the appendix.\nAfter discussing with editors, we agreed that this was a reasonable interpretation of existing text. 
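A compact sketch of the kind of per-path ECN validation state machine this PR and the discussion below are about; the state names and the testing threshold are illustrative, not normative.

```python
TESTING, UNKNOWN, CAPABLE, FAILED = "testing", "unknown", "capable", "failed"

class PathEcnState:
    def __init__(self, test_packets=10):
        self.state = TESTING
        self.remaining_test_packets = test_packets

    def on_packet_sent(self):
        if self.state == TESTING:
            self.remaining_test_packets -= 1
            if self.remaining_test_packets <= 0:
                self.state = UNKNOWN  # stop marking until validated

    def should_mark_ect0(self):
        return self.state in (TESTING, CAPABLE)

    def on_ack(self, ecn_counts_valid):
        if not ecn_counts_valid:
            self.state = FAILED
        elif self.state in (TESTING, UNKNOWN):
            self.state = CAPABLE
```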
We do agree that it would be worth discussing Mirja's views on ECN reliability and possibly changing the algorithm to reflect that, but it would be a design change.\nIn , NAME suggested that a different algorithm might be better: recommend enabling ECN and only disabling it if a failure is detected. That algorithm adopts the same posture as the draft: that ECN is optional and that endpoints are permitted to be cautious in enabling it. If we want to adopt the view that ECN works and endpoints are required to use it unless it breaks, that's a bigger change. I open this issue, while being aware that this is re-opening something we discussed and agreed previously, without there being new information to present. Chairs, if you do think we should discuss this, it's a design issue.\nI would not be in favor of reopening this.\nJust to clarify, the proposal was that if you decide to enable ECN, you should just do it and only disable it if you notice a failure, instead of having the more complicated probing logic that we have right now (in the appendix, I think).\nThe shows that out of 14 implementations doing active tests, only 3 demonstrate ECN support. No ill will, but major places like AWS or the Windows OS do not support ECN, so it is not clear what more text in the spec would change.\nThis is just about this one paragraph: The concrete proposal would be to remove this paragraph.\nDiscussed in ZRH. Proposed resolution is to close with no action.\nI will make a proposal for an editorial PR for the paragraph mentioned above to make it sound less scary that this risk exists.\nECN-capable is not a constant condition, but we don't have any algorithm that governs it. I don't think that we can just say \"mark until validation of ECN counts fails\". Maybe the following algorithm is OK: (1) when starting a new path, set ECN state to \"testing\"; (2) after N packets or M round trips, set ECN state to \"unknown\"; (3) on every ACK, check ECN counts: if ECN validation fails, set ECN state to \"failed\"; if ECN validation passes and ECN state is \"uncertain\", set it to \"capable\"; (4) when sending packets, mark with ECT(0) if state is \"testing\" or \"capable\". (I know that this is unchanged text, but I just realized that we could be clearer.) Gorry pointed out that we can add an extra thing: if a path is marked as \"failed\", you MAY choose to occasionally switch back to \"testing\". Mirja pointed out that we need to deal with bleaching more than we need to worry about black holes. We should negotiate that. Originally posted by NAME in URL\nI would also suggest one more intro sentence could please some, to say \"When a path supports ECN, this enables ECN marking and uses ECN for a new path\" - to effectively say that the validation is about how to detect and react to problems with a path that is not ECN capable.\nDiscussed in Cupertino.\nPresumably this is just another section in Section 13.4? That would be useful."} {"_id":"q-en-quicwg-base-drafts-1f1ed2ed65111395c2b3a171b2b38f4656cc3819d2bf1bbd910467daee34410d","text":"Note also the editorial changes in , which are great and still needed: this adds a little bit more to the reference from those changes to do the key clarifying for . The remaining clarification about is coming in the SPA PRs, but I wanted to look at this paragraph in one place.\nSection 12.2 makes two quasi-conflicting statements about coalesced packets. 
In a connection where the sender has multiple CIDs in hand, using two different CIDs in the two coalesced packets is entirely consistent with the guidance to the sender, but would be dropped based on the guidance to the receiver. We should align these two statements.\nFixing this would require a change to normative language. I'm of the opinion that the first should change to different connection IDs rather than connections; the second might change to a MUST NOT, but it could remain a SHOULD and achieve the same end. The main reason for this is that routing will be based on the first packet. But there is also a secondary benefit in not creating a strong correlation between different connection IDs for the same connection.\nBut this complicates the coalescing logic, because you no longer only need to look at packet types but now also CID fields. At least in my stack, that is an ugly change to make at this stage.\nThe alternative, which I can now see is probably a smaller change, is:\nWhat motivated this was two observations: (1) we don't coalesce except during the handshake, where we are not migrating; (2) some stacks generate packets for coalescing while processing, and so they might generate a datagram with multiple connection IDs\nNAME writes: This is a much simpler alternative than what is being proposed. Now an implementation can only drop a packet after doing a connection lookup, whereas before it could just compare the CID to the previous value. If this is just a matter of personal preference, I vote for this approach, so now the score is 1:1 (with NAME being on the opposite side). To take a step back: why does the recommendation to drop packets exist in the first place? If a sender wants to coalesce packets from different connections, why not let it?\nI think we should not allow having multiple packets for different connections in a single UDP datagram - please. Currently I see implementations (including mine) have something like: Receive UDP packet. Parse packet header. Check CID, find the connection handler. Pass the packet to the respective connection (thread) to handle the packet. I think it makes implementations much more complicated (already). Also, I wonder whether it may cause a security issue given a bad assumption?\nThe classic example would be coalescing Handshake packets, whose DCID is the SCID used by the server, and Initial or Zero-RTT packets, whose DCID is the client-chosen Initial DCID. The client MAY switch to using the server-chosen CID for Initial packets. Maybe we should say that it MUST do that if it is going to coalesce Initial and Handshake packets. Not so clear about 0-RTT packets. Is it actually allowed to send them with a different DCID than the Initial DCID?\nNAME : Our assumption has been that load balancers, for instance, can simply route on the CID in the first header they see in the UDP datagram. If we allow different connections to be coalesced, that complicates things. It may also start to leak some things about the way load is distributed within the server network: if some CIDs when combined cause loss and some don't, that reveals some things about how the server load is distributed. Is there a strong argument for changing this behavior? This new language in was always the intent. I suspect it didn't change when we started allowing multiple CIDs within a connection. 
Unless there's an argument for changing behavior, this is really an editorial PR.\nNAME If I understand correctly, every time you change the connection ID, you change the connection ID for all packet number spaces. Assuming that my reading of the spec is correct in this regard, it seems like there's no compelling reason for making this change?\nNAME the spec actually says (7.2): The first flight of 0-RTT packets use the same Destination Connection ID and Source Connection ID values as the client's first Initial packet. Upon first receiving an Initial or Retry packet from the server, the client uses the Source Connection ID supplied by the server as the Destination Connection ID for subsequent packets, including any 0-RTT packets. So you are correct, there is no ambiguity.\nNAME Thank you for looking this up. Then I'm opposed to making any change here. Not only is it unnecessary (that alone should be enough), it would also make it more difficult to evolve the way we deal with coalesced packets in the future: I can imagine a future QUIC version introducing a way to authenticate the entire coalesced packet, while at the same time compressing the header. This will be easier if we keep the explicit property that all connection IDs of the coalesced packets match.\nNAME I'm not sure if I agree. Coalescing of QUIC packets is a feature specific to v1; entities that operate without the knowledge of the connection (e.g., load balancers, datagram dispatchers) are not expected to consult the CID values of QUIC packets that are appended. Therefore, what we are talking about here does not affect what we can do in the future.\nThat argument was based on the thought that it will be easier in a future QUIC version because the diff to v1 would be smaller. My main point is that this change is unnecessary because when endpoints change connection IDs, they change it for all packet types at the same time.\nNAME Thank you for the clarification. I do not agree with this either. IIUC, the consensus we reached with was that the CID of a Handshake packet and that of the 1-RTT packet can be different. While it is true that the initial DCID set by the client is expected to change when the client receives the first \"active\" connection ID supplied by the server, the fact does not mean that the CIDs of all QUIC packets within a datagram have to be the same in all the cases. 
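A sketch of the receiver-side check being debated: split a datagram into coalesced packets and stop at the first DCID mismatch. The long-header offsets follow the v1 invariants (flags byte, 4-byte version, 1-byte DCID length); parse_packet_length is a hypothetical helper standing in for reading the Length field and locating the packet's end.

```python
def split_coalesced(datagram: bytes, local_dcid_len: int):
    packets, first_dcid, offset = [], None, 0
    while offset < len(datagram):
        if datagram[offset] & 0x80:  # long header: explicit DCID length
            dcid_len = datagram[offset + 5]
            dcid = datagram[offset + 6 : offset + 6 + dcid_len]
            end = parse_packet_length(datagram, offset)  # hypothetical helper
        else:  # short header: length known locally; always the last packet
            dcid = datagram[offset + 1 : offset + 1 + local_dcid_len]
            end = len(datagram)
        if first_dcid is None:
            first_dcid = dcid
        elif dcid != first_dcid:
            break  # drop packets whose DCID differs from the first one
        packets.append(datagram[offset:end])
        offset = end
    return packets
```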
There will be a brief window when they still need to send both some handshake packets and some 1RTT packets. But coalescing a Handshake packet with the old CID and a 1RTT packet with the new CID links old and new CID and thus negates the effort. And just in case someone asks, no, you don't really need to decrypt the packets to access the second CID. You just need the coalesced packet and another 1RTT packet. Not hard.\nNAME If that is the concern, wouldn't it be better for endpoints to migrate to a new CID after learning that the peer has dropped the Handshake keys? If an endpoint switches to the new CID before the handshake keys are dropped by the peer, there's chance that the peer might use a new CID in the Handshake packets that the peer sends. That at least leaks when the connection was established for that CID.\nNAME yes, the simplest way to not run into coalescing problems is to not start migrations or CID changes before Handshake Done.\nApologies, I'm getting lost in the weeds here. I agree with NAME that we need to make a change, and that is really meant to fix the text to what the intent always was. NAME NAME NAME Are we all agreeing on this point?\nNAME That is my view. I think to be an editorial amendment of . Regarding linkability problem, the simple and safe way of mitigating the problem (as proposed in URL) does not interfere with the proposed resolution.\nNAME I personally prefer the old version, force single CID in coalesced packet. Loosening the text to allow multiple CID requires a code change in my implementation.\nI'm with NAME Single CID.\nBoth, actually. This occurs during the handshake, when addresses are required to be stable. Unless you're running a large number of parallel connections, all CIDs used on that 4-tuple during the handshake belong to the same connection with very high probability. You can break that linkage by deliberately changing CID and port at the same time, but you can't do that until the handshake is confirmed. If you want the linkage broken, you need to do that jump post-handshake anyway.\nHi Mike, I think \"high probability\" is probably app-dependent, because there may be cases where there are significant numbers of parallel connections. In the case of proxied queries, for example, the use of the same 4-tuple but a different CID could turn out to be common, because the proxy may prefer to keep traffic from its different clients separate even if they could be served by the same upstream server. A DNS example: if you have an DNS resolver talking to an upstream proxy or a common authoritative server, that resolver may want to set a different Client Subnet for different query streams, so that input can be used to determine the preferred reply. In the presence of a NAT or CGNAT, there could be interesting interaction with the dwell timers in the state table of the middlebox. If you change it too late, the middlebox may recycle the most recently freed port, which could result in you getting the same 4-tuple to an observer on that part of the path. Speaking strictly as an individual, I think using a single CID per coalesced packet as Christian prefers is cleaner and leaves open app behaviors we may not have yet identified. 
regards, Ted\nI'm not a big fan of Client Subnet, and these particular parallel connections may or may not pose a risk; it's just a proxy situation where the parallelism might occur.\nThe discussion is active, I'm marking this as design.\nNAME I might be missing something, but if an endpoint wants to cut linkability between the packets used during the handshake and the packets used in 1-RTT, that endpoint has to switch to a new CID after the peer has dropped its Initial and Handshake keys, regardless of the outcome of this issue. That is because if the endpoint switches to a new CID while the peer is still sending Initial or Handshake packets, we allow the peer to use that new CID in the long header packets sent in response. Therefore, I do not think that requiring endpoints to use one DCID in all the coalesced QUIC packets helps us in practice.\nIn my implementation, ngtcp2, it processes packets in the way that the PR proposes; which means I just overlooked the existing text, but anyway, I think that processing a packet atomically is a cleaner approach implementation-wise rather than passing some kind of pointer to the very first packet header.\nNAME Once a non-coalesced packet is passed to a connection, in my implementation I don't have to do any checking. The server already makes sure to pass packets to the correct connection based on the DCID. So there's additional validation logic required for coalesced packets. And the easiest way is to just check that all DCIDs match. Sure, it would be possible to somehow query which other DCIDs would also be valid for the same session, but it's more complicated. And given that nobody has come up with any good reason to coalesce packets with different DCIDs in the first place, I'm wondering how this additional complexity would be justified.\nI don't foresee a large problem with the proposed change, but I also don't think it's necessary.\nDo we have a proposed resolution for this one?\nI think we settled on requiring connection IDs to be the same, but I haven't had the time to write that up, so not just yet.\nNAME FTR, discussion happened on the mailing list . My sense is that it's still not clear to some (including me) why the current fix in (requiring \"connections\" to be the same) doesn't work, but nobody is arguing strongly against what NAME said above (requiring \"connection IDs\" to be the same). So that might be the least common denominator.\nadded the text \" Connection IDs that are issued and not retired are considered active; any active connection ID is valid for use at any time, in any packet type. \" As far as I can tell there is no text specifying that an issued connection ID cannot be used with subsequent connections. The introduction to connection IDs states that there must not be correlation across connections, but if an issued connection ID was never consumed, there would be no correlation because it was transmitted encrypted.\nWe're currently not specifying which DCID is supposed to be used on retransmissions of Handshake packets. A client might receive a NEW_CONNECTION_ID frame in the first couple of 1-RTT packets it receives, before receiving either a HANDSHAKE_DONE or an acknowledgement for a 1-RTT packet it sent itself. Therefore, it's still running loss recovery on the Handshake packets it sent out earlier. If there's an acknowledgement outstanding for one of those Handshake packets, the client will have to retransmit that packet. Which DCID is it supposed to use on the retransmission? Is it the DCID that was used on the original packet? 
Or is it the CID provided in the NEW_CONNECTION_ID frame? Note that the NEW_CONNECTION_ID frame might already have requested the retiring of the CID used during the Handshake.\nAs I can see, there are three options: a) let the receiver of NCID send Handshake packets using the new CIDs; b) require (or recommend) the receiver of NCID to continue using the original DCID until the handshake is confirmed; c) require (or recommend) the sender to withhold the emission of NCID (with a non-zero RTP) until it sees an ACK for handshake confirmation. I prefer (c) (or maybe (b)), because (a) has corner cases even though it might seem easy. Consider the case where a server sends TP.preferred_address and then, as the first packet, sends the NCID frame with RTP set to 2. If we adopt (a), this would mean that the client would have only one CID that it can use (the one being embedded in the NCID frame) even though it needs two (one for the original path, and another for the path specified by TP.preferred_address). As you can see, the design of the NCID has been based on the assumption that providing one new CID alongside a retirement request guarantees that the receiver would have enough active CIDs. But in this particular case, that is not the case. Based on this observation, and based on my assumption that a non-zero RTP would be sent by only some of the endpoints (the necessity of CID retirement is questionable for HTTP/3, the scope of QUIC v1, as the HTTP connections are typically short-lived and can most likely be gracefully terminated to induce a new connection when retirement of CIDs is necessary), my preference goes to (c): an endpoint that uses RTP should cover the cost.\nNAME (I think you mean RPT, not RTP) Thank you for the analysis. I prefer (c) as well, except that I do not want to condition it on non-zero RPT, to keep things simpler. Otherwise we're still left with what to do at the receiver. Ensuring that the handshake confirmation is acked means that there are no HS packets pending.\nI'll also note that this issue was found through a resilience test we've built for the interop runner -- high loss handshake test. We're excited to see how everyone's handshake machinery performs under 50% loss :-)\nAt the time of this writing, ngtcp2 does (b).\nNAME for your problem with (a), isn't that just a general problem with having multiple paths and then getting a NEW_CONNECTION_ID with RPT for all previous CIDs? You end up in a scenario where you only have 1 CID (the new one) and multiple paths that just had all their CIDs retired. If you can get away with using that one CID for the active path, you can still make forward progress. If not, the connection dies.\nI believe we specify (b) already: \"A client MUST only change the value it sends in the Destination Connection ID in response to the first packet of each type it receives from the server (Retry or Initial); a server MUST set its value based on the Initial packet. Any additional changes are not permitted; if subsequent packets of those types include a different Source Connection ID, they MUST be discarded. This avoids problems that might arise from stateless processing of multiple Initial packets producing different connection IDs.\" URL\nNAME that is just indicating when a client should change DCID in response to receiving an Initial/Retry packet with a new CID. I don't think anything restricts either side from sending NEW_CONNECTION_ID frames in their first 1-RTT packets and their peers immediately using them for any packets (including handshake) they send. 
Personally, I don't have a problem with (a). I think that no matter what, receivers will have to handle the multiple-paths-but-single-CID scenario, so I would prefer not to special-case the handshake. I also don't think we have handshake confirmation problems, because an endpoint should only send a NEW_CONNECTION_ID out when it's able to accept it. Then, if the peer can decrypt the packet with the new CID, I see no reason why it shouldn't be allowed to use it immediately.\nNAME my reading of the text I quoted is that you cannot use any DCIDs in long header packets that weren't either the original client DCID/SCID or from a server Initial or Retry.\nNAME There's a fourth option: d) require that the sender withholds the NCID frame until it has confirmed the handshake. As soon as an endpoint confirms the handshake, it drops the Handshake keys (I'm assuming here). As a result, the endpoint obviously doesn't care about Handshake packets any longer (and especially doesn't care which DCID was used to send the packet). Therefore, there's no problem if the receiver of the NCID frame immediately retires the connection ID used during the handshake, even if the NCID and the HANDSHAKE_DONE frame were reordered.\nNAME The part about what to do with DCID changes is underspecified. From the part of the spec you quote -- \"Any additional changes are not permitted; if subsequent packets of those types include a different Source Connection ID, they MUST be discarded.\" -- it's not clear what these additional changes are limited to. I think the intent was for all long-form packets, but that's certainly not clear here. In any case, the problem with that approach is that the client will have to carefully construct HS packets with the old DCID while sending everything else with the new DCID.\nNAME : (d) works too, but an endpoint still has to decide which DCID to use for sending HS packets in the case that the HANDSHAKE_DONE frame is lost. I'd rather not leave that unspecified.\n\"of those types\" => Retry or Initial, since that's what it was talking about. That's trying to avoid the case where the server statelessly generates a new Retry (or Initial, in the rare case that would be stateless) with a second SCID upon receiving a retransmission or duplication of the client's Initial. But it doesn't say anything about the server issuing new CIDs via an NCID frame in 1-RTT before the handshake is confirmed. I think (a) or (b) would be fine; the server is clearly intending to accept the other CID if it issued it, but could conceivably use separate logic for long-header versus short-header routing.\nCan someone explain why using the new connection ID is a problem?\nNAME I think that that is a good way of looking at the issue. I agree that the endpoints can start using the new CIDs during the handshake, if the peer sends NCID frames. The only problem that I can think of is related to retirement (see URL). When the server provides TP.preferred_address, and also decides to retire the CIDs immediately after sending all the handshake transcript, there's going to be a very small time window in which it has to send two CIDs (one for the current path and another for the path using the preferred address). Unless the QUIC stack allows bundling of two NCID frames when building a QUIC packet, the client might end up not having a sufficient number of CIDs, in which case it might terminate the connection.
It is my view that this is very different from the ordinary case of retiring CIDs, in which case there would be a large enough time window to guarantee that a handful of new-generation CIDs are delivered before the older ones are retired (cc NAME). That said, I now tend to think that the most we might have to do is the following two: clarify that CIDs received using NCID frames can be used for Handshake packets; and have cautionary text stating that a server needs to make sure that the client has enough new-generation CIDs when it retires an older generation of CIDs.\nNAME Wouldn't we be able to solve this by using my proposed option d?\nArtificial constraints on when the frame can be sent would be terrible to enforce and would introduce a performance cost for using NEW_CONNECTION_ID. In a great many cases, the 0.5-RTT from the server is idle, which makes it a great way to get things like NEW_CONNECTION_ID sent without taking capacity from real work. Asking the server to defer sending would mean that NEW_CONNECTION_ID could compete with HTTP responses and the like. I see no reason not to allow use of connection IDs when they are available. They apply to the connection as a whole. You will note that we allow changing them with the Initial/Retry mini-protocol in ways that don't correlate with other connection-level events, so this isn't any different in that regard. Forcing the connection ID state to synchronize with other state changes seems more likely to complicate things than help. The caution NAME mentions is always true. Whenever an endpoint is forced to retire connection IDs, a short supply is always a risk to the connection.\nI think QUIC allows more than one NCID frame in a single QUIC packet. Now my preference is (a). If a server utilizes retire-prior-to, it should know the consequences and should provide sufficient backup connection IDs. Otherwise, it would be a misconfiguration of the server software.\nNAME The problem is that the client might end up receiving only some of the NCID frames, unless the server always sends (and retransmits) the set of NCID frames as a whole. But anyways, I agree that (a) is probably fine. That is because it is trivial to design a server that never requests CID retirement during the handshake. All you need to do is start issuing the new generation of CIDs at least N seconds before sending NCID frames that ask for the retirement of the previous generation, where N is your handshake timeout. Knowing that design is why I initially preferred (c), not realizing that it works equally well with (a). But now that I understand that, I prefer (a) as it is simpler.\nGiven a server has to send NCID to have the CID change, I'm fine with (a), but I thought it was not what we'd previously agreed to and specified, so I believed (b) was the status quo.\nI don't think this is a serious problem, because if the server doesn't want the long header DCID to change, it can wait before sending NEW_CONNECTION_ID. That said, we do have an issue here in that different readers of the draft had different interpretations of what was allowed. All that to say I don't much care if this is allowed or prohibited, but we should at least make an editorial change to clarify things.\nI agree with NAME here. Any alternative is ok, but the draft should say whether servers can expect that the client might switch DCID and whether the clients can expect the server to send NCID w/ RPT>0 before the handshake is complete.\nLet's keep it simple: no migration before Handshake Done, period.
That would include prohibiting NCID in 0-RTT. Also state that implementations MAY drop any packet received with an NCID different from the original before Handshake Done.\nNAME The problem is not that simple. Consider the case where the server receives ClientFinished and sends two 1-RTT packets, the first packet containing HANDSHAKE_DONE, and the second packet containing an NCID with RPT set to 1. If the first 1-RTT packet gets lost, the client receives a request to retire the connection ID before it confirms the handshake.\nI think I'd prefer a design that meets the two requirements stated below: A server should not be required to withhold NCID frames until it receives an ACK for HANDSHAKE_DONE. A client should be allowed to use the original CID until the handshake is confirmed. The intention is to minimize additional complexity to existing (and future) QUIC stacks. One solution that meets these requirements is to state that an NCID frame confirms the handshake, exactly the same way as the HANDSHAKE_DONE frame does. That would be a trivial change to any of the QUIC stacks, and there would be no ambiguity regarding the state changes.\nDiscussed in ZRH. Proposed resolution is to clarify that issuing a CID means being prepared to see it used at any time. Additional Editorial issue on packetization issues.\nEditorial PR on packetization challenges with retransmitted frames is .\nUgh, editorial changes for a design issue. Reopened for consensus calling.\nThis was fixed by . is related to this issue, but that simply clarifies things. To keep things sane, let's just close this when is merged.\nLooks good. URL discovered a minor \"discrepancy\" in our requirements for coalescing packets. We required that: senders not coalesce packets for different connections in the same datagram; receivers reject packets with different connection IDs in the same datagram. URL was the result of a discussion where I was convinced that looser constraints here weren't harmful. The requirements would identify connections, not connection IDs. Originally, my inclination was to fix the requirement on the sender to use connection IDs, which would remove some linkability. But then I was reminded that linkability is already defeated for several reasons. You can't coalesce outside of the handshake and we require a stable path for the handshake AND long headers include a source connection ID that can be used for linkability. So the linkability concern is basically lost. We also learned that some implementations were generating packets while processing incoming packets. Later they would coalesce these into a datagram that might then have different connection IDs on the packets it contains. There has been some opposition to the proposed resolution in PR 3870. Apparently, for some, having multiple connection IDs in the same datagram complicates processing. I don't understand this objection. It seems to me more difficult to retain state across packets than it is to process each atomically. I was hoping that Christian or Nick could explain more about how this affects them. Marten also indicated that there was a reason not to allow this, but if I understand it correctly, this is based on some potential for future optimization, which could be conditional on having consistent connection IDs. If we moved to a scheme that required consistent connection IDs, anyone using that scheme could be required to avoid coalescing different connection IDs. So this seems more of a theoretical concern.
And - in anticipation of maybe choosing to make senders use a consistent connection ID - if you might be generating coalesced datagrams with different connection IDs in packets, how hard would it be to ensure that these are consistent? Lars and Ian, I think both of you indicated that you might do this."} {"_id":"q-en-quicwg-base-drafts-c17687922bfc24d60b84f3f65f63b7e3f1ea86a81076d6d3dc7a7bba8c2f011f","text":"From an interop discussion with NAME and NAME: HTTP/1.1 requires the presence of a Host header. If no Host header is present, the section on describes how you determine what host is being asked for, ending with \"guess\" if it's an HTTP/1.0 request. HTTP/2 says that you generate an :authority pseudo-header if sending direct-to-H2 and carry through the Host header if you're relaying HTTP/1.1. There is no discussion about what to do with an HTTP/1.0 (or 0.9) request being relayed over HTTP/2, which wouldn't have a Host header; you're neither required to generate nor send one, even an empty one, nor are you given guidance about how to figure it out. HTTP/3 carries forward the H2 language (currently by reference, and explicitly after ). Do we want to have some guidance here? Or does this issue instead belong in URL to move some of the existing guidance to be HTTP version-independent? (NAME)\nNAME Thank you for looking into the problem. Regarding if (or how) we should handle the issue, I think my weak preference goes to having one rule that applies to both HTTP/2 and HTTP/3. That would be less surprising to the users, and would also be helpful to implementations. It could well be the case for an HTTP server to have common logic for handling pseudo-headers (or the lack of them) between HTTP/2 and HTTP/3.\nHost is an HTTP/1.x header field (it was just late to the party), so HTTP/1.0 includes it as an extension. And that requirement is specific to HTTP/1.1, so would theoretically not apply to HTTP/1.2 (and that was intended): it was a short-term requirement imposed by the IESG because they wanted to require deployment and didn't understand that it was already deployed. In general, I don't understand why \":authority\" exists (as opposed to always using \":host\"), but I think the HTTP requirement is simply that the identifying data must be present somewhere in the request.\n\":authority\" exists for the purpose of always transporting the equivalent of the , with \":scheme\" and \":path\" transporting the rest. It's mostly a consistency thing, I think. If you have a \"Host\" header instead, that's acceptable, too. So should we say that the target authority MUST be present in either \"Host\" or \":authority\", or do we leave it alone?\nWell, assuming we still allow requests for URIs that do not have an authority component (e.g., a URN or similar), I think it is better to just leave it alone. OTOH, folks are still going to ask what needs to be sent when the request target doesn't have an authority component, so that ought to be defined somewhere here more sensibly than it was for 1.1.\nDiscussed in ZRH. Proposed resolution is that if a URI scheme has a mandatory authority component, it better have one of those two things, preferably an :authority. Alert httpbis.\nDoesn't this beg the question \"what if the request is for a URI that does not have a mandatory authority component?\" All of the other cases are described.
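To make the fallback order being debated above concrete, here is a minimal sketch (Python; the helper name, the dict-based header model, and the scheme set are illustrative assumptions, not from any spec): prefer ":authority", fall back to "Host", and treat absence as an error only when the scheme mandates an authority component.

```python
# Hypothetical sketch of the authority fallback discussed above.
# The helper and SCHEMES_REQUIRING_AUTHORITY are illustrative only.

SCHEMES_REQUIRING_AUTHORITY = {"http", "https"}

def target_authority(pseudo_headers: dict, headers: dict):
    """Return the request's target authority, or None if legitimately absent."""
    authority = pseudo_headers.get(":authority")
    if authority:
        return authority
    # Fall back to the Host header field, e.g. for a request translated
    # from HTTP/1.1 by an intermediary.
    host = headers.get("host")
    if host:
        return host
    # Neither is present; only acceptable when the scheme has no
    # mandatory authority component (e.g. a URN-like scheme).
    scheme = pseudo_headers.get(":scheme", "")
    if scheme in SCHEMES_REQUIRING_AUTHORITY:
        raise ValueError("malformed request: no authority for scheme " + scheme)
    return None
```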
I would add something like: Thanks!"} {"_id":"q-en-quicwg-base-drafts-4022537d9fb7ed8892c510c58cdc70411f7c4199e5527ad88e5c8afa0f5c8fcf","text":"A sender can intentionally skip a packet number, causing the receiver to treat a packet as \"Out-of-order\". From NAME comment on"} {"_id":"q-en-quicwg-base-drafts-ca4ca8250a0b0ea64fe9361466c7bfccde12c9b17124c9c8ba2919cfecd62449","text":"Addresses . I'm flexible on wording and where exactly it lives in the document. I put it in Security Considerations.\nI just completely rearranged the new section based on your input. See how it reads now. Editors, I don't seem to have permission to request review. Can someone add NAME\nUnfortunately, some of the reviews are hidden as obsolete. I want to respond to this by NAME about disable_migration. I tried not to recommend anything here except to send disable_migration. The client is going to decide what to do with regard to NAT rebinding. If wired, I probably send more PINGs or something. If wireless, maybe not. If I'm quite sure I'm not behind a NAT, I probably do nothing. There's nothing normative here but to send the TP, so if people hate this I can just delete the explanation.\nI moved the whole thing to Section 5 as NAME suggested. I've taken most of the other suggestions, but am not sure if the intro needs to change, or what exactly people don't like about the disable_migration explanatory text.\nI'm actually fine with this. We were considering keep-alives on the server side when we can't support migration (or NAT rebinding) based on the current infrastructure. So having the spec suggest keep-alives to the client would be better. It would allow the client to choose if it would prefer keep-alives or just to close the connection.\nAfter taking all suggestions from NAME I simply deleted the explanatory text about sending disable_migration. I didn't see the stuff about NAT rebinding as prescriptive or changing the semantics, but it's non-normative, clients are gonna do what they're gonna do, and it's not worth holding up the rest of this over. If people want to add NAT rebinding considerations to disable_migration we can fight that out in another issue.\nFWIW, we have a number of cases where the destination IP and port + CID would route correctly, but the original IP (being anycast) would not always route correctly. It's likely there are millions of connections to the destination IP being migrated to, but yes, it's a smaller number than the anycast IP.\nSo NAME do you want new or revised language here, or will the other issue address the linkability stuff? I would like to close this out if it's good enough.\nGiven how hard this has been to get right, I want NAME to take a look. But I'm happy with this.\nA reminder for NAME to either take a look or say that this is OK to proceed without another review.\nI could not stand leaving this unmerged any longer. Anyone with more comments can make a new PR. Thanks to everyone who helped out here.\nThe transport draft talks about QUIC being resilient to NAT rebindings. However, if a QUIC server is behind an L3 load balancer which simply routes based on 5-tuple, then connections to this server will (likely) not survive rebinding. I couldn't find any language which addressed whether such a deployment was \"OK\" or not. I think that since this load balancer does not support NAT rebinding resilience, it is implicitly \"bad\" according to the draft, but others might disagree. In any case, I think there should be text to address this.
Note, if the server advertised a preferred_address which routed around the load balancer, and if clients were required to use this address, then that could obviously work. But preferred_address support is a SHOULD, not a MUST.\nWe did discuss this issue. Use of empty connection IDs and demultiplexing based on source addresses. The conclusion there was that this was OK: people could do that, as long as it was clear that what they were getting was no different than TCP. That is, we made it clear that if you don't use the identifiers that you control (your addresses, the connection ID), then you accept that you can't handle migration at all and connections will drop (well, unless you do something like trial decryption, which has some obvious scaling issues). The outcome of that discussion was captured in . Assuming that Ryan finds this answer satisfactory (and I haven't missed anything), then I suggest we just close this as a duplicate of .\nI don't think this captures Ryan's concern: when the server is using a non-zero CID and also is behind 4-tuple routing. I think he's looking for a section of text to coherently describe what you have to do here: send either disable_migration or preferred_address; or forward packets between servers; or don't use a common stateless_reset key, or put the client address/port in the reset token. This is sort of ops-drafty, but there are real requirements on servers to behave in a secure way. I think someone with full command of the transport draft would figure this out, but it is not clearly stated anywhere.\nI agree with Martin Duke and the language he suggests makes sense to me.\nThis does feel a bit ops-drafty to me, but I'd be happy to review a PR.\nI'm sure that this is now basically unrecognizable...Some more suggestions, but overall this LG"} {"_id":"q-en-quicwg-base-drafts-ff01d0a590d684df919599b2ddf07c94e07c8fd115d3e45cf2e9ba4dfa20bc6f","text":"This is an attempt to write down NAME 's proposal to . This was surprisingly hard to write, and undoubtedly could be improved. There are cleaner ways to solve this problem, but this one is lowest-impact to the existing spec.\nNAME thanks for the new text, that helped. I think we've resolved outstanding issues, unless my new text has created new problems. Two possible ways to enlarge this solution: should we describe this attack in Security Considerations? As these MAYs are an effort to counter something in particular, perhaps we should discuss. It would also clean up some accounting issues if each NCID frame MUST set Retire Prior To to more than the maximum continuous sequence number that it had received an RCID for. Worthwhile?\nThat's interesting at first glance, but it then poses the question of what the recipient of NCID should do when it sees an RPT value change. There's a difference in behavior between \"I've seen these\" and \"you need to send these.\"\nI think this PR is sufficient to both document the issue and provide some minimum guidance, so I'd suggest we editorialize a bit and then move forward with a solution along these lines.\nIt seems all the discussion of the problem is happening here and not on the issue, so I will propose my alternative solution here: I know folks have been resistant to changing state based off receiving an acknowledgement of a packet/frame they sent, but what if we changed how the RetirePriorTo works, and just required the packet containing the frame to be acknowledged, instead of requiring the RCID to be sent? Would this solve the problems?
This would mean the receiver of the NCID shouldn't have to track any additional state beyond what it takes to acknowledge the packet (which they have to do anyways); it just immediately throws away the state. And the RCID packet would only be used when the endpoint that was given the CID chooses to retire it.\nNAME I think this PR has been an attempt to clarify the \"leaky\" approach, that tries to clarify the amount of state that an endpoint should retain. There are indeed discussions of what that amount is, but I think it is essential to have such detailed discussion on the PR in order to determine exactly what a particular approach would be like. To paraphrase, I think we are making progress on this PR to describe a specific approach. If we are to consider something different, I think that discussion should happen on the issue (and emerge as a separate PR).\nOk, trying to put this one in the ground. I think the only remaining dispute is between NAME and NAME on whether active_connection_id_limit is a sufficient number of RCIDs to remember.\nReading through this proposal, I fear that this would be very difficult to implement. If the ACK for a RETIRE_CONNECTION_ID frame is lost, an endpoint cannot safely send a NEW_CONNECTION_ID frame with a RetirePriorTo, since this would cause the peer to have more than RETIRE_CONNECTION_ID frames in flight. In order to safely use RetirePriorTo, this endpoint then would have to track receipt of the ACK frame. I'd very much like to avoid the complexity associated with that.\nI finally caught up with this issue, and I think that implementer advice is my preferred way forward. To that end, I like this PR as the resolution.\nOK -- there has been an enormous amount of discussion. The current state of the PR is that all RCIDs will eventually be sent, but if there are more than active_connection_id_limit, some may be delayed by an RTT or more. If it gets out of hand the RCID sender can throw a connection error. I believe this reflects what NAME articulated a few dozen messages ago, and in the thread above I have pseudocode for a lightweight implementation of this (i.e. essentially no additional resources for an arbitrarily large backlog of RCIDs). The advantage of this approach is that it is lightweight and is a very subtle change to behavior that will not affect interoperability at all or break the existing contract of a Retire Prior To. There are two alternatives that have some traction: 1) Just bite the bullet and do . This breaks interoperability but is a clean architectural change that addresses the root of the problem. I would be very comfortable with this, and prefer it to other things that break the existing semantics. 2) NAME \"leaky\" approach that allows endpoints to simply not send some RCIDs at all if there are too many pending. I used to prefer this, but given my existence proof above I'm not as excited about it anymore. It breaks the contract that is in draft-27 instead of merely bending it. I hope that covers the decision space. I would like to put this issue out of its misery.\nI'm reluctant to blow away the whole PR with brand new text, so in this comment I will propose four entirely new paragraphs that IMO say what needs to be said.
Either the current PR, Martin's revision, or this proposal could serve as the basis of the eventual change: If a peer sends large numbers of NEW_CONNECTION_ID frames that increase Retire Prior To, and/or acks of packets that contain RETIRE_CONNECTION_ID are lost, the state required at the RETIRE_CONNECTION_ID sender can grow without regard to its active_connection_id_limit. Therefore, endpoints SHOULD take steps to bound the state associated with needed RETIRE_CONNECTION_ID frames while ensuring that they eventually transmit all required RETIRE_CONNECTION_ID frames. For example, an endpoint might limit voluntary retirement of sequence numbers if it has not received enough acknowledgments of packets containing previous retirements. It might also restrict the RETIRE_CONNECTION_ID frames in flight to a single packet in order to simplify tracking of what is in flight, what needs retransmission, and what has been acknowledged. An endpoint MAY treat having too many connection IDs to retire as a connection error of type CONNECTION_ID_LIMIT_ERROR. The threshold for this error SHOULD be at least twice the endpoint's advertised active_connection_id_limit. Endpoints SHOULD NOT issue updates of the Retire Prior To field prior to receiving all of the RETIRE_CONNECTION_ID frames for the previous update to Retire Prior To. ** If people much prefer this or MT's revision, I'll update the PR accordingly.\nI took NAME 's suggested text. I would prefer a little advice on how to implement the lag, but I can live with this text.\nLGTM.\nWhen an endpoint sends a RETIRE_CONNECTION_ID frame carrying a connection ID that has been active until that point, that endpoint is expected to open room for accepting a new connection ID. That means that if the peer does not acknowledge the packets that contain RETIRE_CONNECTION_ID frames, but keeps on sending NEW_CONNECTION_ID frames with increasing values in the Retire Prior To field, the number of unacknowledged RETIRE_CONNECTION_ID frames (or, to be precise, the number of unacked retiring connection IDs) keeps on increasing.\nTo address the issue, I think we need to do one of the following: State that the attack exists, and that QUIC stacks should implement mitigations. Introduce a MAX_CONNECTION_IDS frame. The reason we do not have a similar issue in stream concurrency control is because the credit is controlled by MAX_STREAMS frames, rather than using FIN (or reset) as an implicit signal to indicate availability of new credit. We could do the same for connection IDs.\nMathematically, this might be unbounded, but it's more likely that in practice the limit is small, as it would be unlikely that unacknowledged RETIRE_CONNECTION_ID frames would remain so once more NEW_CONNECTION_ID frames start arriving. More so as the issuer of those NEW_CONNECTION_ID frames has to have received the RETIRE_CONNECTION_ID frames. Adding new frames for this seems a little unwieldy, so I'd prefer to just acknowledge that the problem exists. Note also that Retire Prior To has the effect of making this limit squishy in other ways. (This is a design issue.)\nI'm not sure I understand the attack here. Why would an implementation need to keep track of acknowledgements of RETIRE_CONNECTION_ID frames (other than for purposes of loss recovery, but then this applies to any retransmittable frame type)? An endpoint sends a RETIRE_CONNECTION_ID frame when it decides to not use the respective CID any longer. At this point, it can forget about the connection ID altogether. It might make sense to hold on to the stateless reset token for a while (3 PTO?)
longer, in order to be able to detect stateless resets, but this is an optimization and wouldn't lead to unbounded state anyway.\nNAME I think is fine, assuming that the endpoint would just stop retransmitting the oldest RETIRE_CONNECTION_ID frames. That limit would be too strict to be used for detecting potential attacks (and therefore closing the connection with PROTOCOL_VIOLATION).\nNAME This is about keeping state for retransmission, for recovering from loss. As stated in , the issue is unique to RETIRE_CONNECTION_ID frames, because this frame type is (ab)used for providing the peer additional credit, compared to other flow control that uses dedicated frame types (e.g., MAX_*) for communicating credit.\nThank you for the explanation, NAME I think I now understand the problem. This would require endpoints to make assumptions about how the peer handles the and how it issues new CIDs. This can get complicated really quickly and leave the connection in an unrecoverable state (or at the least a state with fewer CIDs than it should have). Introducing a new frame, as NAME suggested, seems a lot cleaner to me.\nJust to be clear, while I used RPT as a way of explaining the design issue, I think it is not a requirement. A malicious client can repeatedly migrate to a new address, initiating the use of new CID pairs, while at the same time intentionally not acknowledging packets containing RETIRE_CONNECTION_ID frames sent from the server. If the client does that, the number of CIDs that the server has to track for retirement keeps increasing as time goes on.\nClarification question: If the person sending RCID isn't getting ACKs back, I think we still need to assume that those CIDs are being replaced? If not, then the non-RPT side of this would just mean the person sending RCID frames runs out of CIDs to use and has to stop. So the problem statement is that: this person is running over their max active CID limit from the perspective of the person withholding ACKs, but each side has their own view, and so the person retiring them stopped counting them as active once the frame was sent (but not acknowledged). And that can happen via RPT as well, just because it means that they're going to be instructed to send the RCID, but the problem is the same either way? (Just want to make sure I'm understanding the problem correctly.)\nNAME Yes, I think your problem statement is correct. Or to be even more concise, the problem is that the issuer of CIDs is given additional credit when it receives RCID, which happens before the consumer of CIDs drops the state it needs to retain for retransmitting RCID. For the attack to work, an attacker has to let the peer consume and retire CIDs. RPT and intentional migration are examples that make that happen.\nSo the issue here is fundamentally that you can induce the peer to send retransmittable frames which aren't subject to some kind of flow control, and this is the only instance we have of that. STOP_SENDING and failing to acknowledge the RESET_STREAM appear to have a similar profile, except that the number of RESET_STREAM frames you could have outstanding is bounded by the number of streams the peer has opened, which they control. You could execute the same attack at a larger scale by choosing particular regions of stream data to never acknowledge, forcing the peer to keep them in memory for retransmission, but the peer could stop sending until that region gets through (applying backpressure) or simply reset the stream.
Since a RESET_STREAM is smaller than the data region and only one is needed per stream, this collapses into the previous case. What's different here is that the endpoint being attacked doesn't choose whether to retire these CIDs and has no mechanism to apply backpressure. It seems like there are a couple of paths to resolving this. As you suggest, we could define an explicit frame that grants the additional CID space, and implementations would not send / could withhold this frame until the RCID is ACK'd. This creates a backpressure mechanism and has the side benefit of allowing an implementation to vary the number of CIDs it wants to track over time, but the disadvantage of requiring an extra round trip before CIDs can be replaced in the normal case. We could mimic the \"collapsing\" behavior above by defining a frame that retires all CIDs up to a given sequence number. Then the implementation under memory pressure chooses to retire a few extra CIDs and collapses the whole set waiting to be retransmitted to a single frame.\nNAME I think your summarization of the problem is accurate. Regarding an attacker selectively acking packets carrying STREAM frames, yes, the attack exists, but the sender can mitigate that problem in certain ways. We already state that in . This issue is the only case where the protocol cannot be implemented properly with a bounded amount of state. Regarding how we should fix the problem, if we are going to make a protocol change, my preference goes to applying the same design principle that we have in other places (i.e., introduce a MAX_CONNECTION_IDS frame). I do not like designing a mechanism specific to this purpose, as there is a chance that we might make mistakes. I can also live with what NAME proposed in URL (i.e., encourage leaky behavior).\nSimilar to NAME I'd prefer to acknowledge the problem and indicate that peers may want to limit the number of unacknowledged RETIRE_CONNECTION_ID frames to the number of connection IDs they support. The other option I can imagine is to make the RETIRE_CONNECTION_ID frame cumulative, which provides a strong incentive to use connection IDs in order. Though we may have to make an exception for the connection ID in the received packet if we did that.\nSo the bounds on this aren't that terrible. Say you had to retire 1000 connection IDs all at once. You could track each one individually as you send RETIRE_CONNECTION_ID. That's 1000 things to track, which might be more state than you want (it's not 1000 bits, because you need to maintain relationships with packet numbers). Or you could just keep the number in flight strictly limited, no matter how many connection IDs you need to retire. Tracking that, plus what you have been requested to retire, should bound the state required to implement retirement. In other words, you do control how much state you commit to retirement, just like everything else.\nNAME That's my preferred solution, but in my reading of the spec this is not allowed. We MUST send RCID for each sequence number for which the peer has sent retire-prior-to. I think the language in Section 5.1.2 needs to change.\nNAME idea of allowing RCID to be cumulative is also good and computationally cleaner, although we'd need a different frame type or an additional field in the frame (I don't think we can get rid of retiring non-continuous sequence numbers).\nI'm happy to file a PR for the NAME or NAME approach based on the overall sentiment.\nOne purely implementation option: It's always legitimate to repeat a frame.
Therefore, if you track only the smallest and largest un-acked RCID sequence numbers, you can just keep retiring the whole range every time you retransmit. ACKs will shrink the range (hopefully to zero). There's no harm beyond unnecessary bytes in retransmitting something a few extra times to avoid excessive state tracking.\nNAME While I won't say that's impossible, I'd argue that the design is going to be complex. When CIDs are used in a legitimate manner, they are not going to be retired one by one. A CID is retired when a path using that CID is abandoned. Consider the case where an endpoint has 4 active CIDs (CID0 to CID3), and two paths. One path could be using CID0, the other path could be using CID3. When the endpoint abandons the path that is using CID3, it has to retire CID3 but it cannot retire the others. NAME The number of CIDs that can be sent on the wire at a time is restricted by the CWND, therefore there is no guarantee that you can send RCID frames for all CIDs that you want to retire at once. In other words, I think the approach at best collapses into what NAME has proposed (i.e., leaky behavior).\nAt this point, I also think NAME solution is the best approach, since it is a very minor change, and changing RETIRE_CONNECTION_ID to be cumulative is a larger change than I want to accept at this point. NAME thanks for your examples.\nOne idea: Can we simplify everything by not requiring sending RETIRE_CONNECTION_ID for CIDs less than the Retire Prior To? RETIRE_CONNECTION_ID would then only be used when a CID was used and needed to be retired so more CIDs could be issued to the user of the CID.\nNAME see my comment in the PR. More disruptive to current implementations, and it eliminates a useful signal. The current proposal will not matter for well-behaved clients but will protect servers from some attacks.\nWe haven't implemented this yet, so it's not disruptive to our implementation, FWIW. I wonder how useful the signal is. Retire Prior To is really saying \"These are going away sometime soon, whether you like it or not.\" to the peer. If an implementation never sent RETIRE_CONNECTION_ID, would that create a problem of some sort? I suspect someone will do just that.\nNAME As stated in URL, this attack can be mounted without using Retire Prior To.\nNAME in this case isn't the client limited by active_connection_id_limit? As a server, if the client has in no way indicated it has accepted my retirement, I'm not going to accept new CIDs. The NCID/RPT case is different because the spec specifically allows the client to NCID speculatively, assuming the server will immediately retire the CID.\nNAME The attack here is that when a malicious client retires a CID to which the server has responded, that server would retire the CID that it has used on that path, and also provide a new CID to the client. But the client never ACKs the RCID frames that the server sends. As an example, consider the case where a client uses sCID1 on a new path, the server responds on that path using cCID1, and the client abandons that path and sends RCID (seq=sCID1). When receiving this RCID frame, the server would send RCID (seq=cCID1), and also send NCID(sCID2). The client intentionally does not ack the packet carrying these frames, but uses sCID2 on a new path that carries NCID(cCID2), repeating this procedure.\nAh, I see now that my defense doesn't work, because if I reject the NCID I have to kill the connection, and it may just be because of unlucky ack losses.
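A rough sketch of the range-based option floated at the top of this exchange (Python; the data model and method names are assumptions): every retransmission simply re-retires the whole outstanding range, and acknowledgments shrink it. The caveats raised in the replies still apply: the frames that fit in one packet are bounded by the CWND, and blanket-retiring a range can collide with sequence numbers that are still in use, so this is deliberately simplified.

```python
# Sketch of the range-based retirement idea above. Assumed interface:
# a frame scheduler calls frames_to_send() when building packets, and
# on_acked() when a packet carrying these frames is acknowledged.

class RetirementRange:
    def __init__(self):
        self.lo = None  # smallest un-acked sequence number to retire
        self.hi = None  # largest un-acked sequence number to retire

    def retire(self, seq: int):
        self.lo = seq if self.lo is None else min(self.lo, seq)
        self.hi = seq if self.hi is None else max(self.hi, seq)

    def frames_to_send(self):
        # Repeating a frame is always legitimate, so just (re)send
        # RETIRE_CONNECTION_ID for the whole outstanding range.
        # (A real sender would stop once the packet/CWND is full.)
        if self.lo is None:
            return []
        return [("RETIRE_CONNECTION_ID", seq)
                for seq in range(self.lo, self.hi + 1)]

    def on_acked(self, seq: int):
        # Simplified: only shrink the range from the bottom; acks in
        # the middle are covered once the lower edge catches up.
        if self.lo is not None and seq == self.lo:
            self.lo += 1
            if self.lo > self.hi:
                self.lo = self.hi = None
```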
This whole section is not very precisely written and suffers from two-generals problems about when a CID is actually retired. I will revise to account for this, but I'm not sure how.\nNAME I agree with NAME that there's no valid threat model without Retire Prior To. If the peer doesn't acknowledge your retirement, then you shouldn't be issuing more CIDs, because you might exceed their limit.\nIn my opinion, running out of CIDs is not a big crisis. You can always create a new connection with 0-RTT with a relatively small overhead. We should be biased towards simplicity rather than never running out of CIDs.\nNAME unfortunately we are cornered by the spec. Consider two well-behaving endpoints. I'm fully stocked with CIDs and then retire one with a frame. The peer gets the RCID and issues an NCID, but the ack of the RCID is lost. If I don't count the CID as retired, then I MUST close the connection. That's bad, so I have to treat the CID as retired if I sent RCID. If you drop acks at scale, this becomes an attack. I'm really coming around to your cumulative retire idea, but will have another go at this tomorrow.\nNAME I think the framing is incorrect. The peer can force an endpoint to retire a CID (e.g., by changing RPT or discarding a path). As NAME points out, an endpoint is encouraged to issue a new CID at this point, rather than withholding one until the ACK of the RCID is received. Note also that an RCID that an endpoint sends carries the CID that the peer has issued, while an NCID that the endpoint sends carries the CID that that endpoint issues. Therefore, I am not sure what the rationale for withholding the emission of NCID until receiving an ACK for RCID would be. I agree. An endpoint is not expected to consume CIDs at a high rate. We already discourage such behavior; see URL. That's why I'm fine with the leaky behavior that NAME has suggested. NAME As previously stated, I am hesitant to have new machinery that is unique to this purpose. There is a risk of creating something buggy, both in terms of specification and implementation. My preference goes to accepting the risk (and recommending mitigations, such as the leaky behavior), or reusing a design pattern that we already have (i.e., introduce a MAX_CONNECTION_IDS frame).\nThe cumulative idea is very appealing to me, minus a future with multipath. If two or more paths are active at once, I don't know how the peer would know that if the frame was cumulative. The only option would be to ensure the two (or more) CIDs in use are adjacent, which is possible, but may require switching CIDs more than should be necessary.\nNAME I would suggest a cumulative RCID as a supplement to the individual RCID. A cumulative one would cleanly handle a lot of cases where there is an unboundedly large list of sequence numbers to retire. If there is a low sequence number that is still in use, the sender can choose to retire it to enable use of the cumulative frame.\nI know folks have been resistant to changing state based off receiving an acknowledgement of a packet/frame they sent, but what if we changed how the RetirePriorTo works, and just required the packet containing the frame to be acknowledged, instead of requiring the RCID to be sent? Would this solve the problems? This would mean the receiver of the NCID shouldn't have to track any additional state beyond what it takes to acknowledge the packet (which they have to do anyways); it just immediately throws away the state.
And the RCID packet would only be used when the endpoint that was given the CID chooses to retire it.\nFWIW, I have filed that fixes this issue without introducing a leaky behavior like . IMO, the observation is that the endpoints can check that the issuer of CIDs is not intentionally introducing gaps in the sequence numbers they generate. We already prohibit senders from introducing such gaps; detecting them on the receiver side would be a good thing to do. And if endpoints start doing that, using a cumulative RCID frame is a sensible solution, as the cumulative list of retired CIDs at any time can be represented using a SACK-like structure (i.e., max-retired + a small list of active CIDs below that max).\nI know that you have characterized this as \"leaky\", but I'm not sure that I understand exactly what is being leaked. Nothing is lost in , except that the endpoint requesting retirement has to tolerate a potentially extended period without old connection IDs being retired. In case this helps, I have no big preference between and . There is a certain elegance to that might tip me toward that, but it is slightly more disruptive. I prefer either of those to the more disruptive change in , which also has the downside of strongly encouraging connection ID changes in order to preserve a clean separation between active and retired. doesn't force you to drop old connection IDs. However, the encoding of the frame is just too complicated for me to accept. I would prefer forcing a clean separation rather than having a complex encoding like this. is a non-starter for me for the reasons stated.\nNAME In the approach we take in , there is a possibility that the issuer of the CID might not receive the sequence numbers of all the CIDs that have been retired by the peer, when the issuer is issuing new CIDs at a high rate and the receiver is also consuming them at a high rate. That is the consequence of endpoints limiting the number of in-flight RCID frames they track. I think we can provide advice so that the \"leak\" would not be an issue in practice, and assuming that would happen, I am happy with being the solution. I think is cleaner, but I acknowledge that it is more disruptive than . Part of my intent behind creating is to see what a clean fix (in terms of state machinery) would look like, so that it can be compared against . As stated in URL, by itself only partly fixes the problem. I am not against adopting along with , though. is also a partial fix; I also think that it is a non-starter.\nI think that I understand your point; thanks for your patience. I didn't really classify it as a leak though, so the choice of words threw me a little; it's more of a lag. Let's say that you have a pathological endpoint that sends an infinite series of NEW_CONNECTION_ID frames with Sequence Number N and Retire Prior To of (N-1). That's legal and will never violate any limits. But unless they maintain acknowledgments for RETIRE_CONNECTION_ID frames at a similar rate (which, to be fair, should be easy as for the same values), they could get far ahead of their peer. I don't think that it is strictly a leak, just a lag between the connection IDs being pushed and the connection IDs being successfully retired. Without or something else this could be problematic. And I now understand why is really just an orthogonal refinement, though it might make other defenses less likely to be required by making zero in cases where an endpoint isn't using discretionary retirement.
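Under the hypothetical cumulative-RCID design mentioned above (not part of QUIC v1), the "max-retired + a small list of active CIDs below that max" representation could look roughly like this (Python; all names are illustrative):

```python
# Sketch of the SACK-like retirement state mentioned above, for the
# hypothetical cumulative-RCID design: everything at or below
# max_retired is retired, except the listed exceptions. The exception
# set stays bounded by active_connection_id_limit.

class RetiredSet:
    def __init__(self):
        self.max_retired = -1       # cumulative high-water mark
        self.active_below = set()   # still-active seqs <= max_retired

    def retire_cumulative(self, seq: int, still_active: set):
        self.max_retired = max(self.max_retired, seq)
        self.active_below = {s for s in still_active
                             if s <= self.max_retired}

    def is_retired(self, seq: int) -> bool:
        return seq <= self.max_retired and seq not in self.active_below
```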
The defense in is to make effectively constant, no matter how many frames need to be retired. My main objection is really that it is a maximal design where only a minimal one is warranted. It's probably worth pointing out that there is also the simplest defense: If you can't keep up with RETIRE_CONNECTION_ID, stop talking to the peer that does this to you. This is just another example from that category of abuse we don't have specific protocol mechanisms to mitigate.\nI agree that I'd like to see a minimal solution, so I'm leery of heading towards . In order for a peer to need to send a lot of RETIRE_CONNECTION_ID frames without Retire Prior To, a lot of 5-tuple + CID changes need to be coming in. One way to rate-limit that naturally is to stop giving out CIDs if the peer hasn't acknowledged your RETIRE_CONNECTION_ID frames, i.e.: Limit the number of NEW_CONNECTION_ID frames in flight to: So if the peer stops acknowledging your retirements, you stop giving them new CIDs. I think this avoids the need for any other limits? I added 1 in my example so one NCID could be sent even if all of the peer's CIDs had just been retired, though that concern disappears if we decided to also do something like\nWhat a mess we've made! Is there interest in doing a Zoom about this? Maybe 0900 Tokyo/1100 Sydney/1700 Seattle/2000 Boston on Tue/Wed? The root cause is that the flow control mechanism isn't all that well designed. IMO is the actual correct fix to this problem, but as people say, it's disruptive. uses ACKs as an implicit signal, which seems like a problem. has a different implicit signal, and I don't think it solves the case where the client is withholding acks for unsolicited RCIDs. I really think we should just use or depending on our appetite for changing the wire image. is great for the RCID sender, and the receiver won't miss any RCIDs unless it's doing pathological stuff or the ack loss patterns are tremendously unlucky. , OTOH, is a comprehensive fix to the problem.\nI think with some added text about limiting outstanding NCIDs when there are lots of RCIDs would be sufficiently robust.\nI'm fine with trying to see if we can reach an agreement on .\nI'd prefer over . I can't really see why it would be more disruptive than . The problem with is that it will likely work fine in most cases, but there are some corner cases (when the ACKs for RETIRE_CONNECTION_ID frames are lost) in which things will break. Properly accounting for these corner cases requires some non-trivial tracking logic in order to avoid risking the connection being closed with a CONNECTION_ID_LIMIT_ERROR. On the other hand, seems to be conceptually simpler. I'd rather implement a few changes to my frame parser than introduce some ACK-tracking logic.\nJust to give another example of how an endpoint might end up required to track the retirement of more than active_connection_id_limit CIDs, when talking to a well-behaving peer: A server tries to supply three active CIDs to the peer. At the moment, it has provided CID0, CID1, CID2 to the client. The client sends RCID(CID1), RCID(CID2). The server provides NCID(CID3), NCID(CID4). The two NCID frames are delivered to the client, but the ACK for the RCID frames gets lost. The client decides to probe two paths concurrently, by using CID3 and CID4, but shortly after the packets are sent using those paths, the underlying network for those two paths disappears. The client now needs to make sure that RCID frames carrying 4 CIDs reach the peer.
In this example, we might argue that it would be possible for the client to figure out that CID3 and CID4 were provided as substitutions for CID1 and CID2. But do we want to implement logic that detects such a condition in our code? Note that tracking of substitutions can become tricky; if RCID(CID1) and RCID(CID2) were sent in different packets, it would be impossible for the client to see if CID4 was provided as a supplement for CID1 or for CID2.\nI agree with NAME that if we go with , the limit should be higher than .\nI finally caught up with this issue, and implementer advice is definitely my preferred way forward. We're going to need some complexity to (and some magic numbers) no matter what; let's do this while limiting the mechanism to a minimum. To that end, I agree with as a good resolution.\nWhile increasing the limit will certainly help, this introduces a really unpleasant flakiness in the protocol. QUIC, so far, has the property that no amount or pattern of packet loss will lead to a protocol violation (it might lead to a connection timeout if all packets are blackholed, but that's a different kind of error). This means that when running QUIC in production, screening for (transport-level) errors will be very useful for finding bugs in implementations. My fear regarding is that no matter how high we choose the limit, there will always be a loss pattern that will trigger a protocol violation.\nNAME Thank you for raising the concern. I think the recommended behavior when exceeding that limit is to stop sending RCID frames for some sequence numbers rather than triggering a protocol violation. That's actually what I had in mind, hence I called it \"leaky.\" The benefit of stopping the retirement of some sequence numbers is that the worst case becomes endpoints running out of CIDs - something that is expected to happen on ordinary connections too. Note also that in HTTP/3, reestablishing a connection in 0-RTT is always a viable option. To summarize, I think that the concern can be resolved for HTTP/3 by changing the recommended behavior from resetting the connection to not sending RCID frames for some sequence numbers.\nNAME I don't think not sending RCID frames is that easy. Tracking something that needs to be sent consumes comparable state to tracking it in flight. Or are you suggesting a different algorithm that indicates an RCID frame doesn't need to be sent? If the peer has to wait for one Retire Prior To to take effect before it's increased again, that leaves only the peer migrating really quickly and lost ACKs as the unbounded case. As NAME said, I think it's more likely a well-behaved peer would run out of CIDs in this case than any other failure mode. I previously suggested stopping sending out NCID frames when the RCID frame limit had been reached to force the issue, but that just changes who has to close the connection. FWIW, we already have sanity checks where our implementation closes the connection when a data structure becomes too large. The limits are high enough that they're almost never hit unless there is a bug, but they've been helpful in finding bugs. I would advocate everyone have these. NAME You'll never get to 0 transport errors. Some implementations will have a bug, and some peers or servers will have faulty or corrupted memory. For example, in Google QUIC we found we received the ClientHello on a stream besides 1 surprisingly often.\nI think that the idea is to reduce the set of RCID frames needed to a range of sequence numbers.
You only have to remember the few exceptions for sequence numbers that are in use, sequence numbers for which you have outstanding frames, and a range of sequence numbers that require retirement. As long as your discretionary use is sequential, this is bounded state. As mentioned, the risk is that lack of acknowledgment leads to potentially large delays in getting new connection IDs, but if you ever run short, abandon the connection. Either you have a pathological loss pattern or your peer is messing with you. In either case, it is probably not worth continuing.\nNAME Yes, this is one method of limiting the state that is being used for retaining sequence numbers that need to be sent. And it works even when discretionary use is not sequential, as the number of \"exceptions\" (i.e., gaps) is bounded by max_connection_id_limit. The other assumption of using this method is that you would somehow bound the state that is used to track RCID frames being in flight. There are various ways of accomplishing that, e.g., have a limit on the number of RCID frames sent at once, or track a bunch of in-flight RCID frames as one meta-frame. And it is correct that the outcome would be a \"lag\" in this method. The downside of this method is that it might be a disruptive change to existing implementations. It is essentially equivalent to what proposes, with the exception being that tracking RCID frames is difficult (as you may have many to send). For existing implementations, it would be easier to adopt a \"leaky\" method, i.e.: let endpoints have a counter counting the number of sequence numbers that are retired but are yet to be acknowledged. When a CID is being retired, if the value of the counter is below a certain threshold (i.e., 2 * max_connection_id_limit), the counter is incremented and an RCID frame carrying the sequence number of that CID is added to the outstanding queue; if the counter was not below the threshold, no RCID frame will be sent (leak!). When an RCID frame is deemed lost, it will be resubmitted into the outstanding queue. When an RCID frame is acknowledged before being deemed lost, the counter is decremented. When an RCID frame is late-acked, we do nothing. Assuming that the solution we prefer is the least-disruptive approach, I think we need to recommend the latter. If we have distaste for the latter and prefer the former, I think we'd better switch to , as it more cleanly reflects the design of the former. NAME I think the latter is not that big a change to existing implementations.\nYep, I think that is a good approach. Obviously, the range encoding is superior in terms of how much state it can hold, but a simple list is fine. And the effects of lag will be limited by the tolerance for retaining retired connection IDs, which seems proportional to me.\nNAME I doubt that's less impactful on existing implementations, certainly from an interoperability standpoint. Today endpoints should hold on to incoming CIDs until they are formally retired. Making them leaky can lead to trapped state.
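A minimal sketch of the "leaky" counter described a few comments up (Python; the threshold of 2 * max_connection_id_limit follows that description, while the class, queue, and callback names are assumptions):

```python
# Sketch of the leaky retirement counter described above. Assumptions:
# max_cid_limit is the peer-advertised active_connection_id_limit, and
# the packetizer drains `outstanding` when building packets.

from collections import deque

class LeakyRetirer:
    def __init__(self, max_cid_limit: int):
        self.threshold = 2 * max_cid_limit
        self.unacked = 0            # retired-but-unacked sequence numbers
        self.outstanding = deque()  # RCID frames waiting to be (re)sent

    def retire(self, seq: int):
        if self.unacked < self.threshold:
            self.unacked += 1
            self.outstanding.append(seq)
        # else: leak -- the RCID frame for seq is never sent.

    def on_lost(self, seq: int):
        # Resubmit for retransmission; the counter is unchanged.
        self.outstanding.append(seq)

    def on_acked(self, seq: int, previously_deemed_lost: bool):
        # Late acks (after the frame was deemed lost) change nothing.
        if not previously_deemed_lost:
            self.unacked -= 1
```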
Sending them all, but with a delay, is no added state but solves this problem.\nAfter a lot of back-and-forth, I think that we've all generally converged on .\nStill LG, but I like ekr's suggestion\nLatest text looks good"} {"_id":"q-en-quicwg-base-drafts-1e222950496cda1e408548bad3afaeaec30712892ea4be395cb877eb42384ef6","text":"Behold\nNAME I think this is inches from getting landed\nThe full text of that section is \"TBD.\"\nHPACK covers the following topics: Probing Dynamic Table State; Static Huffman Encoding; Memory Consumption/DoS; Implementation Limits. To what degree do we need to retread this vs. referring to 7541? There are some new areas to cover related to DoS, like recommending implementation timeouts for blocking requests. Soliciting any other topics, and possibly a passionate security-minded volunteer to draft a PR.\nAll looks good (and familiar); some minor nits."} {"_id":"q-en-quicwg-base-drafts-8e08801cd7ff274aef0a386d56f6163c57ae115e34acca04bcce4a81943a8848","text":"This explains what needs to be kept and why. Specifically, you need to keep ranges unless you have other means of ensuring that you don't accept packets from those ranges again. You also need to keep the largest acknowledged so that you can get a packet number from subsequent packets. This also recommends that ACK frames always include the largest acknowledged. That is primarily to ensure that ECN works properly, and even there, you only disable ECN if you get some weird reordering, so it's probably not a big deal if you don't follow this recommendation. The issue is marked design, but the resolution here is basically a restatement of other text. I think we can run this through the design process, but it isn't really worth flagging this in a change log in my opinion.\nNAME you should probably look at this. I have marked the associated issue anyway, but I'd appreciate another review.\nIn , NAME notes that added this \"MUST\": My understanding is that this prevents the largest acknowledged packet from going backwards. We use the monotonic increase of the Largest Acknowledged field in two ways: In ECN validation, this is used to filter out ACK frames that might have arrived out of order, which might have lower counts legitimately. In key updates, where the value of the largest acknowledged is used to drive the change to new keys and to prevent use of old keys after an update. In my view, having ECN validation fail is quite unfortunate, but likely workable. On the other hand, uncertainty about largest acknowledged could introduce instability in the key update process. If an endpoint's conception of what the value is changes, I doubt that we'll see extra key updates (I think that the AEAD would protect us from that), but we might leave open the possibility of interleaving of packets from two different key phases. I would suggest that we retain this \"MUST\".\nFor out-of-order ACK frames, you can use the packet number the ACK arrives in. I realize that we have tried to allow frames to be retransmitted, but I think retransmitting ACK frames without modifying the ACK delay/etc. is so harmful we should probably prohibit that one frame from being retransmitted, since it creates a bunch of edge cases. Or we could let ECN validation fail, but that'd be a bit sad. I'm confused about what the issue with key updates is? The largest acked you've ever seen doesn't change, just the largest acked from the last ACK frame.
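The "keep the largest acknowledged so that you can get a packet number from subsequent packets" point corresponds to the usual truncated packet-number recovery; a sketch along the lines of the sample decoder in the transport draft's appendix (Python):

```python
# Roughly the sample packet-number decoder from the transport draft's
# appendix: the truncated packet number carried in a packet is
# interpreted relative to the largest packet number processed so far.

def decode_packet_number(largest_pn: int, truncated_pn: int,
                         pn_nbits: int) -> int:
    expected_pn = largest_pn + 1
    pn_win = 1 << pn_nbits
    pn_hwin = pn_win // 2
    pn_mask = pn_win - 1
    # Candidate closest to expected_pn within the truncated window.
    candidate_pn = (expected_pn & ~pn_mask) | truncated_pn
    if (candidate_pn <= expected_pn - pn_hwin
            and candidate_pn < (1 << 62) - pn_win):
        return candidate_pn + pn_win
    if candidate_pn > expected_pn + pn_hwin and candidate_pn >= pn_win:
        return candidate_pn - pn_win
    return candidate_pn
```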
When determining the size of the packet number, we say: "The sender MUST use a packet number size able to represent more than twice as large a range as the difference between the largest acknowledged packet and packet number being sent." I'd rather not make this a MUST unless we have a good reason. There are valid use cases that become more complex with this constraint.\nI really want to understand the use cases that you think this might break. Because key updates very much depend on tracking largest acknowledged. The key updates case has a fairly simple example. Say you receive packet 10 in key phase 1. Then 11 in key phase 2. That's valid. But if you ACK those and then only remember up to packet 8, packet 9 can use key phase 2 and you won't know that it is invalid.\nI'd like to separate tracking largest acknowledged from forcing all ACK frames to contain the largest acknowledged and having that value never decrease. Given you need to track largest acknowledged locally (i.e. not only in the in-flight ACK frame), I don't think there are any problems there. I have two cases in mind where the largest acked could validly decrease: 1) A receiver that tracks a limited number of ACK blocks, where the oldest block happens to be the one containing largest acked. 2) If an application was reordering-tolerant (i.e. from URL), then two different senders on a single 'connection' may be sending packets with disparate packet numbers, causing what appears to be very large-scale reordering. In this case, it'd be valuable to discard the older ACK ranges even if they contained the largest acked. Maybe the first is an edge case we don't care about in a use case we aren't optimizing for, and the latter is solvable with careful issuing of packet number ranges. Even so, I'm not sure what the motivation is for this change? I guess the ECN case could be, but using largest acked as a proxy for retransmitted ACKs is a very bad design in my opinion.\nI don't think we really need to cater for retransmitted ACK frames in any way. What I'm concerned about is reordering. Yes, using the increase of largest acknowledged as a proxy for strict ordering is a little weird, but the assumption is that frame handling is distinct from packet handling and packet numbers are not available when frames are handled. (I think that's the principle we agreed on; that's why it's OK to use packet numbers in key updates, for example, something that I can confirm is borne out by implementation experience.) The case you describe here has the appearance of exactly that type of reordering if the two senders also generate their own ACK frames (though this seems very strange). But I think that the consequences of that sort of design are not our responsibility to deal with.\nI don't recall us agreeing that packet numbers weren't available when frames are handled; can you clarify why that property is important?\nI agree with NAME on the principle of not requiring the enclosing packet's packet number during frame handling, and we use sequence numbers within frames to help with reordering. This was at least true in an earlier version of NCID frames, and we had discussed this principle then. This is also why the ACK_FREQUENCY frame in URL includes a sequence number. That said, I don't understand the issue with key updates here. Largest acked so far is a connection-level state variable that can be used for key updates. I suspect that I'm not understanding the example scenario. NAME can you elaborate on your key update example?
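A sketch of how a sender might satisfy the rule quoted above, in the spirit of the sample algorithm in the transport draft's appendix; the function name and structure are illustrative:

```python
# Illustrative packet number length selection, assuming the rule quoted
# above: the encoding must cover more than twice the distance between
# the packet number being sent and the largest acknowledged packet.

def packet_number_length(full_pn, largest_acked):
    # If no ACK has been received, cover the full distance from zero.
    num_unacked = full_pn + 1 if largest_acked is None else full_pn - largest_acked
    min_bits = num_unacked.bit_length() + 1   # "+ 1" doubles the range
    return (min_bits + 7) // 8                # bytes on the wire

assert packet_number_length(0xac5c02, 0xabe8b3) == 2
assert packet_number_length(0xace8fe, 0xabe8b3) == 3
```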
NAME Is your example motivated by a receiver that does not remember the largest acked or one that does not wish to ack anything but the packet that was just received?\nI was under the impression that the argument was that largest acknowledged was not being tracked, but Ian made it clear that the concern was about not signaling it. That removes key updates as a concern. If it remains unclear, then I can't really do better.\nNAME my example was motivated by either case, but primarily not wanting to ACK anything besides what was just received.\nPerhaps it's a naive model, but I had envisioned an implementation remembering a chain of ranges of packet numbers it had received or not. The implementation would save space by forgetting the earliest packet numbers when the chain got too long, and would construct an ACK frame out of the latest set of these ranges it had room for. In that mental model, at least, there's not really room for the idea that you might drop the largest-acked packet number from your memory because it's too old -- it's always at the most-recent end of your chain, even if you currently happen to be receiving packets that fill in the middle for some implementation-specific reason. (Implementation-specific to the peer, which you obviously can't know about or code for!)\nThat matches my model for this. It is also likely the case that the next packet you receive is largest+1, so there is no real advantage to forgetting the largest acknowledged. Not sending largest acknowledged is slightly different though. I can understand how an implementation might choose to send only ranges that have recently changed. My contention is that this is not a good choice and we should recommend against that.\nNAME to produce some text here that separates retention policy (MUST) from signaling policy (SHOULD)."} {"_id":"q-en-quicwg-base-drafts-db3ccae93d2dfe3cc9431e962c1673f9b53f46590f5e720fa4f29cab954781f7","text":"We don't ever really say that QUIC depends on TLS providing the properties that we claim TLS provides. So it's worth pointing out that exposure. This is more so because some of the properties we claim are dependent on these properties.\n(Cleaned up duplicate instances of the same comment.)\nAs per comment URL I'll note that there is MITM protection in QUIC, but it is not immediate, which is roughly why we have the concept of handshake confirmed.\nThe options for triage are close it, mark as editorial, or mark as design. This has a needs-discussion, and I'm seeing a lack of any discussion. So you might see where my thought process is going... NAME this relates to your PR on the QUIC threat model. Are you willing to provide some comment that helps us triage the issue?\nI think that we can take this as editorial, as it merely requires that we capture already agreed principles (we don't protect against an adversary who can inject packets). NAME assistance would be useful, but I can produce text if nothing is forthcoming.\nI can always provide comments, no guarantees on how much they'll help triage. IIRC, this is an issue filed from a comment on the threat model PR that was merged several months ago, to potentially increase the scope of what that covered. I think that the original threat model that NAME and I put together, covering the handshake and migration respectively, is intended to be the start of that section -- as additional items are discovered that could use coverage there to help clarify QUIC's stance on whatever issue, they should totally be added.
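A sketch of the chain-of-ranges receiver model described above, where eviction only ever removes the oldest ranges, so the range containing the largest received packet number survives and the signaled largest acknowledged can never go backwards. Names are invented for illustration:

```python
# Sketch of the receiver model discussed above: remember received packet
# numbers as ranges, evict only the *oldest* ranges under memory pressure,
# and always include the largest range when building an ACK frame.

class AckTracker:
    def __init__(self, max_ranges=32):
        self.ranges = []          # sorted, disjoint (lo, hi) pairs
        self.max_ranges = max_ranges

    def on_packet(self, pn):
        self.ranges.append((pn, pn))
        self.ranges.sort()
        merged = []
        for lo, hi in self.ranges:
            if merged and lo <= merged[-1][1] + 1:
                merged[-1] = (merged[-1][0], max(merged[-1][1], hi))
            else:
                merged.append((lo, hi))
        # Forget the earliest ranges first; the newest range survives,
        # so the largest acknowledged never goes backwards.
        self.ranges = merged[-self.max_ranges:]

    def build_ack(self):
        return list(reversed(self.ranges))   # largest acknowledged first
```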
I agree with NAME that adding such items seems editorial, as they aren't introducing any new requirements and are only capturing current realities as specified elsewhere. (Of course, as that text is written, it seems logical that putting it all in words next to each other may highlight issues, and if the WG decides that we don't like how the current spec results in a particular security stance, we can open a design issue to change normative text elsewhere.) So, I'd probably consider this editorial, and ask NAME to potentially contribute some text or clarify what he's looking to have covered by the original comment -- once that's clear, happy to help get a PR up if needed (or NAME text is always good too!).\nSince this concerns the handshake, I'm not too confident in contributing text but I'd be happy to review and discuss. I believe it is correct that the issue is covered in existing text elsewhere but as NAME says, it makes sense to cover it in one place. The risk is that the reader is not aware how important it is to reach a certain point in the connection before trusting data, especially those seeking to create derived versions. From various discussions I'm not sure the importance is broadly understood. Additionally, I'm not entirely convinced that the current text fully captures the problem - not that we necessarily can - for example various stateless transmissions / redirects / token issues - might assume a level of privacy that doesn't exist in all cases, thus requiring stronger integrity elsewhere such as in CIDs, tokens etc.\nThanks folks. It seems we have a path forward, so I'm removing the needs-discussion label. The editorial process will naturally generate some back and forth but that is the same for all editorial issues."} {"_id":"q-en-quicwg-base-drafts-779b341bcde9dcda6ba2b1f9ae2311f699a32baa5667b3cb88b8c9252783f9b7","text":"Clarify reserved values for stream types, settings identifiers, frame types, and error codes. 0x1f * N + 0x21 for the value of N = -1 is 0x02. Literal interpretation of the current text includes this as a reserved value. The parenthetical examples given in Section 11.2 make it clear that this was not the intention, rendering this PR editorial. The main motivation of this PR is not that the current text is inconsistent, but that parenthetical examples from 11.2 are necessary to correctly interpret the definitions in sections 6.2.3, 7.2.4.1, 7.2.8, and 8.1. Alternative wordings could be: "0x1f * N + 0x21 for N = 0, 1, 2, ..." "0x1f * N + 0x02 for positive integer values of N" "0x1f * N + 0x02 for N = 1, 2, 3, ..." "0x21, 0x21 + 0x1f, 0x21 + 2 * 0x1f, ..." "a value of at least 0x21 with a remainder of 0x02 modulo 0x1f" "a value of at least 0x21 that is congruent to 0x21 modulo 0x1f" none of which is better than what this PR proposes."} {"_id":"q-en-quicwg-base-drafts-893e9ddeda21a96f12134ef2bdbb80ec4b47c361f77bfc0dff538fff495846d0","text":"This defines a limit on the number of packets that can fail authentication before you have to use new keys. There is a big hole here in that AES-CCM (that is, the AEAD based on CBC-MAC) is currently permitted, but we have no analysis to support either the confidentiality limits in TLS 1.3 or the integrity limits in this document. It is probably OK, but that is not the standard we apply here. So this might have to remain open until we get some sort of resolution on that issue.
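As a quick arithmetic check on the reserved-values wording a record above (a throwaway sketch, not from any implementation):

```python
# Reserved values of the form 0x1f * N + 0x21 for N >= 0, matching the
# parenthetical examples cited in the PR discussion above.
def is_reserved(value):
    return value >= 0x21 and (value - 0x21) % 0x1f == 0

assert is_reserved(0x21) and is_reserved(0x40) and not is_reserved(0x02)
```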
My initial opinion is to cut CCM from the draft until/unless an analysis is produced.\nIn the interest of transparency, I have added one more commit here. When I did the same for DTLS, ekr observed that if you have updated, you might safely stop trying to accept packets with the old keys rather than killing the connection. That results in loss for any packets that genuinely did want to use the exhausted keys, but that is probably less disruptive than having the connection drop. Of course, you can't always update, so closure is still likely necessary in some cases.\nFor what it's worth, Martin, Felix, and I put together a document which might help users do this themselves: URL\nNAME Could you use that to come up with 2^16 limits and put them in the doc? I'm really wary of letting implementors roll their own\nSure! Can you please file an issue with that request? Noting anything else you'd like spelled out in detail would be helpful, too.\ntl;dr We need to recommend limits on the number of failed decryptions. One of the side discussions at concentrated on the quality of the analysis behind the existing limits and the applicability of that analysis to QUIC. Felix Günther, along with Marc Fischlin, Christian Janson, and Kenny Paterson, has done a little analysis, based on prior work and the methodology we used to set limits in TLS. There are two relevant limits in this analysis: The confidentiality bound, which is usually expressed in terms of the advantage an attacker gains in being able to break confidentiality of records/packets as they get more data from valid packets. For instance, in AES-GCM, the analysis shows that after seeing 2^24.5 packets (almost 24 million) the attacker has a 2^-60 better chance than pure chance of winning the IND-CPA game. The forgery bound, which is again expressed as a probability of an attacker successfully creating a forged record/packet. Thanks to the analysis done previously, we know that AES-GCM and ChaCha20+Poly1305 have a forgery chance of 2^-57 after 2^60 and 2^36 forgery attempts respectively. The current text just points to TLS 1.3, which recommends the use of key updates to protect confidentiality. But TLS has a forgery tolerance of 0: a single failed forgery attempt causes the connection to break. QUIC (and DTLS) allow for multiple forgeries to be ignored, so we need to factor that in. As discussed, if we want to maintain security, we need to consider both confidentiality and integrity. For that we need two independent limits: The number of packets that are successfully encrypted and decrypted. The number of packets that are unsuccessfully decrypted. We have the former already. The latter is what this issue tracks. Once the limit is at risk of being exceeded, updating keys will force the attacker to start over. Based on Felix's recommendation I am going to suggest that we limit forgery attempts to 2^36. I have confirmed this with a close reading of the paper and concur. There is a significant difference here between AES-GCM and ChaCha20+Poly1305 but we believe that setting a single limit is the best approach. I will create a pull request to document this approach shortly. We could split the recommendation and allow a larger limit for AES-GCM, but I don't believe that 2^36 (~68 billion) is a significant barrier. For reference, if you were able to saturate a 10Gbps link with 50 byte IP packets that contained plausible QUIC packets, and the recipient didn't treat this as DoS traffic, it would take 5 minutes.
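A sketch of the bookkeeping the two limits imply for an endpoint. The constants below are placeholders for whatever per-AEAD values the draft ends up recommending, and the error name matches the code discussed for this purpose; everything else is invented:

```python
# Illustrative AEAD usage-limit tracking. The specific limits below are
# placeholders for this sketch, not values any spec recommends.
CONFIDENTIALITY_LIMIT = 2 ** 23   # packets protected under one key
INTEGRITY_LIMIT = 2 ** 36        # failed decryptions tolerated

class KeyUsage:
    def __init__(self):
        self.encrypted = 0
        self.failed_decryptions = 0

    def on_packet_protected(self, initiate_key_update, close_connection):
        self.encrypted += 1
        if self.encrypted >= CONFIDENTIALITY_LIMIT:
            # Update before the limit is exceeded, or give up.
            if not initiate_key_update():
                close_connection("AEAD_LIMIT_REACHED")

    def on_decryption_failure(self, close_connection):
        self.failed_decryptions += 1
        if self.failed_decryptions >= INTEGRITY_LIMIT:
            close_connection("AEAD_LIMIT_REACHED")
```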
If we assume packet rates that are more likely representative of real deployments, then the limit goes up to 10s of hours. I would point out that that volume of unproductive traffic is probably a good reason to use something other than a key update to deal with the sender. A reminder that the underlying analysis assumes that packets are all 2^14 bytes in size, which is the record size in TLS. This is ~10 times larger than the typical MTU on the Internet and many QUIC packets might be smaller than that, so - at least for AES-GCM - recommending a 2^36 limit is more conservative than necessary. However, if we were to allow for different limits, we'd have to address the question of packet size, as a 2^16 byte packet will be used by some deployments. For ChaCha20+Poly1305, the only value that matters is the number of packets, so we avoid that question. Quoting from Felix's email for reference (with permission):\nNAME Can I request that you review this analysis for correctness? I know that you had reservations about the claims here.\nAnother (simpler?) option could be to just update keys much more often. I've done performance tests with msquic on the effect of updating the key ~once per round trip. There was no noticeable effect on performance. In fact, more often than not, the key update perf tests happen to perform slightly better (within noise tolerance) than without key update. I'm not suggesting we advocate once a round trip or anything near that; more like every 2^20 packets (or 2^30 bytes?) sent or received (including failed decryptions). As I understand it, this is an ultra-conservative number as far as protecting from any kind of attack, and should be free as far as any performance impact on the connection. This also has the added benefit of exercising the key update scenario more often, which should hopefully improve interoperability of the feature.\nI support a change like this, but I am not qualified to say what the thresholds should be. There certainly isn't a reason to push it to very high limits. This change would not break interoperability, so it's a good one. Banning CCM might break interoperability for someone, but I'm not too concerned, particularly if we can update all the browsers to not offer it. A somewhat bigger concern is the lack of key update support in a lot of implementations: I read 9 of 15 servers as supporting it.\n(Tentative) good news on CCM: Kenny identified a security analysis paper and it seems relatively simple to go from that to numbers. I am not personally concerned about support for key update. 9/15 isn't 15, but that doesn't bother me. It turns out to be a little fiddly to implement, so if people decide to drop connections that hit these limits instead, the effect isn't that bad. Of course, if you want to follow Nick's suggestion, that works too (I remember Dan Bernstein seriously suggesting a cipher that updated every record, back when I first proposed this scheme for TLS).\nIf we're looking for a new bound to initiate a key update, there's precedent in the SSH RFC to rekey after every 1 GB of data transferred. See\nI should point out that my "9 of 15" figure is based on the interop matrix, where the client initiates an early update and the server responds. That is a different thing from a server initiating an update when required by the spec, which fewer than 9 servers might do.\nBased on the analysis by Jonsson, I have calculated the bounds on the number of forgeries and encryptions for CCM.
The numbers aren't great, but that probably doesn't matter much for the purposes to which CCM is generally put. (I'm using this notation here because it is easier to type than some of what is being used in other papers. For reference, q is the number of genuine encryptions, l is the length of each in 16 byte blocks, v is the number of forgeries, n is the key length in bits, t is the tag length in bits.) Jonsson puts a bound on the advantage an attacker has over a generic PRF. If we assume the same numbers as in the AEBounds paper for record/packet size, l = 2^10 (this is for consistency; most QUIC packets will be much smaller, in the order of 2^7, but we might also go as high as 2^12, this being dictated by the MTU). For AES, n = 128. To keep the advantage an attacker has to 2^-60 (to match the analysis in TLS), we therefore need to keep q to below 2^24.5. Somewhat unsurprisingly, that matches the numbers we have for AES-GCM. Jonsson also puts a bound on the advantage for an attacker over a generic PRF for forgeries; it has two terms. As the first term is negligible even for large v (up to 2^64), we consider the second term alone and aim for a bound on the advantage of 2^-57 to match the analysis for other ciphers. That leaves us with the second term to bound. As q is already established to be 2^24.5, we can say that v should be limited to 2^25 (or a little less, if you want to be precise). (As a side note, for t=64, as in AEAD_AES_128_CCM_8, the first term becomes relevant and our security bound limits the number of forgeries to 2^7, which is probably a bit limiting in practice. That's good justification for not enabling AEAD_AES_128_CCM_8 by default; though different applications might use a different target bound than 2^-57.) I'm not a cryptographer. These papers all read like chicken-chicken-chicken. My calculations are not infallible either.\nThe shape of the bound seems plausible but I'll start a thread with the cryptographers to check everything\nI reproduced Martin's analysis of CCM based on of the bounds by . (Myself not being an expert in producing such bounds, I found the second-level confirmation by Rogaway to be helpful.) I essentially agree with the numbers, except that -- when following the same metric as the other analyses -- the advantage over an ideal PRP (instead of PRF) should be considered. This simply means losing a 1/2 factor in the bound. Confidentiality bound: this yields a slightly larger advantage, and I'd hence say it's fine to keep TLS 1.3's recommendation on sending max. full-sized records. Integrity bound: likewise, the resulting advantage matches the other analyses. Most importantly, I concur with Martin that AEAD_AES_128_CCM_8 would need either much lower bounds or a different risk assessment: applying the same limit would bring the advantage incurred through the bound up considerably. Would it still be the best approach to define the same anti-forgery limit across all cipher suites, even if this means going down to the lowest common bound? Of course, I'm happy to discuss more with NAME !\nThanks for checking this NAME I was a little concerned about the PRP/PRF split, so I'm glad to have you correct me. I ultimately went with per-AEAD recommendations, and a lower limit for CCM. That's mainly to establish some sort of uniformity around 2^-60 and 2^-57. Even if those are basically arbitrary choices, at least they are defensible (I think!) and using them uniformly establishes the right expectations about what the standard is. I'm comfortable with specifying different limits for each AEAD in specifications. In practice, however, I expect a far lower tolerance for forgery attempts.
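For reference, my reading of the bound shapes being discussed. This is a reconstruction from the thread's own definitions of q, l, v, n, and t (with the factor-2 correction for CCM's two passes that is noted further down); the exact constants should be checked against Jonsson's paper:

```latex
% Reconstructed shapes of the CCM bounds discussed above; not verbatim
% from the paper. The 2l factor accounts for CCM's two passes.
\begin{align*}
\mathrm{Adv}^{\mathrm{conf}} &\lesssim \frac{(2lq)^2}{2^{n}}
  \le 2^{-60} \;\Rightarrow\; q \le 2^{23} \text{ for } l = 2^{10},\ n = 128,\\
\mathrm{Adv}^{\mathrm{int}} &\lesssim \frac{v}{2^{t}} + \frac{\bigl(2l(v+q)\bigr)^2}{2^{n}}
  \le 2^{-57} \;\Rightarrow\; v + q \le 2^{24.5} \text{ for } t = 128.
\end{align*}
```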
Assuming that these numbers work out, a review of the pull request would be greatly appreciated. If there are other relevant papers, I'm always happy to pull those in as well.\nI realize that I didn't respond to NAME That advice applies to the limits that TLS specifies already; this issue is primarily about responding to forgeries. That said, 1G is pretty good advice. However, what we've seen from this analysis is that 1G results in a lower safety margin than TLS aims for. But it's close enough that if the AEAD is as good as the set we have here, you are probably OK. However, this does depend a lot on the specific AEAD. I'd be uncomfortable using so simple a guide in the general case.\nNAME points out an error in my calculation. I had based my calculations on a direct mapping of the length variable in the CCM analysis to the message length in our calculations. That is, I took it to be the number of blocks in the message. But the analysis by Jonsson defines it differently: the definition involves the CCM function Beta, so it really reduces to 2 times the length of the message (in blocks), plus 1 (to account for the additional encryption). We can ignore the extra 1 as that is absorbed by the tag length. I'm updating the numbers in the PR. The result is another halving of the number of packets. NAME NAME it's pretty clear that I'm out of my depth here, so your input would be highly valued.\nGood catch, yes, the double passing over the message requires a factor 2 here. (The AD a may also be up to 3 blocks if I understand the header format correctly, but I guess that's negligible compared to the conservative 2^10 message size.)\nGood point about the AD. The AD - at least for QUIC - is always sent, so that fits in the same space. We don't necessarily need to double count the tiny number of blocks, but as you say, counting that only once yields a negligible difference. If we had a large AD that wasn't also transmitted, that would be different.\nAgreed. To be precise: if I'm not mistaken the AD isn't counted at all right now (l accounts for the max. payload blocks), but of course the difference is still negligible as long as the AD is small.\nThanks for the edits!"} {"_id":"q-en-quicwg-base-drafts-c5ce1c742b7429aa9c38e9b9478ae1f377cc346cc12805053a445e6d57124e42","text":"Any chance we could also update the initial salt?\nI was saving that for the final version, by popular demand. If you have a reason to change, then I can do that (I'll also want to update the keys for Retry in that case).\nI agree with NAME Let's leave the salt update to the final version.\nWith Google's QUIC deployment, we're already experiencing version ossification due to middleboxes inspecting initial packets - some of our traffic is stuck on older versions of GoogleQUIC because of this. Changing the salts every time we change the version number will help remind everyone that this is not an ossified aspect of the protocol. The cost to implementors is pretty much zero. NAME out of curiosity, why the pushback?\nWhat NAME says makes sense to me, so I would be in favor of shipping draft-29 with new salts.\nI updated the salts and the test vectors in quic-go, and I can confirm that the values match.\nThe draft describes that a client SHOULD NOT reuse a token in different connections (URL). However, it is missing the argument that the server SHOULD NOT construct the same token multiple times.
This argument addresses the risks of client tracking via tokens if a deterministic approach is used to construct them.\nMaybe ?\nNAME do you mean already covers this?\nYes, my view is that and the corresponding mailing list thread (URL) cover this issue. They discuss unlinkability between the tokens issued by a server.\nNAME do you agree that this is (mostly) covered by ? If yes, can we close this issue and resolve it as part of ?\nNo, does not cover this particular point. makes the point that a network observer should not be able to derive information such as a timestamp from observed tokens. However, this issue says that the construction of the token should not be deterministic. To illustrate this problem, assume that the token is more or less the hash of the client's source address. Thus, the client receives the same token upon each repeat connection, if its source address is identical. As a result, even if the client uses each token only a single time, the connections established with these tokens can be correlated to each other by a network observer.\nNAME could be modified to cover your issue? could you comment there?\nNAME Ok, I will make proposals within to cover this issue.\nNAME the automation failed here - this should not be moved to "Text Incorporated", it should just be closed.\nAs stated in URL, my understanding is that a token sent in a NEW_TOKEN frame SHOULD be encrypted. The only exception would be the case of the token carrying only an opaque and randomly-generated key identifier in a stateful design (i.e. a server using a key-value store to retain the information of previous connections). Otherwise, an observer can use the information contained in the token sent in a resumed connection to correlate that connection to the previous connection that issued the token. For example, if a server includes the generation time of the token in plaintext, an observer can extract that information found in an Initial packet to nail down the connection that issued the token. While I think we might call this an editorial issue, I think we should clarify this as a normative requirement (e.g., SHOULD). Hence bringing the discussion to the mailing list as well. Kazuho Oku"} {"_id":"q-en-quicwg-base-drafts-5dbed44c140a9a58effd88fb623334977ba5a4969a9e86e08c2ebce789192771","text":"When we prohibited the use of application-layer CONNECTION_CLOSE in these packets, we should have updated the text in the description of the packets also.\nI think the spec is ambiguous when it says in section 17.2 (among other places): >Handshake packets MAY contain CONNECTION_CLOSE frames. But CONNECTION_CLOSE can mean 0x1c or 0x1d. Are both permitted during the handshake?\nNAME In this case, the intent is to allow both. Unless explicitly specified, I would read the text to mean both types of CONNECTION_CLOSE frames.\nOkay, if more people think that way, we can close with no action.\nSo on the surface, there is no inconsistency: the two sections permit CONNECTION_CLOSE. But the question about the different types of CONNECTION_CLOSE is more relevant than NAME response might imply. We currently require that CONNECTION_CLOSE of type 0x1d (application close) is not sent in Initial or Handshake packets. So that's a drafting error, and we might reasonably take this as an editorial issue. However, if the question is about the reaction to receiving an application CONNECTION_CLOSE in one of those packets, we might consider a different posture.
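Returning to the token-linkability thread above: a sketch of a non-deterministic token construction, using AEAD encryption with a random nonce so that repeat connections from the same address yield unlinkable tokens. The layout and names are invented for illustration; Python with the `cryptography` package:

```python
# Illustrative NEW_TOKEN construction: identical inputs yield different
# tokens because of the random nonce, so an observer cannot correlate
# a resumed connection with the connection that issued the token.
import os, time
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def make_token(key, client_ip):
    nonce = os.urandom(12)
    body = client_ip.encode() + int(time.time()).to_bytes(8, "big")
    return nonce + AESGCM(key).encrypt(nonce, body, b"")

key = AESGCM.generate_key(bit_length=128)
assert make_token(key, "192.0.2.1") != make_token(key, "192.0.2.1")
```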
If you get an application CONNECTION_CLOSE in an Initial packet that is otherwise valid, you probably want to drop the connection. And you probably don't want to start sending CONNECTION_CLOSE in response every time, because that leads to madness. But this is a general problem: if you detect an error and that error keeps being repeated, it probably isn't a good idea to generate a CONNECTION_CLOSE in response to every infraction.\n(removing my comment as NAME posted what needs to be said a couple of seconds earlier).\nI'm going to re-close the can of worms I half-opened in my earlier comment and take this as editorial. From memory, the larger question of how to deal with junk is something we've discussed several times. If anyone wants to take that up, I'm happy to do that, but let's at least correct the obvious error.\nIt does make sense that we only allow 0x1c during the handshake.\nNAME : Apologies, I misspoke -- I was speaking to interpretation of text, rather than the actual question itself."} {"_id":"q-en-quicwg-base-drafts-c55e7054776b0cfa5f67cbcf218d5a9b3a92cd5acb347b36c807e486ef1c559b","text":"Editorial nits when reviewing the invariant text. These nits address readability issues, and do not intend to change the meaning of the text.\nIs there an associated issue?\nThanks Gorry, these are all quite helpful.\n1) I understand the acronyms DCID and SCID, but these aren't really expanded on first use in sect 5.1.\n2) There's mention of "the high bit", but there doesn't seem to be any statement saying the bit order. Is it worth declaring network byte/bit order earlier in section 4? This also clarifies the placement of fields within a byte, which was basically assumed knowledge otherwise.\n3) Personally, I don't think: "The length of the Destination Connection ID is not specified" is a great choice of words. Surely, this needs to be specified somewhere for a protocol instance to use it? This reads better as something like: The length of the Destination Connection ID is not encoded in packets with a short header and is not constrained by this specification.\n4) The brackets here seem to point to connection IDs in general, not the DCID "(see Section 5.3)". Is it worth writing that as "Connection IDs are described in Section 5", rather than use brackets? The same is true for versions.\n5) In section 5.3: "the wrong endpoint" is taken as the wrong transport endpoint, but some of our IP and subIP colleagues would use the term endpoint for other uses, so one insertion of "transport" would perhaps help?\n6) In the appendix A, this: "QUIC forbids acknowledgments of packets" - still has me wondering what is actually intended to be understood, because the list of bullets is not guaranteed ... : /The last packet before a long period of quiescence might be assumed to contain an acknowledgment (it should be assumed that QUIC could allow acknowledgments of packets that only contain ACK frames)/ I'm not sure what to say here; I think the combination of not... forbids threw me off the meaning. Changes proposed in a PR.\nThanks for changing SCID/DCID to "Source Connection ID" and "Destination Connection ID", that seems like a good change."} {"_id":"q-en-quicwg-base-drafts-d3abad4ea4d4e6a17a4e3521b523b7b281b524f231c3ae9c8096bb3d8523639f","text":"Thanks to Lucas for finding the study. Note that I found at least one middlebox with a default timeout of 30s; that was not a home gateway included in this study.\nThe draft says that "experience shows" you should send packets every 15-30 seconds, RFC4787 notwithstanding.
Ideally, we would have a source that we could reference for this "experience."\nMaybe see URL\nI'm going to say that while the 15s floor is a good requirement, I don't want to cite RFC 8085 for this gem, which I think is straight-up misinformation: "NATs require a state timeout of 2 minutes or longer [RFC4787]." We do cite RFC 8085 elsewhere and that's fine, but the fact here is that the IETF has requested that NATs provide a state timeout of 2 minutes. That does not mean that they do.\nI have seen anecdata as well showing 30 seconds is an important threshold. I think the ref we now have is good."} {"_id":"q-en-quicwg-base-drafts-0f8ff62f22cbf1e4272aa9378a65bcbf3b7e914c439237b08dee8014d7fb0001","text":"This moves text from the section on 0-RTT to the section on continuing after Retry. Though 0-RTT is the only thing that is seriously affected in the current design, Initial packets might also be affected if we ever wanted to protect those with anything other than a static key.\nThe draft says that after receiving a Retry, clients MUST NOT reset the packet number for any packet number space, and references 17.2.3 for more info. However, 17.2.3 only explains why this is the case for 0-RTT packets. We should either narrow the prohibition or expand the explanation.\nOr maybe move the explanation that currently exists in 17.2.3 to 17.2.5.3 and use it as an example (to avoid it being read as the only reason).\nNAME does NAME editorial change address your issue?\nYes, this is fine."} {"_id":"q-en-quicwg-base-drafts-bfd3df17e81a128d19612a3151c67988747198f2b9f2116a2b8ceb90b5ff4d95","text":"I'd always heard this as "Authenticated Encryption with Additional Data," so it was -tls that sounded funny. But looking at 5116, it's actually "Associated" data and -tls is correct. Fixing in -transport.\nI read this as referring to Authenticated at first."} {"_id":"q-en-quicwg-base-drafts-e9fe73e034bfb871c42b9693a8fa7960cb3ec8fc3629a55430a684abc759311b","text":"This does a little shuffling to move some of the context-setting up. It then adds some more meat to the introductory section.\nThanks for the review Jana. I've taken most of this, with small tweaks only. Two things I didn't take: Interleaving explanation with references to sections sounded appealing, but it turns out to be both harder to read as the links break up the flow, and harder to scan for links (the structure section serves as an abbreviated and accessible ToC). I didn't move the connection piece below streams as I think that - at this level - talking about the connection is important context. I've moved the picture to the section, which didn't have that simple diagram (just some more complex ones), and I think it benefits from it.\nWhen I was reading quic-transport I found it very difficult to deal with the forward references. Streams are presented before connections, the overall state machine isn't introduced up front, packets come long after frames, and both are long before the wire formats are described. It might be worth revisiting the section ordering.\nAt some point, transport was hugely restructured, and I believe many of those changes were good, but some didn't work out that well. I find streams coming before connections particularly odd.
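For the keep-alive thread at the top of this exchange: a sketch of how an endpoint might schedule PINGs against NAT binding timeouts. The 15-second floor is the figure discussed above; the rest is invented for illustration:

```python
# Illustrative keep-alive scheduling. Sends a PING when the connection
# has been quiet long enough to risk expiring a NAT binding, while
# staying under the negotiated idle timeout.
KEEPALIVE_FLOOR = 15.0   # seconds; middleboxes have been seen at 30s

def next_keepalive_deadline(last_send_time, idle_timeout):
    interval = max(KEEPALIVE_FLOOR, min(30.0, idle_timeout / 2))
    return last_send_time + interval

# e.g., with a 60s idle timeout, a PING fires 30s after the last packet.
assert next_keepalive_deadline(100.0, 60.0) == 130.0
```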
Within the high-level concepts, I also find it strange that connections don't come before streams. However, the \"connections\" group also includes things that really don't deserve pride-of-place as the first couple things to be discussed, either.\nI think that the main concept you need is that QUIC has a connection that is shared state between two endpoints. The later sections about connections go into more detail than you need to understand the stream sections. But I note that we don't properly provide that basic knowledge in our introductory parts, which are quite lean. Leanness is good, but we might have overdone it slightly in this regard.\nHaving worked my way through the spec, my suggested resolution to this is that Section 5.3 needs to be earlier, perhaps much earlier. At the least, it should open Section 5; some or all of it might belong in Section 1.\nThanks for working through this!"} {"_id":"q-en-quicwg-base-drafts-3aa95478e0b778b217085ab4e4dc734a4d7a005ea3c908eb4dab1e64051838a2","text":"I see a couple of places where we require that values have high entropy. For instance of the packet header. The remainder of the first byte and an arbitrary number of bytes following it that are set to unpredictable values. The last 16 bytes of the datagram contain a Stateless Reset Token. and to ensure that it is easier to receive the packet than it is to guess the value correctly. In general, this means that things contain sufficient entropy (so, for instance, you might have a value with some fixed values and some high entropy values). In any case, I think the first of these should be \"indistinguishable from random\" and the second should be \"containing at least 64 bits worth of entropy\"\nLGTM"} {"_id":"q-en-quicwg-base-drafts-9b0ae3a6c445b6c16701623b50bb7cabd751468d1f859ec230a1ea2bf8c8e9e4","text":"This was never explicit for the client as it is usually not possible. in a way that perhaps was not anticipated.\nis complete. Consequently, a server might expect 0-RTT packets to start with a However, this is not necessarily strictly true, because you might have the ACKs in 0.5RTT. Now you might say that the TLS stack reports the handshake as complete as soon as it receives Finished but that's an implementation detail\nI think that this is true. The server can send before then, but we require that the client not install 1-RTT keys until it considers the handshake complete.\nThat's not entirely true. It's the server that needs to hold back keys (namely, the 1-RTT read key) until the handshake completes. The client can use keys as soon as they become available.\nI think NAME is correct in sense that the following three events happen at the same moment on the client side: handshake is declared complete TLS stack provides the 1-RTT read and write keys TLS stack provides ClientFinished\nMartin is correct. There are two different forms of handshake completion: complete when 1rtt can be sent and received, confirmed when the peer is known to be complete. The client cannot receive 0rtt acks before installing 1rtt keys, hence before hanshake is complete.\na nit, but LGTM"} {"_id":"q-en-quicwg-base-drafts-213d8a56d76b11bc07300610ef1ca02bb88662432eed518b74e39850cc5c6b2f","text":"Those packets that are not authenticated can be discarded, but only if processing them has no permanent effect on existing state. This attempts to explain that.\nS 5.2. says: Invalid packets without packet protection, such as Initial, Retry, or Version Negotiation, MAY be discarded. 
An endpoint MUST generate a connection error if it commits changes to state before discovering an error. I'm having trouble parsing this.\nLGTM"} {"_id":"q-en-quicwg-base-drafts-6a7c226a2a349e8725a67424f5c4c4ec81f5e702fe7c404240ac52dd4ad14d8f","text":"This is a small tweak of Gorry's suggestion.\nLooks good to me.\nI promised to raise an issue relating to this text: "It can also improve connection throughput on severely asymmetric links; see Section 3 of [RFC3449]." This partly addresses the case, but it does not yet note the implications of the return path traffic on congestion of the return path. I think this is a useful addition, because it helps alert the reader to the tradeoff ... which can be quite important for half-duplex/shared radio, etc., where the spectrum consumed sending an ACK can even outweigh the cost of sending a data packet (because they use a different PHY design). I suggest: "It can improve connection throughput using severely asymmetric links and can also reduce the volume of acknowledgment traffic using return path capacity; see Section 3 of [RFC3449].""} {"_id":"q-en-quicwg-base-drafts-8ab194b7a5c73b128004a98751095379c72b24838acc49fd758ab6d4856af565","text":"PTO packets always have to be ACK-eliciting, not just during the handshake, per existing text: "When a PTO timer expires, a sender MUST send at least one ack-eliciting packet in the packet number space as a probe,..." Also removed an "unless" clause, because if there's nothing to send, you don't arm the PTO timer."} {"_id":"q-en-quicwg-base-drafts-2625890c489b20d5fc6e6c0d87ee8a536a4d118835cf7f46ee488cc5954733ab","text":"I believe this covers the discussion on the issue. It also introduces some MUSTs which should've always been there. I don't have a strong opinion on the use of RFC 2119 language here, and I'm happy to remove all of that language if folks think that's better. I would like not to bikeshed this, but I would also like to be on a beach right now, enjoying a lovely drink and watching the sunset.\nNAME wrote the following in his WGLC review (): Splitting this off from as a potential design issue.\nThis seems like a reasonable thing to do.\nI'd support a can or a MAY, as an example of when you might reset min_rtt, but I don't think we need a SHOULD here.\nmin_rtt is currently only used for filtering unexpectedly large ack_delays reported by the peer. If the path RTT truly inflates then the peer will be able to report much larger ack_delay values and get a much lower RTT sample accepted. But I don't think this is a security problem in practice. The only other consideration is that future QUIC implementations may reuse this computed value for other purposes like delay-based congestion control. It's hard to make a judgement call on how important it is to reset min_rtt. I guess I'd be happy with a SHOULD but I can live with a MAY.\nJana made a good argument that persistent congestion could be declared due to an overly small RTT, so resetting min_rtt could avoid an endpoint repeatedly declaring persistent congestion. That being said, the RTT samples also need to be too low to spuriously declare persistent congestion, so this is increasingly sounding like an attack scenario, rather than having a poor estimate of min_rtt. I think a SHOULD is Ok now, but I'd like to state the best rationale we have for why it's a SHOULD and neither a MUST nor a MAY.\nI think there's one tricky point in resetting the min_rtt after handshake confirmation. That is, all the packets can be acked with delays.
IIRC, one of the reasons we were happy with using min_rtt for capping latest_rtt is that Initial and Handshake packets do not have ack_delays. If we are to suggest that an endpoint MAY reset min_rtt when persistent congestion is declared, I think we should consider the risk of not having a good estimate of min_rtt, as we would have during the handshake.\nI don't see that as a risk here. min_rtt doesn't consider any discretionary delays.\nNAME That's exactly the problem. The re-collected min_rtt will be inflated by the peer's ack delay, unless and until the peer sends an immediate acknowledgement. And until that happens, the RTT estimate will be as well.\nI think that's OK. In practice, immediate ACKs do happen, and even at low rates, this should resolve to a "true" minimum. And if it doesn't, then you have worse performance than you might have had otherwise. You have to have a pretty poor network (or a terrible ACK generation regime) for min_rtt not to eventually resolve to a useful minimum. Explaining this risk is fine, but the endpoint that does this made the choice to throw out min_rtt and that's just a consequence of that.\nOn one hand I agree that we can expect to see immediate ACKs, especially when HTTP/3 is used. At the same time, I am not sure if resetting min_rtt when observing persistent congestion is better than other cases where we might reset min_rtt, but that we do not currently mention. Change of IP TTL could be one of the examples in this regard (IIRC NAME pointed out that we do not use that value for detecting path changes). To summarize, I think talking about resetting min_rtt might be fine, though I am not sure if that discussion should be tied to persistent congestion.\nTo give a specific example, one problem that I know with the current definition of min_rtt is that it cannot detect and correct the RTT estimate of a congested path. The sole purpose of min_rtt is to discard excessive ack delays being reported by the receiver. This works by capping the lower bound of adjusted_rtt to min_rtt. The problem here is that min_rtt is a value that is typically collected when the path is almost idle. Once the path becomes congested, the true RTT becomes much greater than min_rtt. For instance, if RTT-on-idle is 10ms and if RTT-on-congestion is 50ms, min_rtt would be 10ms, and therefore cannot be used to correct RTT estimates when the peer reports an excessive ack_delay of, say, 25ms. The RTT estimate would become 25ms in such a case. I tend to think that this might be a bigger problem than the original issue, and therefore that we should state that an endpoint can periodically re-collect min_rtt estimates, if we are to make a change to the current definition of min_rtt.\nBased on agreement on the resolution in , which introduces normative language, let's mark this design.\nSeems fine. I'm not 100% on the strength of the recommendation to reset after persistent congestion, but it's sensible advice.\nThe resetting of min RTT is also specified with LEDBAT or BBR, which differ from each other and from this proposed spec. This is a complex issue, because resetting min RTT in LEDBAT "because of congestion" is a variation of the latecomer advantage. But as long as the text applies only to RENO as specified here, then I am fine with it.\nI like the cautionary text. Just one suggestion but LGTM regardless.
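A sketch of the mechanism under discussion, assuming the recovery draft's general shape for RTT estimation (variable names follow the draft; the reset hook at the end is the proposal, not settled text):

```python
# Illustrative RTT estimation with a min_rtt floor on the ack-delay
# adjustment, plus the proposed reset on persistent congestion.
class RttEstimator:
    def __init__(self):
        self.min_rtt = None

    def on_rtt_sample(self, latest_rtt, ack_delay):
        if self.min_rtt is None:
            self.min_rtt = latest_rtt       # first sample erases history
        self.min_rtt = min(self.min_rtt, latest_rtt)
        adjusted = latest_rtt - ack_delay
        if adjusted < self.min_rtt:
            adjusted = latest_rtt           # distrust excessive ack_delay
        return adjusted

    def on_persistent_congestion(self):
        self.min_rtt = None                 # proposed: re-collect min_rtt
```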
Thank you for working on this."} {"_id":"q-en-quicwg-base-drafts-4ccd572f6df056c52b21b9e862c59b973a35fadabc37b9ee0d6667022fb9ab59","text":"This is an error that can be detected in frame parsing without additional context.\nOver on URL we had a discussion about whether a MAX_STREAMS frame with a value exceeding 2^60 generates a FRAME_ENCODING or a STREAM_LIMIT error. Draft 29 Section 19.11 describes the value as a count of streams of the corresponding type that can be opened over the lifetime of the connection, and says: "This value cannot exceed 2^60, as it is not possible to encode stream IDs larger than 2^62-1. Receipt of a frame that permits opening of a stream larger than this limit MUST be treated as a FRAME_ENCODING_ERROR." But as NAME points out, Section 4.5 says that if a max_streams transport parameter or MAX_STREAMS frame is received with a value greater than 2^60, "this would allow a maximum stream ID that cannot be expressed as a variable-length integer; see Section 16. If either is received, the connection MUST be closed immediately with a connection error of type STREAM_LIMIT_ERROR". Meanwhile, for the STREAMS_BLOCKED frame in Section 19.15 we have similar text.
The name is indeed confusing, as it's FRAMEENCODINGERROR and not FRAMINGERROR or ENCODINGERROR.\nI always rationalized this as \"the encoded content of the CRYPTO frame is erroneous\".\nWell, if we've been through this before, I won't insist -- it's just something that jumped out at me.\nThat's even more confusing. The CRYPTO frame itself is totally fine. I'd be in favor of changing it to a TRANSPORTPARAMETERERROR."} {"_id":"q-en-quicwg-base-drafts-05e47be171b7536e987cc05083b4ae42340aa982c87e24ad951d6eedcdc2177f","text":"In the HTTP/3 draft 30, section 4.4 on Server Push says and then adds in 10.4 (URL) I don't think normative requirements belong in security considerations except in reference to that requirement elsewhere. Also, requirements in general should clearly state who is responsible for implementing them, whereas the latter requirement implies a client (maybe). And sending CANCEL_PUSH is kind of like the MUST NOT, but not really the same thing; it seems a bit weak in response given that the server just tried to poison the client. I think the server MUST is an interop requirement because the client MUST NOT is a critical security requirement. I think both should be described in Server Push and merely noted in the Security Considerations, since they are really important. Way more important than the spec reads now. My guess is that this is editorial, but is the kind of thing that should be fixed before IESG last call.\nThe reason it's only a SHOULD CANCEL_PUSH is that the server can't know with certainty for which origins the client considers it authoritative. The server only knows for which origins it believes that it's authoritative. So having received a push the client considers out-of-bounds, the client can't know for sure whether it's a poisoning attempt or simply a difference in view. Yes, I think this is editorial, since the core request is to relocate existing text without changing the normative requirements."} {"_id":"q-en-quicwg-base-drafts-1fd6faf975b50084359f2d7e4577aaea36b8ec7f82df29309e92146221e794f6","text":"Proposal to clarify \"stream fragmentation\". Suggested by Issue .\nI don’t understand what section 21.6 is about… the title seems to suggests it is about IP Fragmentation and Reassembly Attacks, but elsewhere the spec already prohibits IP Fragmentation in section 14? And I have seen no encouragement to do any form of network-layer fragmentation - so I don’t understand a security consideration in this topic. Is this therefiore some other thing? It may about something different where a sender originates data with “holes” of missing packet to try and exercise the server? or soemthing? What is this?\nHi NAME 21.6 is titled \"Stream Fragmentation and Reassembly Attacks\". It refers to streams, not IP packets. The attack involves sending stream frames with holes such as and some receivers might allocate the entire stream receive buffer between 0 and 10000001. I found the section pretty clear, but we should make it clear to everyone - could you perhaps suggest text that would make it clearer?\nOK. I see. I can try a PR, if I can find words that avoid \"fragment\": Is this close enough to start: /An adversarial sender might intentionally omit to send portions of the stream data causing the receiver to commit resources for the omitted data, this could cause a disproportionate receive buffer memory commitment and/or creation of a large and inefficient data structure.\nDo we really need to avoid the term \"fragment\"? It's not the only overloaded networking term in the spec. 
Perhaps adding a \"Note that stream fragmentation is unrelated to IP fragmentation\" could suffice?\nI don't think this clarification is necessary. The context makes it clear what fragmentation this is, and talking about IP fragmentation only leads to confuse the reader. (I like the first part of your PR though, as I've noted on my review of it.)"} {"_id":"q-en-quicwg-base-drafts-6540593ae4677aef7585de6160765d9d3164b21a631d52b8ef09d13f9ef6c96e","text":"N was introduced in to describe how a sender might pace at a rate that allows it to send a congestion window's worth of data in a period that is shorter than an RTT. This PR clarifies text that goes into unnecessary detail, trying to describe assumptions (the network is spreading the packets out, for example, resulting in an ack clock).\nCan we make progress in this one?\nYes, I'm waiting for the design issues to get done before moving on the remaining editorial ones.\nI think that has happened now :-)\nNudge"} {"_id":"q-en-quicwg-base-drafts-ff9da4c64eeb20fcd23ba8879c3eab6a778713074e7ce6abc23518f241ff88d8","text":"Also removes odd MAY when suggesting that a sender can use TCP's pipeACK method. Addresses .\nIn Recovery-31 there are several references that I judge as normative and not informative ones. Section 7.8: \"A sender MAY use the pipeACK method described in Section 4.3 of [RFC7661] to determine if the congestion window is sufficiently utilized.\" A normative reference necessary to follow if one selects to use the optional method.Sectiom 7.3.2: \"Implementations MAY reduce the congestion window immediately upon entering a recovery period or use other mechanisms, such as Proportional Rate Reduction ([PRR]), to reduce the congestion window more gradually.\" Another case of a suggested mechanism, which if one is to follow it requires one to read that PPR reference. Secondly, why the alternative reference style.Section 8.3: \"Markings can be treated as equivalent to loss ([RFC3168]), but other responses can be specified, such as ([RFC8511]) or ([RFC8311]).\" So RFC3168 is the normative specification for the treat it equal to loss. RFC8511 is the TCP alternative back-off an experimental specification for the alternative. And RFC8311 is the process for performing other experiments with alternative responses. I judge all necessary depending on what one intendeds to do. So to my understanding this results in two downrefs to experimental specifications. I think that is acceptable in the context they are used in this document. But to be tried in IETF last call. However, I think I don't want to generally recommend them to be acceptable as downrefs so I have no intentions to promote them for the Downref registry.\nMoving discussion from the PR to the issue. NAME : I'm not the expert on what should be normative, but these were intentionally marked as informative references since they're all TCP specific RFCs. The pipeACK reference in particular seems like an odd one. If this is what the chairs want, then I'm ok with it as an editor, but these choices were intentional. NAME : I see what NAME is saying, but I also see some wiggle room in RFC 3967, Section 1.1: When a reference to an example is made, such a reference need not be normative. For example, text such as \"an algorithm such as the one specified in [RFCxxxx] would be acceptable\" indicates an informative reference, since that cited algorithm is just one of several possible algorithms that could be used. 
I am also somewhat concerned that trying to get the downrefs into the registry is more likely to run into trouble (Sec 3) This procedure should not be used if the proper step is to move the document to which the reference is being made into the appropriate category. It is not intended as an easy way out of normal process. Rather, the procedure is intended for dealing with specific cases where putting particular documents into the required category is problematic and unlikely ever to happen.\nThanks for digging up the relevant text, NAME Based on that text from RFC 3967, it doesn't seem like we need to make these references normative. I agree that the pipeACK reference is odd, but I think that's probably because of the use of MAY there. I'm happy to replace the MAY with a can -- it is meant to present an option, not a permission -- and make that reference informative as well. NAME : thoughts?\nI think at last RFC3168 should be referenced normatively somewhere (probably section 7.1 though) because you need to know what the ECN bits are in order to do anything with it. I guess because of section 7.1. RFC8311 should also be a normative reference.\nNAME RFC3168 is already a normative reference in the transport doc, which in my view is appropriate.\nWas wondering about this as well given the transport doc is of course a normative reference of this doc. However, as this doc uses terms like CE also directly itself I think the right practice is to also have it as a normative reference here, and as you have to read it no matter what, it definitely shouldn't hurt.\ntalks about ECN-CE markings, but really all of Section 7 talks about a response to an increase in ECN-CE count by the peer, which is defined in the transport doc. This does point to a small editorial issue. The text in Section 4.1 says: This really should be:\nGood point on the editorial issue.\nWith that change, the only thing we're left with is . This does require an understanding of ECN, and it has RFC 2119 keywords, but it's really an optional mechanism that is suggested. We could remove the RFC 2119 words there, since they're really unnecessary.\nMy point is that RFC3168 should be normative because not matter what you need to understand what an ECN-CE mark is and that is defined/specified in RFC3168.\nMirja has a point, see URL\n-transport is a normative reference. So 3168 is already (transitively) a normative reference. The question is whether this specific text requires anything from 3168 to understand. I don't see that it does, especially given the suggested change, which talks about a concept defined in -transport, not 3168.\nhas made these editorial changes; NAME can you see if anything else needs to be done here?\nI will note that it was never my intention to let any of theses Experimental or informational TCP specs to be added into the downref registry by this last call. So, the pipeack reference needs to be reformulated to not be a normative one. As currently stated it is a clear normative reference per IESG Statement ([URL]) , and especially \"Note 1: Even references that are relevant only for optional features must be classified as normative if they meet the above conditions for normative references. \" You need to find a formulation that normative describes what the implementation MAY do, and then you can exemplify with the TCP version. Then one in Section 7.3.2 I can accept as still being informative as the MAY statement is related to the primary aspect of the sentence and the reference to PPR is exemplifying. 
Still this is very much on the border and may still generate a comment in the IESG. For RFC 3168 I don't understand why not make it a normative one. There are no issues with its status, as it is a Proposed Standard. And simply, the security considerations section still implies that you need to know what an ECN-CE mark is to interpret this. So please make this one normative.\nI'm ok with RFC 3168 being normative, particularly since it's already normative in transport, but would like to avoid making the others normative.\nNAME if pipeACK is not normative then a reformulation of that sentence in Section 7.8 is necessary, to not state it as a mechanism that MAY be implemented.\nThe sentence currently has a can, not a MAY: \"A sender can use the pipeACK method described in Section 4.3 of {{?RFC7661}} to determine if the congestion window is sufficiently utilized.\" But I sent out a PR to make it clearer that pipeACK is an example, PTAL and see if that helps.\nNAME what the PR proposes (removing that first sentence) does resolve the pipeACK point for me. So merging , and moving RFC3168 to normative, will resolve this issue to my understanding.\nThanks! I'll update the PR to include moving RFC3168 to normative.\nNAME NAME : I believe this is purely editorial, but I'd like to confirm.\nWe've meandered to a route that AFAICT makes reference types accurate to their existing intent, or avoids them becoming normative. I'm happy for this to be editorial.\nI think that this is the most appropriate response."} {"_id":"q-en-quicwg-base-drafts-937a4373e3515e77274be161709fa1a237ec16fd26056cf39780724018a8c5af","text":"Addresses . NAME points out in that we use in the transport draft and in the recovery draft. In reading carefully, I realized that in the transport draft was actually historical, and is the more appropriate term. This is an entirely editorial PR, and it's largely simple changes, despite the fact that it looks a bit big. It is a bit subtle though. Sorry about that.\nThis issue will list a number of minor editorial things that should be checked and likely addressed with some editorial fixes: Section 5.3: \"These delays are computed using the ACK Delay field of the ACK frame as described in Section 19.3 of [QUIC-TRANSPORT].\" This text implies that the computation of the delay is described in Section 19.3 of QUIC-TRANSPORT. However, I don't think it describes the calculation, only the definition of the field. Please reformulate to imply only the field definition, not the calculation. Section 5.3: \"On the first RTT sample for the network path, that sample is used as rtt_sample. This ensures that the first measurement erases the history of any persisted or default values.\" I think this is intended to cover cases where path changes are detected. But it is not well expressed. Does this need a reformulation and a reference into QUIC-TRANSPORT where it states when such a reset should occur? Section 6.1: \"Either its packet number is kPacketThreshold smaller than an acknowledged packet\" Nitpicking, assuming no PN gaps in transmission. Does that need a note? Section 6.2.4: \"An endpoint MAY send up to two full-sized datagrams containing ack-eliciting packets, to avoid an expensive consecutive PTO expiration due to a single lost datagram or transmit data from multiple packet number spaces.\" This last part, I assume, is intended to generate additional datagrams to prevent a single loss from having an impact. However, due to coalescing this is unclear in this context.
Section 7: \"Endpoints can unilaterally choose a different algorithm to use, such as Cubic ([RFC8312]).\" I think this sentence should be explicit that in needs to be sender-based algorithms. I think we need to have the requirement on performing congestion control on the sender unless there has been explicit agreement with the receiver that they will do it. Section 7.2: \"Endpoints SHOULD use an initial congestion window of 10 times the maximum datagram size (maxdatagramsize), limited to the larger of 14720 bytes or twice the maximum datagram size.\" \"maximum datagram size\" is not used in Transport there need to be alignment here of what name to use. Transport has Maximum Packet Size, which appears to match the definition in the appendix of maxdatagram_size.\nThanks for your careful review, NAME -- these are good finds! I've addressed your comments below and in . I'll address point 6 in a separate PR. Section 19.3 has the following text, which does tell how to compute acknowledgement delay from the ACK Delay field: I've replaced \"computed\" with \"decoded\" in the recovery document to make it clearer. Coalescing applies to packets, not to datagrams. We've explicitly stated that a sender could send 2 datagrams to avoid any effects of coalescing. I think the text is pretty clear on this: \"An endpoint MAY send up to two full-sized datagrams containing ack-eliciting packets\". I've added \"sender-side\" to make the controller more explicit. To your point about making this a sender requirement, that is covered in the transport doc (end of ):\nBased on and , labeling this as \"editorial\"\nI think these issues are resolved once the two PRs are completely ready.\nExcept for the one erronous exchange of packet this looks good."} {"_id":"q-en-quicwg-base-drafts-2ac9508f8cbf28edc7ef3ba3854add1339158101f93b34390ef7283138fb400c","text":"Does this win the record for least amount of bytes touched in a PR?\nLars, I can't find it easily, but I believe that I pushed a 1 bit change at one point in this process.\nParameter Value field. Presumably the length in bytes."} {"_id":"q-en-quicwg-base-drafts-c8aed42aa1cee5b120e04eea2694e643074a44651d0cecba348e66fa9940cb23","text":"The description neglected to mention this. Obviously, the confidentiality protection (specifically, the lack thereof) is the part that needs to be highlighted, but we shouldn't neglect the other purpose of using an AEAD here."} {"_id":"q-en-quicwg-base-drafts-e234c15427075c7e5e2ea666be43ceb47d59b62e26f59c825f5a17734751eafa","text":"if we want to do that. There was a hole in the error code space, so I filled it with this. I haven't pulled Push out to its own error; it doesn't seem like it belongs with this one.\nWe might want to add a mention of this in A.4 where we talk about PROTOCOL_ERROR\n(I couldn't find anything on this, but I'm prepared to be corrected. Also, it's late, so if this is closed, I'm totally OK with that.) We're just in the process of doing full validation of pseudo-header fields (I know, seems late, what are you gonna do about that). And we realized that there is no error code for indicating this sort of error. That is, if pseudo-fields are out of order, or a request contains , or there are uppercase characters. contains lots of MUSTs, but it seems to rely entirely on the definition of \"malformed\" which leads to H3GENERALPROTOCOL_ERROR. This is fine, but not especially helpful in isolating problems. Could we define an error code for this specific case?\nH2 just says malformed is treated as PROTOCOL_ERROR. 
So H3 is providing parity. There might be an argument for more fidelity, but I'm not convinced it would help much. It could require more contortions in the text - do you have a different error for different types of malformed?\nInternally, we use a different error code. Of course, all internal error codes map to the same code for sending in RESET_STREAM/CONNECTION_CLOSE.\nI'm inclined not to do much, if anything, here. They all trace back to \"malformed,\" as you note, which leads to GENERAL_PROTOCOL_ERROR. The instances in which we use GPE are: malformed HTTP messages; a push whose request headers change across multiple promises; and as a substitute for more specific codes. I wouldn't be opposed to splitting the first two off into a new / two new codes, because that would be consistent with our principle of indicating the general area where the failure occurred. I would push back on having separate error codes for specific ways the message could be malformed, because that's a pretty expansive rabbit hole.\nOK, I'm satisfied with that response. I think that we'll just have to signal extra information in the text field. Happy to close this.\nNAME do you think an error code for \"malformed message\" would be useful, if we don't go into the various types of malformed?\nThat might be useful, yes.\nThis is a design change, albeit a marginal one.\nNAME let's merge the associated PR but keep this issue open and marked as \"call-issued\"\nKeeping open, as requested.\nClosing this now that the IESG review of draft 33 has concluded.\nIn fairness, I question whether the IESG review of the HTTP drafts has concluded. But as this is not an IESG comment, I'm content to close it. People have had time to object.\nYes, I agree Mike. This was due to me being over-eager with the close button. The issue may not have strictly been raised in the period that the IESG were directed to review the doc, and they have yet to ballot on draft 33. But either way there was no push back from the WG on this change since draft 33 was published."} {"_id":"q-en-quicwg-base-drafts-11df47ad2c21a9ba3fb74794acf5361660155c081cd75e1d50af7410992c296f","text":"Because it always is.\nI'm checking \"MUST treat\" in the transport draft. Sec 17.2.4 says: This is the only place where the type of a connection error is NOT specified. This part should be modified to \"a connection error of XXX\".\nThis was deliberate, but you can now send CONNECTION_CLOSE in this case, so it makes sense to use a code here."} {"_id":"q-en-quicwg-base-drafts-0f3a71de3488ebccc67b66178cd5535935f57b2273471dc5b8e6b40a3a162f02","text":"Mostly because QUIC doesn't prevent all forms of request forgery. Thankfully, this is dealt with in detail in the transport draft.\nNAME this looks good. Can you also change one sentence in Sec 1.2, from: to .\nHilarie Orman wrote: all this is assumed to be provided by QUIC. Section 10.2 says that all QUIC packets are encrypted; I'm not sure if that's true, or if QUIC has an option for \"non-modifiable\" without encryption.\nActually, we now know that this is not true, so this really needs to be fixed: -- from . The real solution here is to cite of the transport draft.\nThe version of QUIC required by this draft is always encrypted; that seems sufficient for the original issue. NAME issue is distinct, and worth fixing. I'll use this issue to track it.\n... and we could add some text stating the integrity point in Section 1.2.
NAME suggests: \"comparable integrity and confidentiality to TLS over TCP\"\nCould we make some progress on this one?\nProgress made; the PR now needs review."} {"_id":"q-en-quicwg-base-drafts-879d9b7ee7c9e7ca0b64974745784b759ed9ea0be8f49604c4fbc29d0a2a673f","text":"This was definitely odd. Let's take another tilt at making it comprehensible.\nErik Kline said:\nYep, when it's called out, it's distinctly odd. Hopefully it's an easy fix, but writing is hardd."} {"_id":"q-en-quicwg-base-drafts-ccec8d29d156ab7b60ccaf21ccc04fcd86d8f73bc345da0f869abdb3dd856c19","text":"This is my best stab at it.\nErik Kline said:\nI thought that section reference was wrong, but Section 7 does indeed mention ECN twice. Once is almost too much for this section. I checked the history and the commits that affected this text likely are the result of a merge, and one that either git resolved incorrectly for us, or one that we messed up. Either way, this is worth fixing.\none nit"} {"_id":"q-en-quicwg-base-drafts-6042864a4839c6073412ce88d2fa4c102fcb81e75e00ab2f42958802ef0eed2a","text":"NAME said:\nI think \"can\" works equally well, given the precondition. I'll make that change."} {"_id":"q-en-quicwg-base-drafts-6f0e7da6bde8e04727cc4dcd984e72bfe28d64d5122e842cdbec7852054a3c3c","text":"NAME said:\nThis particular choice has not a lot to do with QUIC and would therefore bring in a lot of non-sequitur references, but a single sentence won't hurt. It has to be repeated, which is gross, but I can live with that."} {"_id":"q-en-quicwg-base-drafts-47f5105f5df8e9d356e0868f8034c0c57f2a8cb95deae3e3d006ca449268a8d3","text":"The three points are wordy enough it might be worth making this a bulleted list, however. Thoughts?\nNAME said:\nWith NAME fixes, this is good."} {"_id":"q-en-quicwg-base-drafts-950a2377f473ac8551fbeb0825ed7352d3cc562f7b46d4264a57d577e203fe04","text":"This was just old and needed a little bit of a refresh. Removing the recommendation, which was counter to established views and unnecessary. I will let others determine whether this is editorial/design.\nNAME said:\nI think that the point of this section is to highlight the possibility that routing based on client-selected values exposes servers to load that is balanced under client - and therefore attacker - control. I don't think that it needs to include such a strong recommendation. If people deploying load balancers are not concerned about this, then that is good. That makes the recommendation less good. As this is very old text, it needs a bit of a cleanup anyway. I'm going to suggest something along the lines of my statement above in a pull request and we can polish the wording there. I don't think that we should remove the Retry SCID language. That remains an option for servers, even if they don't want to exercise the option (whether load balancers look at Retry tokens is very much down to how the various functions are distributed). What could happen is that the Retry SCID is chosen so that the next initial can be routed as though it were chosen by a client, but in such a way as it results in the connection being sent to a lightly-loaded server instance. That is, if the load balancer routes inchoate Initials based on connection ID, knowledge of that routing algorithm is used to direct traffic. 
That said, I understand that what is more likely here is that Initials with inauthentic Destination Connection ID fields will be routed using other information: round-robin, random, five-tuple, or whatever.\nI think you're proposing that the text I quoted be deleted. That is satisfactory, and what I would recommend as well.\nReturning to an open state as the chairs directed in their plan.\nClosing this now that the IESG have approved the document(s).\nTo be safe, let's mark this design."} {"_id":"q-en-quicwg-base-drafts-4deb661a1607aedfbe35ea3ad3f5252484928306a6caed434ec9bb386fb6b2e3","text":"This should be fixed. I've sent out .\nThe natural question that follows will be \"which period of time\", but I think that is already addressed. Thanks, LG"} {"_id":"q-en-quicwg-base-drafts-bb216150e7b688c2675247295d5cd2639066db8625a3779c082cc63606dd804f","text":"This was a little tricky as the current text already had what we wanted. But it wasn't quite clear on rationale, so I added some of that.\nÉric Vyncke said:\nI believe that the view was that a PRNG was inferior to a PRF because it required state, but yes, it should be fine. Should we have avoided repeating advice from 6437? (I forget who contributed this text, but I think that it was an RFC 6437 author.)\nText originates with Christian NAME in . A few people have touched it since for editorial purposes, but that doesn't change the point of what you're asking about."} {"_id":"q-en-quicwg-base-drafts-3b2f391ea494fdcfed3c714b00a845bf3d8f6e8dade9b7e12e210eb81c9e2d2d","text":"Éric Vyncke said:\nI think the ambiguity is whether the \"limited to\" is saying: (a) the maximum datagram size is no more than twice the maximum datagram size (which is nonsensical), or (b) the initial congestion window is min(10 * max_datagram_size, max(14720 bytes, 2 * max_datagram_size)). Given that one of the possible interpretations is logically inconsistent, the other one must be correct. I'm not sure how to make this sentence clearer except through pseudo-code. Looking at the pseudo-code, there isn't any -- kInitialWindow is declared to be a constant, with its value defined by this text, but the definition of this constant is given in terms of a variable (max_datagram_size). Maybe kInitialWindow should become a variable itself, and then have pseudo-code with the max/min relationship added to B.3?\nThis is clearer; thanks."} {"_id":"q-en-quicwg-base-drafts-804e5899d677f62fbff964740e5b08a712678a6431f09751ec52c617b85a2060","text":"Section 9.3 of Transport (URL) says: \"If the recipient permits the migration, it MUST send subsequent packets to the new peer address and MUST initiate path validation (Section 8.2) to verify the peer's ownership of the address if validation is not already underway.\" In Section 9.5 (URL): \"Similarly, an endpoint MUST NOT reuse a connection ID when sending to more than one destination address. Due to network changes outside the control of its peer, an endpoint might receive packets from a new source address with the same destination connection ID, in which case it MAY continue to use the current connection ID with the new remote address while still sending from the same local address.\" So as long as it's an unintentional change, everything is clear. But if the destination address changes and the incoming CID changes, but the sender (i.e., the server) doesn't have any more CIDs, it's not clear what should be done, as the two MUSTs seem to contradict one another.
Or maybe they imply the server can't send anything?\nLater in 9.5, there's also this text that is relevant, but I don't think it clarifies how to resolve the MUSTs. \"An endpoint that exhausts available connection IDs cannot probe new paths or initiate migration, nor can it respond to probes or attempts by its peer to migrate. To ensure that migration is possible and packets sent on different paths cannot be correlated, endpoints SHOULD provide new connection IDs before peers migrate; see Section 5.1.1. If a peer might have exhausted available connection IDs, a migrating endpoint could include a NEW_CONNECTION_ID frame in all packets sent on a new network path.\"\nSo the second quotation has that clarification (with emphasis): Isn't this all coherent?\nYes, that clarifies the issue when the CID does not change. But that does not clarify the case when the CID does change and the responder has no more CIDs.\nIsn't that addressed by this: ?\nSo the answer is: 1) It can't send anything 2) It can send on the old path, but not the new one\nIt can probe the old path - but that is all it can do.\nWhy would it probe the old path that it's already validated?\nI didn't say that it was useful, just possible :)\nI don't see a pressing reason to do anything about this. That said, it's an unfortunate edge case; perhaps we could call it out explicitly, mentioning that this shouldn't arise if the other parts of the migration machinery are followed?\nTo be clear, I'm not looking for a fix as much as clarity on what is supposed to occur. I'll write a PR that I think will clarify it some and call out the edge case.\nI'd argue this is really a \"MUST unless\" case, but people tend not to like those, so I tried to stick to the facts in my PR.\nWFM"} {"_id":"q-en-quicwg-base-drafts-0731ba321efd8bd48d4e62d21fec228a95ad69fe232d2f3a830cff8ed24a5302","text":"to the extent I'm willing to do so.\nSec 4.1 and 4.1.2 do not explicitly define , , as stream errors. A.4.1 says that is a stream error. It would be nice if they were explicitly defined as stream errors in Sec 4.1 and 4.1.2. Also, it would be nice if each error code were categorized as either a connection error or a stream error in Table 4.\nCommenting as an individual. The error code space is shared between connections and streams. The action associated with a stream error or connection error causes QUIC transport frames to be emitted, which can use a code. H3 inherits H2's design philosophy that endpoints can generate a connection error at will. So while there are specific cases that describe \"treat X as stream error foo\", endpoints are also allowed to \"treat X as connection error foo\". Constraining the error codes to different uses has a certain appeal. But then what do we say the receiver of an error code of the wrong type should do? Generate its own error? I think this has come up a couple of times during the H3 drafts, but I'm satisfied that the flexibility we have today is about the best we can do.\nI also think \"stream error\" and \"abort a stream\" are unique and consistent terms used throughout the document. But I'll let Mike speak to that point with his editor hat.\nEven though categorization is not reasonable, Sec 4.1, for instance, should say something like \"the server SHOULD abort its response and treat it as a stream error of type H3_REQUEST_INCOMPLETE\".\n\"Abort\" is a defined term in section 2.2.
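(As an illustration of that defined term -- a hypothetical sketch only, not an API from any real implementation -- "abort" maps onto the QUIC stream operations carrying the HTTP/3 error code:)

```python
H3_REQUEST_INCOMPLETE = 0x10D  # code point as eventually registered

def abort_request_stream(stream, code=H3_REQUEST_INCOMPLETE):
    """Sketch of 'abort' for an HTTP/3 request stream: a stream error
    surfaces as QUIC RESET_STREAM (send side) and STOP_SENDING
    (receive side), both carrying the HTTP/3 error code. The 'stream'
    object and its methods are illustrative, not a library API."""
    stream.reset(error_code=code)         # abruptly terminate sending
    stream.stop_sending(error_code=code)  # ask the peer to stop sending
```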
I personally don't see much clarity in the proposal; it threatens to add inconsistency to other places where streams are aborted due to application decisions rather than a protocol error.\nMy example proposal is to replace \"the server SHOULD abort its response with the error code H3_REQUEST_INCOMPLETE\" with \"the server SHOULD abort its response and treat it as a stream error of type H3_REQUEST_INCOMPLETE\" in Sec 4.1. I don't understand why this would introduce inconsistency.\nNAME For that particular instance, I think it is not unreasonable to change the text from \"SHOULD abort its response with the error code ...\" to \"SHOULD abort its response stream with the error code ...\", as the latter is the pattern that we use. Though I am not sure if such a change is necessary; I tend to agree with NAME that the sentence is already clear; \"abort\" is a defined word and use of \"response\" implies a stream.\nWRT inconsistency, the editor would need to sweep for all instances of abort and reason about whether they need additional text or not. And there's a risk that someone later on picks up on an inconsistency of usage because they didn't appreciate the nuance. Since adding more detail after an instance of seems duplicative, I would favour not adding any more.\nI'm okay with NAME addition of \"stream\" since abort is defined as a stream operation. I don't think we need to separate out \"abort\" and \"stream error\" as separate statements, because abort already describes what needs to be done.\nIf I am worrying too much, no change is necessary. One alternative approach is to add a sentence like \"errors in this section are stream errors\" at the beginning of Sec 4."} {"_id":"q-en-quicwg-base-drafts-e3c560be66a567b44621380df3be701a344b51f781c731dbe504a6585bd2682d","text":"Text and pseudocode have been updated.\nI just updated the pseudocode as well, since it's quite straightforward.\nWhen the PTO is not armed due to the amplification limit and then a packet is received (URL), it's important to send data for the PN spaces the PTO is armed for (Initial or Handshake) prior to new application data. Typically PTO suggests writing new data instead of retransmitting data which may or may not be lost, but that's not true during the handshake. This is only an issue for the server, since it's caused by enforcing the amplification limits prior to address confirmation. This is most evident when 0-RTT is accepted but address validation fails, since otherwise a server would not usually have enough data to write to create a problem.\nI went and double-checked the msquic code, and we always re-enable the loss detection timer when we become \"unblocked\" by amplification protection logic. I thought this logic was already driven by some statement in the spec. I just went back and found this: Do we need any more than this?\nThe potential issue is that if the PTO is armed, but should have already expired, it's important to execute it prior to sending any pending data. I'm working on a PR to make this clearer, and we can then see whether the extra clarification is necessary.\nI wonder whether there is something else going on. Amplification or not, 0-RTT packets can only be acknowledged by ACK frames in 1-RTT packets. Those can only be received after the client gets the 1-RTT keys. So implementations should just start a short timer after the handshake completes.\nThis seems like an editorial change, please confirm.
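(To make the ordering concrete, a minimal sketch in Python -- 'conn' and its helpers are hypothetical names, not the draft's pseudocode: when a received datagram lifts the server's anti-amplification block, a PTO that should already have fired is executed for the handshake packet number spaces before any new application data goes out.)

```python
import time

def on_datagram_received(conn, datagram, now=None):
    """Hedged sketch; the connection object and members are illustrative."""
    now = now or time.monotonic()
    conn.bytes_received += len(datagram)  # grows the 3x amplification budget
    conn.process(datagram)
    # Receiving data may lift the anti-amplification block, so re-arm the
    # loss-detection/PTO timer first.
    conn.set_loss_detection_timer()
    # If the re-armed PTO should already have expired, execute it now, and
    # probe the Initial/Handshake spaces before new application data.
    if conn.pto_deadline is not None and conn.pto_deadline <= now:
        for space in ("initial", "handshake"):
            if conn.has_keys(space) and conn.in_flight(space):
                conn.send_probe(space)  # retransmit, or a PING if nothing queued
                break
    conn.send_pending_application_data()
```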
Either way, let's ensure NAME has an opportunity to comment before landing any changes.\nngtcp2 has dedicated code for this situation: URL I feel this is a kind of optimization or implementation technique.\nIn response to NAME If the WG establishes a consensus on the need for a particular fix, then this PR can be applied in AUTH48 with AD approval. I think we need to figure out if not merging the PR until AUTH48 is best. It makes it simple to provide the changes to the RFC Editor, but maybe makes it harder for the rest of the community to keep track of the changes if there is no editors' copy that contains everything that has been agreed. However, considering the hopefully limited time, that may make sense.\nWe can tag the versions that entered the RFC Editor queue, and then generate a diff against those at AUTH48.\nThat tag already exists, so we are good.\nAt the moment, states, quote: \"Clients MUST ensure that UDP datagrams containing only Initial packets are sized to at least 1200 bytes, adding padding to packets in the datagram as necessary.\" I think this is incorrect and that \"only\" should be dropped, as it excludes a UDP datagram that contains an Initial and a 0-RTT packet. Pointed out by NAME in .\nTechnically a design change. I think that it should be any packets that contain Initial and don't contain Handshake or 1-RTT packets. At least, that's what I implemented.\nI'll note that I think there are other requirements formulated in other sections. Perhaps define a term such as a minimal Initial datagram size and have the requirement in one place, rather than sprinkling 1200 bytes in multiple places where it can be difficult to get the finer points (such as UDP vs QUIC packets and with or without header size). For example, Section 14 in transport:\nNAME What you implemented should be fine. However, Initial + 0-RTT coalesced in a datagram should be padded, I believe, and I don't think we're clear on that today.\nIt might be simpler to require every UDP packet that contains an Initial be padded to at least 1200? I'm worried that only requiring that of Initial if there is no Handshake/1-RTT present might be tricky on the receiver: what do I do if I receive a non-padded Initial with Handshake but I can't decrypt the Handshake data?\nNAME What we are talking about here is a requirement specific to clients, and I am not sure if a client has a chance of sending Initial and Handshake packets at the same time. Quoting of the TLS draft, \"a client MUST discard Initial keys when it first sends a Handshake packet.\" And even if there is a possibility of a client sending Initial and Handshake at the same moment, I do not think requiring the client to pad a packet that contains only an Initial is correct, as coalescing is an optional behavior for the send side. It would mean that clients coalescing the QUIC packets would send a non-padded datagram, while clients not coalescing would send a padded datagram. That seems like an odd behavior.\nAnd yet another issue gets sucked into the wormhole that is the key discard mess. I guess that we will have to make a final determination based on the outcome of that stuff.\nOn consideration, I think that the requirement is simple and we don't have to stress about interactions with discarding keys. If the packet contains an Initial, it's possible that the Initial packet is the only packet that can be used by a server. Therefore, the datagram should be padded to 1200, regardless of whatever else is included. So we strike the \"only\" and we're good.
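(For illustration, a sketch of the "if Initial, pad" rule -- hypothetical inputs, not any implementation's API:)

```python
MIN_INITIAL_DATAGRAM = 1200  # bytes of UDP payload, per the transport draft

def padding_needed(packets):
    """Sketch with hypothetical inputs: 'packets' is a list of
    (packet_type, encoded_length) tuples for one UDP datagram.
    Returns how many extra bytes must be added; in practice the
    expansion would be realized as PADDING frames inside one of the
    packets before protection."""
    total = sum(length for _, length in packets)
    carries_initial = any(ptype == "initial" for ptype, _ in packets)
    if carries_initial and total < MIN_INITIAL_DATAGRAM:
        # Pad whenever an Initial is present -- even if 0-RTT or other
        # packets are coalesced into the same datagram.
        return MIN_INITIAL_DATAGRAM - total
    return 0
```

For example, padding_needed([("initial", 300), ("0rtt", 500)]) returns 400, while a datagram carrying no Initial needs nothing.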
See 6cfcbe26385f17072ccd330aea619f65bf3bdcb4.\nI wonder - if 0-RTT does not have huge certificates, would it not make sense to permit small packets in this case, provided the auth token validates? But in other cases 1200 bytes must be provided. But since 0-RTT might fail, this could then require a retransmission. Resource-constrained devices might prefer the latter.\nNAME Regardless of 0-RTT, for bandwidth-constrained deployments, it makes sense for a client to always send a small packet and let the server validate the path. That said, the chartered goal of V1 is to reduce the connection establishment latency, and we have explicitly decided to require clients to send full-sized packets until the path is validated. I do not think we should revisit that design. Maybe we can in v2.\nI do not believe that this is necessarily correct. The reason is that we have not yet nailed down Initial key discard, and in at least one version an Initial ACK would appear in the client's second flight, but that need not be padded.\nI think the specification should mandate padding of server Initial packets as well. When I was traveling a couple of weeks ago, I was on a network where, when using IPv6, it allowed 1200-byte UDP packets through from client to server but blackholed them from server to client. Since our implementation pads Initials in both directions, we were able to detect this failure during the handshake and fall back to TCP/TLS. If the server didn't pad its Initials, we would have succeeded at the handshake but never gotten an HTTP response.\nThis definitely seems like a design issue. The purpose of the padding rule is solely to limit amplification, not to establish a minimum MTU. If you want some sort of MTU rule, that should be discussed separately.\nThis issue is marked as design. Are you suggesting I should open a separate one?\nNAME I believe it is dual purpose: From the current transport draft: >The payload of a UDP datagram carrying the first Initial packet MUST be expanded to at least 1200 bytes, by adding PADDING frames to the Initial packet and/or by coalescing the Initial packet (see Section 12.2). Sending a UDP datagram of this size ensures that the network path supports a reasonable Maximum Transmission Unit (MTU), and helps reduce the amplitude of amplification attacks caused by server responses toward an unverified client address; see Section 8.\nYeah, that's dicta as far as I am concerned. I don't think we ever had consensus that we were going to enforce MTU here.\nThe insight I had here was this: If you are sending Initial, it is possible that this is the only packet in the datagram that can be processed by your peer. If that is the case, then a client needs to ensure that it pads the datagram as though no other packets were present. You can infer from the fact that you have Handshake keys that the server is able to process Handshake packets, and then extend that to say that if you include Handshake packets then the server won't need padding. But then you can extend that inference to saying that there is no need to send any Initial packets (as the current draft does; currently, you are not required to keep Initial keys once you are sending with Handshake keys). Either way, I figure that it is better to keep things simple and say \"if Initial, pad\". In those cases where padding ends up being excessive, it's only excessive for a short while.\nI don't know where you get this from.
It's total datagram size that matters, not the contents of the rest of the datagram (which is why the neqo padding strategy works). Anyway, the algorithm you propose seems fine, but it's not necessary, and therefore there's no need to specify it in the spec.\nThis was my thought too.\nI'm not seeing a conflict between these statements. If the packet contains an Initial, the total datagram size must be at least 1200 bytes. You can achieve this by padding any packet in the payload; I don't think anyone is advocating for requiring the padding to be in a particular spot.\nI don't understand what \"pads the datagram as though no other packets were present\" is supposed to mean, then. The way neqo does this is that it puts the packets in the datagram and then pads the rest, without any regard to what keys the server might have.\nImprecise wording on my part. \"pads the datagram as though no other packets were present\" might be better as \"as though no other packets can be processed by the server\". Let's keep in mind that the proposed change is just striking \"only\" in \"Clients MUST ensure that UDP datagrams containing Initial packets have UDP payloads of at least 1200 bytes, [...]\". I agree that this will result in larger packets than are ideal in a couple of cases. But the ACK-only Initial packets a client sends are most likely coalesced with other packets outside of the cases where the padding is genuinely required.\nI'm sorry, but I still don't see what benefit this change provides. Perhaps this would be easier next week.\nThe change makes it clear that if you send Initial+0-RTT then you should still pad to 1200. That's all. Previously, you would not have been required to pad, which would have been bad. Most of the debate and discussion is about the knock-on consequences of applying the broader rule, which I don't think are that important. And yes, the whole \"when do you discard Initial keys\" thing does confound things a little.\nDiscussed in Cupertino. The word \"only\" is to be removed from the respective sentence. Moving to Editorial.\nThanks Ian."} {"_id":"q-en-quicwg-base-drafts-33e77dacdb2c1a48f142fc18f4a11a300f202c0f0885983823b425e3d17dd48c","text":"The names are capitalized, but we don't say \"the ... field.\" We probably should."} {"_id":"q-en-quicwg-base-drafts-2e6e030d9db326e16b26f01147564449df6b4d29750658a505e7f0be9a06e92d","text":"I've had to make some changes in the template to deal with some odd regressions in CircleCI (see URL for details). This is the updated configuration.\nYeah, given how this breaks in Circle, that's probably wise.\nNAME NAME do you think that you could stop following this project in CircleCI? Then we can merge this one.\nIs there a reason not to simply move this repo over to GitHub Actions as well?"} {"_id":"q-en-quicwg-base-drafts-226295e3516c58bf1f1f2b6f1695254ba7d37fc374f8eaba3c597ddc102ae280","text":"which also occurs in H3 but apparently wasn't caught.\nThroughout, the document uses \"SETTINGS_\" as the first part of the HTTP/3 setting names. We have updated this sentence for consistency. Please let us know if any corrections are needed. Original: For fomatting reasons, the setting names here are abbreviated by removing the 'SETTING' prefix.
Current: For formatting reasons, the setting names here are abbreviated by removing the 'SETTINGS_' prefix.\nFixed in ."} {"_id":"q-en-quicwg-datagram-53329bf1c1bd1557291c8c8238ab849fef123afcde01682885e44d4ac2b91deb","text":"Defines the transport parameter as a unidirectional configuration.\nThe spec clearly states the following: But it's not completely clear what happens if both sides send different values. Is the value purely a unidirectional configuration? For instance, if the server advertises a value of 500, and the client advertises a value of 100, can the client still send 500-byte datagrams? Or does this essentially negotiate the max value either side can use down to 100? If this is a unidirectional configuration, why require the peer to send the TP at all, if all they want to do is send datagrams, and not receive them? I'm loosely basing my thoughts on what the design could be on how we negotiate the number of streams an endpoint is willing to accept. Following that model, I'd recommend a design where, if an endpoint is willing to receive datagrams, it advertises a it's willing to accept. The TP has absolutely no meaning for the send direction. The protocol on top of QUIC decides how to interpret only a single direction allowing datagrams to be sent.\nI agree with NAME principally due to this clause in the present draft: My read of the current text is that it is unidirectional. An endpoint tells its peer that it is willing to receive a DATAGRAM up to size N by using . Having asymmetric values is fine because the paths can be asymmetric, and other TPs also behave asymmetrically. The draft also says: For an application protocol like siduck, both ends need to support reception of DATAGRAM or else the application will fail. For something like an IoT sensor feed, it might be fine to support a send-only/receive-only model. The current text basically requires the application protocol to mandate that is 0 on the side that is send-only AND to describe what happens if the TP is missing - that seems like it will cause a duplication of effort.\nI agree that we should make this purely unidirectional. I would resolve this by replacing with"} {"_id":"q-en-quicwg-datagram-0c36c919e72bc6caf088d1c5f0fafc3ad6ad39d5bfdceea7be6c521f741d9bd0","text":"The draft has some good text in the \"Acknowledgement Handling\" and \"Congestion Control\" sections that essentially states that DATAGRAM is just like any other ACK-eliciting packet, but is not automatically retransmitted by the transport. That's all reasonable, but I think there needs to be more text on how the (suspected) loss of a packet with a DATAGRAM frame, or a PTO with only an outstanding DATAGRAM packet, should be handled. I can see two possible models: (1) It's just like any other packet. The goal is to elicit some ACK from the peer to get accurate loss information about the outstanding packet as soon as possible. If there is nothing outstanding we could use to send in a new packet, just send a PING frame. (2) It's special. Because we don't necessarily intend to retransmit the data in the packet if it is actually lost, we don't actually care about immediate loss information/feedback. Don't force anything to be sent immediately to elicit the ACK. So far as I have it coded up in MsQuic, I've assumed (1). This essentially results in an immediate PING frame/packet being sent out if I have nothing else to retransmit to try to elicit an ACK for the DATAGRAM frame/packet.
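(Roughly, model (1) in code -- a hedged sketch with hypothetical helper names, not MsQuic's actual logic:)

```python
def on_pto_expired(conn):
    """Sketch of model (1); 'conn' and its helpers are illustrative."""
    if conn.has_retransmittable_data():
        # Prefer probing with data the transport is responsible for.
        conn.send_probe_with_retransmittable_data()
    else:
        # Only DATAGRAM frames are in flight: the transport will not
        # retransmit them, but we still want prompt ACK/loss feedback,
        # so elicit an ACK with a PING.
        conn.send_ping()
    conn.pto_count += 1
    conn.set_loss_detection_timer()
```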
This could result in a slightly noisier connection if the app doesn't care about all loss information about their datagrams, but, IMO, makes for a cleaner design. I don't know what the general consequences to congestion control might be if we don't do this. Assuming folks are in agreement, we should have some text on this topic in the draft.\nNAME NAME as authors of the recovery draft, do you have an opinion here?\nI support NAME choice of (1), and it aligns with the recovery draft. It also matches our implementation. Adding clarifying text around this sounds good, and I'd be happy to review it.\nI agree -- choice (1) is cleanest and I would argue that choice (2) is really not principled.\nLate to the issue, but I think (1) is the right choice here. It's not simply that datagrams aren't retransmitted, but that they might be retransmitted by the application if they're declared lost. If you don't have prompt feedback, that scenario suffers.\nSeems like we have good agreement on this, just need to write the text\nNAME -- To NAME 's comment at the mic, you should cite .\nThis text sounds good, though I'd love a little bit more text mentioning the benefit of this probe.\nThis text is fine with me. I am wondering if we should say something additional, like that packets with datagram frames shouldn't be treated specially with respect to loss recovery."} {"_id":"q-en-ratelimit-headers-40e7d30d182b4b0b29ccd24e08c6a3c176a2aee87ee88865418f1e208b6e52f3","text":"Section 2.2: Section 2.3: This appears to be a circular definition -- what's actually being defined here?\n(noting that 2.2 doesn't actually define quota-units)\nQuota unit is now defined as the measurement unit for service limits.\nPTAL - I tried to rearrange to address this a bit more holistically.\nLooks good!\nLGTM"} {"_id":"q-en-ratelimit-headers-19a3eddecbff179607b62df9f3d4648c4932a9314b6d49312f6a864e46d83c1f","text":"Improves terminology\nIn other parts of the document we still use client as we are not sure whether to refer explicitly to a User Agent or not. Ideas welcome...\nSee RFC 7230 terminology definitions (, not changing in the new specs, FWIW)."} {"_id":"q-en-ratelimit-headers-b26bbcb68200cdebb5ca10e71218380d209aab60c06eb7e1d9b35bdb5ae5adc5","text":"uses http-core semantics, fixes Sections refs cc: NAME\nI'll wait for the new i-d-template image, but will do.\nPlease use either [RFCxxxx], Section yyy or Section yyy of [RFCxxxx]\nAddressed by now.\nIf you start using the new kramdown-rfc2629 section ref syntax, you may want to do that consistently..."} {"_id":"q-en-ratelimit-headers-47fb81b02164bea688e8cf51a48876dec146d0108f03eb1597d04b6970947f93","text":"Improves example description\nhttps://ietf-wg-URL The example with just RateLimit-Limit and RateLimit-Reset: in it, it seems this is a case where the RateLimit-Reset value is not able to be used to determine how long one should wait, but is rather just a Limit definition? Why not then just use RateLimit-Limit: 10;w=1 and omit the other two headers? Is this to cater for the inevitable 429 to provide a RateLimit-Reset value at that point? If that is the case I question the value of using the two headers in “successful” responses.\nThis example is related to an implementation that uses the \"old\" ratelimit field set, which does not support .
We should improve the description.\n:+1:"} {"_id":"q-en-ratelimit-headers-eb0288b560a7bc01739ef1c0e80e6c9afc34e3e3c6f0188355318e1dd4a51405","text":"moves quota-policies from RateLimit-Limit -> RateLimit-Policy\nrequires WG discussion -\nGiven this hint from Mark URL I think it's fine to merge this PR (separate Policy from Limit) since there seems to be agreement to use different fields at least in this case.\nPerhaps best to continue discussion in that issue before merging this? While the change will happen in a similar vein, we might want to wait to flesh out further details.\nThis seems quite arbitrary for a MUST; what is \"too far in the future\" and \"too high\"?\nAgree, this is a tricky point. I think an implementation: MUST define internal thresholds of allowed values; MUST validate ratelimit fields according to those thresholds. I think we could borrow some text from Signature. See\nOK, but RFC2119 language is for interoperability, and this doesn't provide any; different implementations (or deployments) are going to have different values, and that may cause issues. It'd be better to define precise limits if you think this is important enough to merit a MUST. Personally, I'd remove the 2119 language and just give guidelines about what implementations should consider when they're looking for abuse.\nOk, great!\nAddressed in\nThe draft today suggests putting quota policies, a list, into the header. But that's a sort of breaking change, because that header's equivalent is strictly a number (not a list) in most existing implementations. Could be kept as a plain number (like and ), and instead a new header like or some such can contain supplementary information? That way, the story for upgrading to this draft is simpler. Sorry if this is asked and answered.\niirc we considered this, but we initially had some requests to conflate as much as possible into fewer fields. Probably it's a more interoperable choice. I am open to re-considering the idea, provided we get in touch with the current implementers, including 3scale, kong and envoy cc: 3scale: NAME , kong: NAME , envoyproxy: NAME
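(To make the proposed split concrete -- an illustrative sketch only; the field names and syntax are the ones under discussion here, not settled:)

```python
# Hypothetical response headers under the proposed split:
#   RateLimit-Limit:  100
#   RateLimit-Policy: 100;w=60, 1000;w=3600
# versus today's draft, where both kinds of data live in RateLimit-Limit.

def parse_limit(value: str) -> int:
    # Stays a plain integer, like RateLimit-Remaining / RateLimit-Reset.
    return int(value.strip())

def parse_policy(value: str):
    # Each list member is quota;w=window-seconds (an SFV-style list).
    policies = []
    for member in value.split(","):
        item, *params = [p.strip() for p in member.split(";")]
        window = next((int(p[2:]) for p in params if p.startswith("w=")), None)
        policies.append((int(item), window))
    return policies

assert parse_limit("100") == 100
assert parse_policy("100;w=60, 1000;w=3600") == [(100, 60), (1000, 3600)]
```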
I think they are all currently listed - that being said I think porting to a new header should be an easy task, so hopefully not a problem.\nif almost everyone is implementing just the minimal specification, probably this could even simplify the addition of ...\nThe primary motivation is breakage, as you suggest, NAME The secondary motivation is ease of \"upgrading\" to adoption of quota policies, as NAME suggests. The flipside of \"easy to adopt, easy to upgrade\" is that the spec would also make it easy to never upgrade at all. This is a good thing, supposing that most services can operate perfectly fine without quota policies. A third, much weaker argument, is that the current approach of putting the \"limit\" and \"quota policy\" in the same header is that it's confusing. I would need more time to really state this argument more clearly, but what rubs me the wrong way here is that the current design basically puts two different kinds of data in the same list. As it stands, is a list of a numbers, but the first number (the \"limit\") is not like the rest of the numbers (the policy). The first number cannot have SFV parameters, but the others can (and typically will). It's an accident of SFV's syntax that this is even conceivable as an approach. For implementors, this means that does not simply deserialize as some sort of . It deserializes into a list, where the first element is an and the tail is a . I fear this will at minimum confuse folks, and perhaps even lead to incorrect implementations, such as people not realizing the first element of the list is not a quota policy at all.\nGiven URL it seems there's some consensus in providing quota policy in a separate field."} {"_id":"q-en-resource-directory-c955419a38115de312f9189507679ebc200d8f7bbe61a0e7f71f03ddaf277fe4","text":"This contains what I think of the easy fixes to the requirements of replicated RDs. See URL for more discussion, please comment on whether you think those changes are acceptable.\nThe minimum for interoperability IMO is that client's don't always pick the first choice -- because if they do, no fault tolerance is possible. Do you think that that even this is too much specification of behavior, or did I write more in the PR than what would be needed to satisfy that minimum requirement?\nchrysn schreef op 2018-02-01 17:03: Is that so? The first choice of one client is not necessarily the first choice of another when we are confronted with RDs which have contact problems. Several protocols have handled this problem: like dhcp for example. Why not copy that behavior?\nI think we're talking about different \"first\"s here. I'm not concerned with what happens when multiple servers answer the multicast CoAP request. (Led to an interesting chat with PIM people but is not a matter that should concern the RD). What I do think we should give guidance is what happens if the RDAO announces multiple addresses (something explicitly allowed in the current draft), or the URI discovery response contains more than one rt=\"URL\" link Both are within what I think we describe as acceptable server behavior. If clients are unaware and just always pick the first line (because it looks like a zero-or-one situation to the implementers), interoperability issues will arise when two are announced an the first one fails.\nOk, got it, see below chrysn schreef op 2018-02-02 12:11: Yeah, that's a nuisance. Someone seems to know what they want that for, annd probably have some intelligent algorithm to break ties.......... 
One might say that client MAY choose their RD randomly. I assume that is a multicast response? Then the first response is probably the closest and the best choice. And when one MC response contains multiple RD links, We are not responsible, and something clever is being thought out somewhere, we hope. I think DHCP specifies clients to wait so many seconds before choosing a server. When the first one fails, the client will try the second one, und so weiter ... What am I missing still?\nfor interoperability, the client behavior when multiple URIs are discovered, or the RD does not answer, need not be specified. The text starts to look more and more like Bonjour"} {"_id":"q-en-resource-directory-4f8d4b6e13be3fc2f8a633f44d8a5c37b00e3c680f62ed71f556f6c3ce4332df","text":"This allows for gateway devices that register their mapped legacy devices individually. Closes: URL The delta is smaller than I expected because at some places, paths were already allowed (eg. in the IANA table for \"con\"). One place where I'm unsure if we should change it because I don't know what it actually means is the \"scheme, IP, port\" annotation in fig-hierarchy -- what does it mean there?\nwhat is a path no scheme form? An example might help here.\npath-noscheme form is one of the possible ABNF forms for a relative reference from ; it basically means \"relative references that does not even start with a slash\". I'm conflicted about having an example in the draft (our examples already feel like a test suite), but I'll give one here and we can still consider moving it in: ~\nMerging this with a clarification for the ABNF item, so we have a single version to talk about later today.\nI'm aware this is a discussion we've already had, but back then we dropped it for its complexity and because we had no use cases. Matthias has brought up use cases for allowing a path in the con as part of the -13 reviews. He argues that a gateway bridging into a non-CoAP world would want to register its proxees (is that a word?) individually, and if it doesn't use name-based virtual hosting (which is kind of rare in CoAP and needs wildcard DNS records), could go for path-based hosting. As I see it, this would be a trivial addition from the RD point of view (\"just allow it\"). The onus is on the endpoint / gateway to do it right, ie. to never use path-absolute URI references unless they mean it. What does make this a bit more complicated is that none of the content formats we currently describe allows using that meaningfully -- but we are not limiting the RD to those content formats, and they may (and hopefully will) still change while we're finishing RD. On the editing track, I would keep those cases out of the examples (I'm game for building a test suite, but the examples shouldn't be that). Then, the editing becomes a matter of replacing \"specified URI MUST NOT have a path component\" with \"typically does not have a path component\", adding the one or the other \"typically\", and being done with it. (This would also make URL obsolete, as those components would be allowed). NAME (and possibly NAME I'd assume you won't be too happy with (based on previous discussions of that topic), but do you see Matthias' point, or could point out concrete issues you see from allowing this?\nI would prefer that not be made obsolete on this. You should be able to have a URI w/ a path and a relative URI which are then combined. 
If we need to combine things like query parameters then it is going to get really messy really fast because there are not good rules about that. I don't have any problems w/ allowing for the con to have a path and resolving with it. I just think that we need to establish what the rules are in the RD document rather than using the horrible link format rules.\nThere are very good rules about that query parts and fragments in the Base URI: Anything that is not an empty string or starts with a \"#\" removes the fragment identifier and the query parameters. The Base URI's fragment identifier is never kept. (Derived from RFC3986 Section 5.2). I'll try to write text on this issue on the weekend (NAME a \"no we can't have that because\" would be more appreciated today than later), let's talk about again when we have that. Dear list RD allows the \"con\" parameter to provide a base URI for relative resource URIs, when the registration is done from a different source address then the registered endpoint itself. \"con\" is restricted to the scheme and authority part of a URI, that is, no path segments are allowed. This came from the assumption that all CoRE devices will naturally correlate with socket-address endpoints. However, a registered device(=endpoint, \"ep\") might not be a natural endpoint. When using gateways or device proxies, the base URI will have path segments and multiple logically different endpoints will share the same socket-address (e.g., URL, URL, URL). This makes it expensive to move such logical endpoints, as updating \"con\" will not help. All links have to be removed, changed to the new prefix, and registered anew. This is a problem for some applications (including a bigger alliance using CoAP), in particular because the patch mechanism is still to be defined - yet rewriting URI prefixes still will not be really efficient even when having patch registration updates. I thus propose to generalize \"con\" to be any base URI that will be applied to all relative links an endpoint registered. If there is a conflict with a deployed mechanism, we could look into adopting \"anchor\" and let RD use it similarly to \"con\", but allowing full base URIs (including paths). Or we align with RFC 8288 altogether and replace \"con\" with \"anchor\": URL Ciao, Matthias"} {"_id":"q-en-resource-directory-4096d948873e1d3fb961fce391fa6b92c4af1b1cecdfe0ee8210e7aed629c4dc","text":"Closes: URL\nAlexeys' review mentions IANA considerations for .well-known/, and while we don't have any from the sentence he referred to, we are adding to behavior of .well-known/core in the simple registration -- grabbing its POST behavior. Do we need to update the registration entry for that (or update 6690 for its registration)?"} {"_id":"q-en-resource-directory-32bef8a51464c6c5a02632ed16d500cedf54eb860fa443e8a60db272c6dde287","text":"All expected to be uncontroversial; to be merged soon.\nCommit 3809be46013e9708a6584c4fb730e7cfecab5f22 should have been part of this as well, and is already applied to master."} {"_id":"q-en-resource-directory-958cbbcfd5d5e27cc91cdc47736839b3390a9d2736a4da82eece6f3c02eb5bb8","text":"as suggested by Russ in the genart review: NAME did I get the intention of the expression right?\nLGTM"} {"_id":"q-en-resource-directory-e3348fd9e0f63e09d108538f25c276a0170649827f5f727ac6908f899cc8e5bd","text":"See-Also: URL This is quite literally from my notes on URL A bit of uncertainty remains whether this change should be two-sided. 
Should the line above say \"When SLAAC is in use,\" (or \"When ND is in use,\"?) as well? Can RA options be used in the presence of DHCP or static address configuration?\nUpdated to a plainer, not SLAAC-related statement about the RDAO as per the last interim. Thank you for the work put into this document. As you may have noticed, I have cleared my 2 DISCUSS points (kept below for archive); thank you also for replying to my comments over email. BTW, I appreciated the use of ASCII art to represent an entity-relationship diagram! I hope that this helps to improve the document. -- Section 4.1 -- It will be trivial to fix: in IPv6, address configuration (SLAAC vs. DHCP) is orthogonal to DHCP 'other-information'. E.g., even if the address is configured via SLAAC, DHCPv6 other-information can be used to configure the Recursive DNS Server (or possibly the RD). -- Section 4.1.1 -- Another trivial DISCUSS to fix: in which message is this RDAO sent? I guess unicast Router Advertisement, but this MUST be specified. In general, I wonder how many interactions and exchanges of ideas have happened in the long history of this document with the DNSSD (DNS Service Discovery) Working Group, which has very similar constraints (sleeping nodes) and the same objectives. -- Section 2 -- To be honest: I am not too much an APP person; therefore, I was surprised to see \"source address (URI)\" used to identify the \"anchor=\"... I do not mind too much the use of \"destination address (URI)\" as it is really a destination, but the anchor does not appear to me as a \"source address\". Is it common terminology? If so, then ignore my COMMENT; else I suggest changing to \"destination URI\" and simply \"anchor\"? -- Section 3.3 -- Should the lifetime be specified in seconds at first use in the text? -- Section 3.6 -- Is the use of \"M2M\" still current? I would suggest using the word \"IoT\", especially when 6LBR (assuming it is 6LO Border Router) is cited later. Please expand and add a reference for 6LBR. Using 'modern' technologies (cfr LP-WAN WG) could also add justification to section 3.5. -- Section 4.1 -- About \"coap://[MCD1]/.well-known/core?rt=core.rd*\", what is the value of MCD1? The IANA section discusses it, but it may help the reader to give a hint before (or simply use TBDx, which is common in I-Ds). Any reason to use \"address\" rather than \"group\" in \"When answering a multicast request directed at a link-local address\"? Later, \"to use one of their own routable addresses for registration.\" -- but there can be multiple configured prefixes... Which one should the RD select? Should this be specified? As a co-author of RFC 8801, I would have appreciated seeing the PvD Option mentioned as a way to discover the RD. Any reason why the PvD Option cannot be used? -- Section 4.1.1 -- I suggest swapping the reserved and lifetime fields in order to be able to use a lifetime in units of seconds (to be consistent with other NDP options). -- Section 5 -- Maybe I missed it, but can an end-point register multiple base URIs? E.g., multiple IPv6 addresses. -- Section 9.2 -- For information, value 38 is already assigned to RFC 8781. -- Section 2 -- The extra new lines when defining \"Sector\" are slightly confusing. Same applies to \"Target\" and \"Context\". This is cosmetic only."} {"_id":"q-en-resource-directory-b6f7edcdf98a1c1df9b2d4c92deebd62e85cc65118699f3535188be4400a1f25","text":"...
and a few other cases of example errors and use of unregistered names."} {"_id":"q-en-resource-directory-1df8d512d22b612934f94e99a1dbd86e986265b7017189d3468a2b33cade1660","text":"Authentication alone is never sufficient; with the First-Come-First-Remembered policy of , it should be easier to see how this is still authorization. See-Also: URL Thanks for addressing my DISCUSS and COMMENT feedback"} {"_id":"q-en-resource-directory-814e81dff78dd4f5bb53b33227fa893e677132faa817af05a450a97e97ef462b","text":"Closes: URL See-Also: URL\nFrom Benjamin Kaduk's comments: Is it? Paging NAME NAME and NAME as I honestly don't know. Trouble is: The way CTs come across in the examples, the CT goes in, does its job and leaves. But the RD's lifetimes are finite, and \"Registrations in the RD are soft state and need to be periodically refreshed\". So what happens when the time runs out?\nA CT can indeed come in later to assist in a reconfiguration of the network/applications, e.g. adding a device, removing, or changing members of a group, or changing parameters of devices. When the CT performed a registration on behalf of an IoT device the expectation is that a very long lifetime will be picked, such that the CT doesn't have to come back to refresh. This looks problematic in case the RD state is easily lost - e.g. suppose it reboots and loses all such entries? Then there may be no CT around to send the entries to the RD again. (I would assume in this scenario that the CT's registrations are more \"permanent\" so they would survive a reboot for example. They can be restored back from a database. In another project I work on, the CT can perform such persistent registrations that survive reboots, using a special 'max' value of lifetime that signals persistent registration.)\nThanks, with that I can at least phrase an answer to the immediate question. Would it, in the deployments you describe, be conceivable that the CT is recalled on site on extraordinary events, like when the RD host is migrated to a different system or address? On the reboots, I hope that we are OK with the current situation that an RD would persist its state when it can, and administrators would not use the more volatile variety when they intend to configure long lifetimes. (If we aren't, we'd need to start thinking about negotiation of lifetimes, and that'd bring a new aspect of complexity I hope to avoid).\nchrysn wrote: Hi. I thought I had commented on this somewhere already. A BRSKI-based \"CT\" would have a registrar which does not really leave. If we are talking about how I think that brski-async-enroll will work, then the CT will enter the no-internet-zone, do some connections, and then leave. (Think new home construction in a new suburb) The RD lifetimes could be finite, but a few years in duration. The home owner, upon getting the keys (physically and cryptographically), would, of course, rekey the entire house, and now there would be a management system to update RD lifetimes. Thread has a CT. I have no idea how it works, but perhaps NAME could comment. The Zigbee CHIP project is doing something, but it's all NDA for now. I think that, when the time runs out, if there is nothing else to manage the system, then it should fail. I think that's okay. I think that the timeouts won't be minutes, but months... yes, there are issues. I don't think that the RD document needs to cover them though. {Feel free to invite me to collaborate on a script for the sequel to Sneakers/Leverage/MissionImpossible/EnemyOfTheState.
I suggest it star Tom Greene rather than WillSmith or TomCruise.}\nI think that there is probably a followup document that provides a way to back up/restore RD host data. OPC UA told me that they have created a spec for device-independent backup/restore of configuration data in the case of device failure, but I don't have the details yet.\nThis might fit into -- any exchange format between the RDs could also be re-used for persistence. (With the caveat that the receiving server would need to inhabit the same address as the original server, or otherwise it could not service registration updates.)\nWell, could it be an anycast address so that the RD could be highly available?\nI don't see this as a viable design for a deployment. A change of system/address/network in the backend should never require the CT to come in and redo commissioning operations on IoT devices. Feeding the 'new RD' with stored information from a database is what would be most useful here. This database could be all those entries of devices that the CT had scanned initially, not necessarily the full RD state. Or alternatively all RD entries with lifetime e.g. > 48 hours could be restored; other devices need to re-register. Those entries that were registered with the CT could have a 10-year lifetime for example.\nHi all, There are many installation firms, all with their own procedures and tools. It is possible that an installation company deploys a CT during installation and takes it home afterwards. The contract might stipulate that it leaves behind a program to restart, update or refresh the network. It might be that another company does the maintenance with another set of tools. Concerning duration, we may talk about years of operation without interruption. On other sites weekly upgrades may be necessary. At the end of the day, there are no general rules that may fix the CT behavior. I am afraid this does not help a lot and emphasizes that the CT example is just a likely example. Cheerio, Peter Esko Dijk wrote on 2020-10-21 09:02: Links: [1] URL [2] URL Thanks for addressing my DISCUSS and COMMENT feedback"} {"_id":"q-en-resource-directory-fb4f6110ea9ae3a5c021810487c5f0e30e2ebc2951691eefda7b8d8b13c6ec08","text":"The SHOULD is not an interoperability requirement but a quality-of-implementation level recommendation; the MUST could have been read as a requirement to have parameters there and is more describing mechanism than compatibility. See-Also: URL Hi, Thank you for this document. I'm glad that the shepherd's writeup indicates that there are implementations since that indicates that there does appear to be value in standardizing this work, despite its long journey. My main comment (which I was considering raising as a discuss) is: Since this document is defining a new service, I think that this document would benefit from having a short later section on \"Operations, Administration and Management\". This document does seem to cover some management/administration considerations in various sections in a somewhat ad hoc way, but having a section highlighting those would potentially make it easier to deploy. Defining a common management model (or API) would potentially also make administration of the service easier; I don't know if that has been considered, and if useful it could be done as a separate document. A few other comments: What happens if an endpoint is managing the registration and is upgraded to new hardware with a different certificate? Would the updated endpoint expect to be able to update the registration?
Or would it have to wait for the existing registration to timeout (which could be a long time)? Nit: Perhaps reword the second sentence. Otherwise it seems to conflict with the last sentence of the prior paragraph. The \"SHOULD NOT\" feels a bit strong to me, and I would prefer to see this as \"MAY NOT\". In many cases, if the configuration is not too big then providing the full configuration makes it easy to guarantee that the receiver has exactly the correct configuration. I appreciate that there are many cases where from an endpoint perspective it may want to keep the update small, but if I was doing this from a CT, I think that I would rather just resend the entire configuration, if it is not large. Just to check, is it correct that the anchor in the http link is also to coap://? If this is wrong then there is a second example in the same section that also needs to be fixed."} {"_id":"q-en-resource-directory-27f98938ee9f6721f145416ee13d07e7b183226c5a93d3940633b32230ca2f17","text":"This provides two band-aids to URL, and may also close it far enough that RD can progress without the underlying topic being solved in full.\nThis is a tracking issue to cover the topics of URL -- see there for discussion, the issue is primarily kept for cross-linking PRs.\nSince this is covered well enough that it need not stop RD. Further exploration should happen in ACE and CoRE. Hello CoRE, one of the outstanding issues in RD is server authorization with respect to operations on particular paths. This continues an ACE thread[1] I regrettably missed to follow up on, but simplifies the examples based on the recent interims. While originally phrased as the attacker tricking the server into using the client's good credentials for something the client did not intend, here they are phrased now in terms of server authorization as relevant for RD discovery and RD security policies. For two examples, a client needs to trust a server to provide a service, but has received misleading hints that cause disagreement between the client and the server about the performed operation. Example 1 An RD with a \"links are confidential\" policy also operates a bulletin board at . During the unprotected discovery phase, the endpoint is fed a link . The client connects to URL, verifies that its server credentials are good for an RD that will reveal links posted to it only to authorized lookup clients. It then posts all its links to /bb, where anyone who knows to use a bulletin board can read them. Example 2 A time server also exposes its current core temperature as . A malicious RD operator modifies the time server's registration to read . A starting device in the network asks the RD for any rt=unixtime, verifies that URL is indeed authorized to give the right time, and wakes up on the morning of January 1st, 1970. Problem summary The client has an abstract intention with an operation, which it checks against the server's authorization. Then it forms that intention, with additional unverified knowledge about the server, into a request, which it protects. The server then reconstructs that intention with its authoritative knowledge of itself, and uses any client provided credentials to act on it. If the client's knowledge about the server is erroneous, the intentions are misaligned. Neither is the intention protected, nor does the client point out the claims it wants to hold the server to in its request (or, in the original examples from the ACE thread, express the precise claims about itself it intends the server to assess). 
Solution approaches In the ACE thread back then, Jim said that he'd expect the client to use different keys (originally in talking to the AS, and thus different keys in talking to the RS). Doing that to the end would probably result in the devices having to handle a lot more keys, something like one for any operation that could be mixed up with any other? Michael in the last interim mentioned that in OAuth it's common to only run one service on one host for the very same reason. Kind of boils down to the same thing, as virtual-hosting these services would create multiple keys. My approach in the -25 RD Section 7.2 is to (loosely, there; it would need stronger and more precise wording) say that any information using which intention is encoded into the request in a lossy way is to be rechecked from the origin server. That'd be more or less straightforward for example 1, but incur an additional costly recheck with the origin server about many things that could be learned efficiently from the RD. Does any of that solve the issues? Can we do any better? How do we put that into RD to satisfy the small comment about Section that opens this can of worms? Is there anything quotable on OAuth's best practices on multiple services on the same host? Best regards Christian [1]: URL To use raw power is to make yourself infinitely vulnerable to greater powers. -- Bene Gesserit axiom"} {"_id":"q-en-resource-directory-91ba2272401eaacc60627b904522db8d45253d3e2e9011e06e753537b8af8b53","text":"Some small mainly editorial fixes from Ben's review, to be merged soon.\nFrom Ben's updated ballot: Good point, queued up.\nFrom Ben's updated ballot: → \"The RD (always, but especially here) then needs to verify any lookup client's authorization before revealing this information directly (in resource lookup) or indirectly (when using it to satisfy a resource lookup search criterion)\"\nFrom follow-up on Ben's initial DISCUSS: → \"regular registrations (i.e., per {{registration}})\"\nFrom follow-up on Ben's initial DISCUSS: It'd sound like abuse of semicolons to me; keeping it open to ask the RFC Editor for their technical writing expertise."} {"_id":"q-en-resource-directory-716f025c175d49e0acccc9ec19d47ebf6718d2620229814ec51720fd997bf2ef","text":"Closes: URL For immediate merging\n... and other terms defined in terminology. Best addressed after and the anchor changes are through."} {"_id":"q-en-rfc-censorship-tech-9f2e8ccafa57621911359d6b73e43d83e8999dfaf5a5764f3b342f268c539711","text":"I think there is a trivial typo in the first sentence of \"Trade-offs\".\nthanks!"} {"_id":"q-en-rfc-censorship-tech-63880a71c97fdfd4de66e7b0d118bfcd85a57341b519701b8d30728d599e454d","text":"s/prevention/interference/ ... Censors can degrade performance to a particular site, or for a particular service, effectively making performance so bad for a site that users opt to use a different site or service. China is known to do this. Censors can try to have content removed via takedown requests, under the guise of laws that forbid certain content (e.g., hate speech or whatnot). The ANC in .za has been accused of using the laws in this fashion.\nWill Scott also noted in feedback to me that throttling should get more of a call out than it currently does:"} {"_id":"q-en-rfc5245bis-b5cd4409765c0a452650b7c974d9b27157a8fc8b4f344164ace01f4c1805e5d4","text":"Ta and RTO settings are now the same for real-time media and non-real-time media.
An appendix has been added, showing bandwidth consumption based on different values, and different ufrag value sizes."} {"_id":"q-en-rfc5245bis-18df5ec1119759300591667ddf2fe2b465b6fc8ea75944023f0749690f7da685","text":"The following issues are addressed in this commit: URL URL\nLGTM\nTo prevent one checklist from starving.\nThis was discussed (and, as far as I am concerned, agreed) on the list 1st June. But, it's good to have a reminder :)\nThis one has not been addressed as far as I can tell. Now we get to decide if it's important enough or not to do something about :)\nI had a look at this, but for some reason it never got implemented. I think it should be an easy fix.\nThe changes section looks very small compared to how many changes we've made, so it definitely needs updating. Now we need to decide whether we have to update it before or after requesting publication.\nYes - we just have to remember everything :) However, I don't think we should go into details: the major changes (that I can think of when writing this) are removal of regular/aggressive nomination, and the changes related to \"Emil's table\".\nLGTM"} {"_id":"q-en-rtcweb-overview-c71b8494d792b0e174eccbccf56094298fd9a4b21d1735aeb9aa77e0e4d0d165","text":"Made changes to address Warren's first and third, but not 2nd nit. I'm unsure whether the changes in the spacing I introduced will fix the odd spacing in Olle's name.\nWarren noted the following nits in his ballot: 1: \"Development of The Universal Solution has proved hard, however, for all the usual reasons.\" -- this is cute, but leaves people wondering what \"all the usual reasons\" are. Perhaps just \"Development of The Universal Solution has, however, proved hard.\" (or just cut after the \"however\" in the original). 2: I'm not sure why you have \"Protocol\" in the terminology section. It doesn't seem like it is useful for the document, and this document doesn't seem like the right place to (re)define it. 3: Acknowledgements: Funny spacing in \"Olle E. Johansson"} {"_id":"q-en-rtcweb-overview-2c7568696a410e860319c25dabdd7a0b1480d697c1a3f6fe605ce753f131fd87","text":"Spencer had the following comment: I'm not sure how tutorial you want section 4 to be, but I'd at least mention appropriate retransmission and in-order delivery, in addition to congestion control, since you get that with SCTP on the data channel."} {"_id":"q-en-rtcweb-overview-d39635abf99490885bfc724e1820bfed430f5e5e194bd44ee9df30d642be01c3","text":"EKR entered the following DISCUSS: Your citation to ICE is to 5245-bis, but at least the JSEP editor consensus was that WebRTC depended on 5245, so this needs to be resolved one way or the other.\nSean asked me to leave this one open. Personally, I'd somewhat prefer 5245bis because it separates out the SDP machinery, but JSEP has a specific section reference (section 15.4) that has to go to either 5245 or to mmusic-ice-sip-sdp, and one could argue that we need to pull in -sip-sdp here too. I'm OK with 5245 as a reference, but will let the chairs make the call.\nI no longer care about this distinction."} {"_id":"q-en-rtcweb-transport-59c0e12ecbf6363dec5e18c957a182b44e2e7a151c5b2f79f7fb74f3ca0e3d2a","text":"reformulated the TCP point, addressed security with a reference to -qos.\nFrom Ben Campbell's review:\nSecond point is fixed by . I'm not sure the third point is an improvement.\nSSL is used exactly once, as part of the acronym \"TURN/SSL\". This is also the first mention of TURN, but the second was much more convenient to attach the expansion to.
Given that it was so awkward to attach them, I included a new section listing the \"used protocols\" instead.\nFrom Magnus Westerlund in IETF LC: Section 8.1: [I-D.martinsen-mmusic-ice-dualstack-fairness] Martinsen, P., Reddy, T., and P. Patil, \"ICE IPv4/IPv6 Dual Stack Fairness\", draft-martinsen-mmusic-ice- dualstack-fairness-02 (work in progress), February 2015. Can be updated as this is now an WG item in ICE.\nUpdated in\nFixed by\nI have reviewed this document in preparation for IETF last call. It is in good shape and I have requested the last call. While the last call is ongoing, I would like to double-check that this document (particularly Section 4) raises no new security considerations that should be documented in Section 6. I'm wondering if there are scenarios where a malicious application could try to game the priority scheme to some ill effect? draft-ietf-tsvwg-rtcweb-qos talks a bit about this, so I'm wondering if anything needs to be said here. And I found a couple of nits to be resolved together with any LC comments: Sec 3.2 - Please update the citation to point to draft-ietf-mmusic-ice-dualstack-fairness-02. Sec 3.4 - \"Third, using TCP only between the endpoint and its relay may result in less issues with TCP in regards to real-time constraints, e.g. due to head of line blocking.\" This is awkwardly phrased. I would suggest something like \"Third, using TCP between the client's TURN server and its peer may result in more performance problems (due to head-of-line blocking) than using UDP.\" And one bit I left out — if text is to be added to reference draft-ietf-rtcweb-return, that should be done together with addressing nits and LC comments.\nThere seems no consensus to add RETURN, so I'm skipping this for now."} {"_id":"q-en-rtcweb-transport-d4db569a2abc191074a19e240b157bc8fde757fcf47e554bd5899b2a3700b0e1","text":"NAME please review\nFrom Magnus Westerlund in IETF LC review: Section 3.4: If TCP connections are used, RTP framing according to [RFC4571] MUST be used, both for the RTP packets and for the DTLS packets used to carry data channels. I think the last part of this sentence, unintentionally excludes the DTLS handshake packets. The ICE TCP spec also specifies that the STUN connectivity checks are using RFC4571 framing. Thus, I think some reformulation of this sentence is in order."} {"_id":"q-en-security-arch-10bf1bca901b8f66f9272baa2ea8b8d623c270016151adbe4f2b9330cfcb7d9d","text":"Just saving some typing because this section will eventually need to match what's in RFC8174."} {"_id":"q-en-security-arch-8f9acac3c47187e956ae13ea556433c14443e9560c2f05b016f8f0ee0cb83f35","text":"NAME PTAL\nThe current text on user identity covers the domain portion well enough, but doesn't really say anything about how the part preceding the 'NAME is handled. There is text regarding the use of a local address book for the purposes of rendering, which might help avoid these issues entirely... where available. We may need to consider addressing the issue of confusable characters and whether we need to define the use of a particular normalization form. (See )"} {"_id":"q-en-senml-spec-1229d9115d7203b9d57179bfc0356dba1c3acdc583b0da56f50e6ad5f5f53041","text":"See email from paul L Nov1 to media-types Hello Cullen, it seems to me that all the content-types you have been describing about senML are candidate content types to be exchanged over the clipboard of a normal mobile or desk computer; e.g. from a reading app to another, … Do you agree on this? 
If yes, then I’d like to recommend to add: MacOS Uniform Type Identiers Windows Clipboard Name for each of the content-types. If there is earlier experience in this realm, probably this should be taken, otherwise, I can probably suggest some clipboard names. Standardising such at the Media-Type declaration level allows to set a reference frame so that other implementations can follow, just as the content-type name. For examples of including this in the media-type registration, see that of SVG or MathML. thanks in advance. Paul\nAt IETF 97 was \"probably should do this\" - see URL\nSee MathML at URL\nOn the UTI, currently only apple can declare ones in the public space ( see URL ). I do not see how to request one. There is a bunch of extra info apple would need to add to the OS to really make this work outlined in URL\nWindows clipboard info at URL(v=vs.85).aspx\nMakes sense; no need for a clipboard format for the streaming and compressed versions actually."} {"_id":"q-en-senml-spec-bdc8464c3e6ae57492519655751378e4b3fc0582d5b535fa3c8260715379e561","text":"This should now consider and . There are no perfect solutions for these, but the solutions chosen should be close.\nI don't agree wth some of this but agree with a bunch. I took carsten's changes for all the parts I agree with and put them in URL\nI might suggest we pull that PR and then look at what is left (on a bunch of them it is simply if there should be an * suggesting don't use or not) then we can rebase this PR and see what is left to resolve.\nOops, I just changed my PR to react to your comments before seeing\nPushed another version that has both / and % for the same thing, where % is marked with a *\nNow, instead of delete, it's \"not recommend\" and mentions kg as exception. I think this is good to merge.\nThe following units are questionable: % -- this looks like a percentage (0..100), but it is not (0..1). count -- what? This is about a dimensionless quantity? %RH, %EL appear to be percentages, but imply a quantity. EL may be a quantity, but the unit is second. In SI, we don't do new units for new quantities (there is no \"millimeter radius\" or \"nanometer wavelength\"). Bspl -- this really should be in dB (see ), SPL is the quantity again. beat/min, beats -- these should be 1/min (but see ) and 1 (dimensionless); how is beats different from count? We don't have bit (but bit/s), byte. 1/min is there only for heartbeats, not for revolutions (\"rpm\"). Should really be 1/s (but see ). lat and lon are a bit weird, too. (Celsius is also questionable -- the rationale of should call for all measurements being expressed in Kelvin.)\non on kevin vs C, every thermostat manufacture I can find has said they want C over kelvin. bunch of implementation are using 0 to 1 for light levels but imagine we could change % to 0 to 100 count - yep count :-) Bunch of sensors simply count pulses of who knows what. Totally agree this is not a unit but this is the pragmatic sort of thing needed EL is a great example of something that many sensors in sports have to deal with in common way. They use this and it works well for them\nSo I changed % to mean percentage. Cel of course stays in (even though it is wrong wrong wrong :-). I have marked count and EL with the \"not so recommended for new producers\" range. But we don't really have a base unit for \"1\" (dimensionless) either. 
The problem remains that giving a unit doesn't tell you what quantity actually has been measured, and the trend to make more and more fine-grained units for the same kinds of quantities measured in different ways (as in psi vs. psig etc.) leads into the woods.\njust finding all the email now … I think the thing that people like about this is it's not an SI unit, it's something that tells you something useful about what you are measuring across a wide range of devices. I think we need to think of these from the point of view of whether they are broadly useful rather than whether they are SI units. Agree with your slippery slope psi, psig, etc. argument but all of those exist for a reason.\nThe SI unit for mass is kg. says: That leads to being unable to use the SI unit for mass, instead, we are using g. But then we are using derived units such as l (dm3) and l/s (dm3/s). So: We need to decide the above TODO. We need to decide whether to stick with SI units or allow simple names for derived units (gram, minute, liter). Domain-specific units are often weird (1/min = \"rpm\", mmHg or mol/dl in medicine, ...).\nI think a key part of the registry vs existing SI unit tables was to be able to use things that were not proper units but agree things can be cleaned up. Clearly rpm and bpm have the same unit in the sense of what the dimensions are but what they are used to measure is pretty different and knowing that it is bpm often provides a level of information about how to process the information. As related to our whole conversation on why we needed the IANA table, the ability to have things in this that are not really a classic SI unit is the key value of SenML. The rules are just guidelines. If we want to change the unit of mass from g to kg, we can do that but the point on the prefixes was not to have both g and kg as that just reduces interoperability"} {"_id":"q-en-sframe-f81072599758aba5a849e448f12f5c42012e6216a926b9f934381be7219f3a2c","text":"I think that the hash/auth tag list would be better moved to the end of the frame after the auth tag to make it easier to apply the encryption and generate the auth tag without having to reconstruct the frame to insert the signature in the middle of it. So, my new proposal would be first to generate the frame header with the s bit = 1, append the payload, encrypt the frame and add the auth tag as normal: Then append at the end the tags of the previous frames and the number of auth tags, not counting the current frame's one Then calculate the signature of the auth tags in reverse order: And then append the Signature at the end of the frame: When the receiver gets the frame, it will also be able to reuse the buffer to calculate the Signature without having to move memory around. In case it is not secure to use the auth tags, it would still be similar: The receiver would have to generate the Hash of the frame up to the Auth tag, and then calculate the signature with the rest of the hashes on the list.\nAdding the Signature at the end will mean adding a new offset field to point to where the Signature block starts, as it will be a variable-length block (depending on how many hashes it has), whereas if it was put at the beginning, we won't need to add an offset field.\nI think doing the signature over the auth tag is fine\nYou are right about the startFrameIndex, it won't be enough. What about having startFrameIndex 4 bytes or whatever, then having a one-byte frameOffset for every other hash to get the frameIndex by using this offset from the start index, assuming 255 is enough range.
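A rough sketch of the frame-assembly proposal above (the original ASCII diagrams did not survive, so the exact order of the trailer fields, the one-byte count, and the sign callback are assumptions):

    def append_signature_block(frame: bytes, prev_tags: list[bytes], sign) -> bytes:
        # `frame` is assumed to already be header | ciphertext | auth tag,
        # with the S bit set in the header. Append the previous frames' auth
        # tags, a one-byte count of them, and then a signature computed over
        # the tags in reverse order plus the current frame.
        assert len(prev_tags) < 256
        out = bytearray(frame)
        for tag in prev_tags:
            out += tag
        out.append(len(prev_tags))
        out += sign(b"".join(reversed(prev_tags)) + frame)  # e.g. Ed25519
        return bytes(out)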
I just don't want to add a full-length frameId for every hash\ndidn't you say that: Why is it fixed if it is at the beginning, but variable if it is at the end? why do we need the frameIndex at all? The receiver would just need to have an ordered map of the authentication tags pending verification and remove them when the signature for each is received. The receiver should also have a verification timeout and if a signature is not received for that tag in time, it would be considered as failed.\nOk, I see, ECDSA signatures are ASN1 encoded objects, so the length can be derived by parsing it from the start byte. Is that correct?\nI have made a new tentative approach with my comments above. What do you think?\nSorry, I should have been more clear. The signature itself is fixed length (e.g. 64 bytes for Ed25519) but the entire signature block will be variable, depending on the number of signatures, unless you want to make this number fixed (e.g. always 50 hashes)\nWe need frameIndex so the recipient knows which hashes to use to verify the signature; we can't just rely on an ordered list of all previous hashes because the signature block could have hashes of missing frames, or even the signature packet could be lost itself\nWhat is a signature block? What is the difference between the signature and the signature block? That should be covered by my proposal.\nsignature = sign(data) Signature block is what I referred to on my drawing as signature header + signature + hash list\nIf I understood your proposal correctly, you are sending the whole list of hashes so the receiver can recreate the signature input even if any of them is missing. So you don't really need the index of each frame, as there should be (almost) a 1-to-1 mapping between the hashes/auth tags and the frame index (how probable is it to have a hash or auth tag collision?). So, given the hash/auth tag list, the receiver could derive the frame index of it (by keeping an internal map)\nI think that's good for now. My initial thought was to have the sender choose the packets to sign; they can be a different number each time, and they might also choose not to sign every packet, for example random X packets every Y seconds. Anyway, we can always improve later, your proposal is good to start with\nThis should also be possible with the current approach, right? Have you seen any reason that it couldn't be done?\nIn the draft it says that: >when the signature is sent, it will also send all the hashes it used to calculate the signature What is the bitstream format of the signature+hashes, also how is the signature added to the frame (after the auth tag?) and how is the presence of the signature in a frame signaled?\nHow about something like this for the signature format? The signature is appended after the block if the S bit of the SFrame header is 1. The first byte (SLEN) is the length of the signature data. After the signature there will be 1 or more blocks of HMACs. As they can be of different sizes, each HMAC block will have a 1-byte header with HNUM (4 bits) indicating the number of HMACs present and HLEN (4 bits) to indicate the size in bytes of each HMAC. To indicate the end of the HMAC blocks an HMAC header block with HNUM=0 and HLEN=0 will be inserted.\nOne question, what are the sizes of the HMACs? I have considered the ones for AES which are 4 and 8, but in GCM they are bigger, right? Are they required to be even or a power of 2? we could do either length=2^(HLEN+1) or LEN=2*(HLEN+2) in that case.\nI don't think they have to be 2*n.
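A minimal encoding sketch of the SLEN/HNUM/HLEN wire format proposed just above (which nibble carries HNUM and which carries HLEN is an assumption, as is the single-byte SLEN):

    def encode_signature_block(signature: bytes, hmac_groups: list[list[bytes]]) -> bytes:
        # SLEN byte, then the signature, then one header byte per HMAC block
        # (HNUM in the high nibble, HLEN in the low nibble -- an assumption),
        # each followed by that block's HMACs; a 0x00 header (HNUM=0, HLEN=0)
        # terminates the list.
        assert len(signature) < 256
        out = bytearray([len(signature)]) + signature
        for group in hmac_groups:
            hlen = len(group[0])  # all HMACs in one block share a size
            assert 1 <= len(group) <= 15 and 1 <= hlen <= 15
            out.append((len(group) << 4) | hlen)
            for mac in group:
                assert len(mac) == hlen
                out += mac
        out.append(0x00)
        return bytes(out)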
SHA-256 is, of course, but SHA-384 isn't\nCurrently it says that: However, if using SVC, the hash will be calculated over all the frames of the different spatial layers within the same superframe/picture. However, the SFU will be able to drop frames within the same stream (either spatial or temporal) to match the target bitrate. In that case, the receiver will never get a full frame and so will never be able to check the signature, which would render it a kind of useless feature. An easy way of solving the issue would be to perform the signature only on the base layer or take into consideration the dependency graph and send multiple signatures in parallel (each for a branch of the dependency graph).\nThis is a good point and suggests 'frame' really refers to each independently decodable unit.\nI think I mixed two things on this issue and my original comment is not totally accurate. The signature will contain the HMAC of all the frames it covers, so if any of them is dropped, the receiver can still compute the signature of the received ones. However, if the signature is not sent on the base layer, then it can be dropped by the SFU. I think it still makes sense to send different signatures over different layers to avoid having to send data on the wire that will be dropped by the SFU."} {"_id":"q-en-smime-444895715fa5b3e6169fa4469f0d490f9d5f70604e9e10ee7d229f440db18d89","text":"This adds a security consideration for padding. This addresses issue\nWith the move from CBC to GCM as the encryption mode, there is now no padding for an S/MIME message at all. We should probably add a security consideration dealing with the fact that in some cases traffic analysis might be able to get at the content of an encrypted message based on its size."} {"_id":"q-en-sniencryption-914239e60c23c93d6c2bd889a54ce30985eab73f57e510afa0bbeea46131cada","text":"…fix the various typos flagged by DKG, submit as WG draft."} {"_id":"q-en-stateless-a5dc70a9e28eda22dc0cd1846836b00c446dcc6263f83ddd8bbcb529421ef650","text":"the automated key management reference was made in error. RFC4107 is about systems like IKEv2, TLS, etc. where a key has to be agreed upon between two (or more) nodes. The key needed for the encrypted state token never leaves the sender, so it can be a randomly generated key, provided that the nonce is never repeated. This text fixes the considerations for this section."} {"_id":"q-en-suit-firmware-encryption-68a4fb9ea0dd59ce34ddbc623537fa9b23f50464b2ceeaa732fc1dfdd44aeec1","text":"Thanks for your comment, Russ. Here is the issue: Only after the swap has been completed is the plaintext firmware image in the primary slot. Before the swap, the encrypted firmware image was in the secondary slot. It would be possible to do a \"dummy decrypt\" to compute the hash prior to doing the decrypt with swapping.\nYes, the AES-CTR and AES-CBC document in the COSE WG states that the signature provides the integrity protection, so this description needs to say how that happens.\nRuss, I have added text to address your comment."} {"_id":"q-en-suit-firmware-encryption-e7b5a3925cd52967bece723211b21c5504782654a620e158c151aeb17e6b7ae0","text":"Adding an example manifest using AES-KW in the case where the Author knows the recipients (the Author and the Devices have the same pre-shared key).\nThis is a good example. I looked through it and it made sense to me.\nIn the firmware encryption draft we have defined an extension to the envelope that indicates what encryption parameters are applied to which component.
Here is the CDDL: Currently the text provides no explanation or an example of how this linkage works.\nThis aspect is related to URL and to the mailing list discussion URL\nThis issue is also related to URL\nWith the change in PR , this issue is solved. Internet-Draft draft-ietf-suit-firmware-encryption-URL is now available. It is a work item of the Software Updates for Internet of Things (SUIT) WG of the IETF. Title: Encrypted Payloads in SUIT Manifests Authors: Hannes Tschofenig Russ Housley Brendan Moran David Brown Ken Takayama Name: draft-ietf-suit-firmware-encryption-URL Pages: 52 Dates: 2024-03-03 Abstract: This document specifies techniques for encrypting software, firmware, machine learning models, and personalization data by utilizing the IETF SUIT manifest. Key agreement is provided by ephemeral-static (ES) Diffie-Hellman (DH) and AES Key Wrap (AES-KW). ES-DH uses public key cryptography while AES-KW uses a pre-shared key. Encryption of the plaintext is accomplished with conventional symmetric key cryptography. The IETF datatracker status page for this Internet-Draft is: URL There is also an HTMLized version available at: URL A diff from the previous version is available at: https://author-URL Internet-Drafts are also available by rsync at: URL"} {"_id":"q-en-tls-exported-authenticator-80d776118aa174a7644b543bfe55fd4dfd303b9299f3eafd31e5fe460d2f2e4c","text":"In Section 3 of the RFC, it states that the authenticator request includes those extensions allowed for the CertificateRequest message. However, this does not mean it can only include them; other extensions could also be sent. In the TLS spec, CertificateRequest explicitly states \"Clients MUST ignore unrecognized extensions.\"; we should have the same here."} {"_id":"q-en-tls-flags-b2f5a6d596ac8f7d264c92881b549ba236d03b883580c46875e31a5cb16ec743","text":"This proposed text disallows specifying a flag extension where the response is a regular extension with content rather than a mere acknowledgment.\nWell, I think of this as being the request extension. I can live with this if it's the WG consensus, but it seems like it would be useful to be able to say \"I support largeish extension X\" (we already have some such extensions) and it would be kind of sad if that took 4 bytes.\nSeems strange to have to state this, but that's OK. Do you want to explain why? That is, because this results in a \"response\" extension appearing without a \"request\" extension, which is not permitted."} {"_id":"q-en-tls-subcerts-af3afd6542a20048a6c8153279d54c8b53bfd744e128c13948d7a4b31f69975d","text":"Addresses .\nAdd some additional text to highlight that the sig on the server cert and the DC can be different. from . Hi Sean, Thanks for the follow-up. I refreshed my memory as much as I could. Please see my responses in line. Yours, Daniel Sean Turner wrote: I think I would rather take the structure: 1 - Use case 2 - Problem description 3 - Solution I am fine with whatever fits you best. Server operators often deploy TLS termination services in locations such as remote data centers or Content Delivery Networks (CDNs) where it may be difficult to detect key compromises. Short-lived certificates may be used to limit the exposure of keys in these cases. However, short-lived certificates need to be renewed more frequently than long-lived certificates. If an external CA is unable to issue a certificate in time to replace a deployed certificate, the server would no longer be able to present a valid certificate to clients.
With short-lived certificates, there is a smaller window of time to renew a certificate and therefore a higher risk that an outage at a CA will negatively affect the uptime of the service. Typically, a TLS server uses a certificate provided by some entity other than the operator of the server (a \"Certification Authority\" or CA) [RFC8446 ] [RFC5280 ]. This organizational separation makes the TLS server operator dependent on the CA for some aspects of its operations, for example: Whenever the server operator wants to deploy a new certificate, it has to interact with the CA. The server operator can only use TLS signature schemes for which the CA will issue credentials. To reduce the dependency on external CAs, this document proposes a limited delegation mechanism that allows a TLS peer to issue its own credentials within the scope of a certificate issued by an external CA. These credentials only enable the recipient of the delegation to speak for names that the CA has authorized. Furthermore, this mechanism allows the server to use modern signature algorithms such as Ed25519 [RFC8032 ] even if their CA does not support them. We will refer to the certificate issued by the CA as a \"certificate\", or \"delegation certificate\", and the one issued by the operator as a \"delegated credential\" or \"DC\". I think my concern has been clarified by Ilari. It was resolved by your b). I am fine with both texts. works for me. The text does not address my concern and I will recap my perspective of the discussion. My initial comment was that the reference [KEYLESS] points to a solution that does not work with TLS 1.3 and currently the only effort compatible with TLS 1.3 is draft-mglt-lurk-tls13. I expect this reference to be mentioned and I do not see it in the proposed text. If the intention with [KEYLESS] is to mention TLS Handshake Proxying techniques, then I argued that other mechanisms be mentioned and among others draft-mglt-lurk-tls12 and draft-mglt-lurk-tls13 that address different versions of TLS. This was my second concern and I do not see any of these references being mentioned. The way I understand your response is that the paper pointed to by [KEYLESS] is not Cloudflare's commercial product but instead a generic TLS Handshake proxying and that you propose to add a reference to an academic paper that discusses the LURK extension for TLS 1.2. I appreciate that other solutions - and in this case LURK - are being considered. I am wondering however why draft-mglt-lurk-tls12 is being replaced by an academic reference [REF2] and why draft-mglt-lurk-tls13 has been omitted. It seems that we are trying to avoid any reference to the work that is happening at the IETF. Is there anything I am missing ? My suggestion for the text OLD: (e.g., a PKCS interface or a remote signing mechanism [KEYLESS]) NEW: (e.g., a PKCS interface or a remote signing mechanism such as [draft-mglt-lurk-tls13] or [draft-mglt-lurk-tls12] ([LURK-TLS12]) or [KEYLESS]) with [KEYLESS] Sullivan, N. and D. Stebila, \"An Analysis of TLS Handshake Proxying\", IEEE TrustCom/BigDataSE/ISPA 2015, 2015. [LURK-TLS12] Boureanu, I., Migault, D., Preda, S., Alamedine, H.A., Mishra, S. Fieau, F., and M. Mannan, \"LURK: Server-Controlled TLS Delegation\", IEEE TrustCom 2020, URL, 2020. [draft-mglt-lurk-tls12] IETF draft [draft-mglt-lurk-tls13] IETF draft It is unclear to me what is generic about the paper pointed to by [KEYLESS] or [REF1].
In any case, for clarification, the paper clearly refers to Cloudflare commercial products as opposed to a generic mechanism, and this will remain whatever label is used as a reference. Tellingly, Cloudflare, which co-authored the paper, appears 12 times in the 8-page paper. The contribution section mentions: \"\"\" Our work focuses specifically on the TLS handshake proxying system implemented by CloudFlare in their Keyless SSL product [6], [7] \"\"\" The methodology says: \"\"\" We tested TLS key proxying using CloudFlare's implementation, which was implemented with the following three parts: \"\"\" \"\"\" The different scenarios were set up through CloudFlare's control panel. In the direct handshake scenario, the site is set up with no reverse proxy. In the scenario where the key is held by the edge server, the same certificate that was used on the origin is uploaded to CloudFlare and the reverse proxy is enabled for the site. \"\"\" Cloudflare appears in at least these references: URL CloudFlare Inc., \"CloudFlare Keyless SSL,\" Sep. 2014, URL N. Sullivan, \"Keyless SSL: The nitty gritty technical details,\" Sep. 2014, URL works for me. works for me seems clearer. I suspect earlier clarifications addressed this. thanks, that is way clearer. Daniel Migault Ericsson"} {"_id":"q-en-tls-subcerts-751d08c61aa4871f9a7de48d9e41348fb02eec4b35deb95a5ff81b4ccfb6d360","text":"Need to address and directorate reviews.\nCorrection: ARTART editorial comments are addressed via and .\nand .\nAlso noted in ARTART review. Reviewer: Christian Amsüss Review result: Ready with Nits Thanks for this well-written document ART topics: The document does not touch on any of the typical ART review issues; times are relative in well understood units, and versioning, formal language (ASN.1, which is outside of my experience to check) and encoding infrastructure (struct) follows TLS practices. General comments: The introduction of this mechanism gives the impression of a band-aid applied to a PKI ecosystem that has accumulated many limitations as outlined in section The present solution appears good, but if there is ongoing work on the underlying issues (even experimentally), I'd appreciate a careful reference to it. Section 7.6 hints at the front end querying the back-end for creation of new DCs -- other than that, DC distribution (neither push- nor pull-based) is discussed. If there are any mechanisms brewing, I'd appreciate a reference as well. Please check: The IANA considerations list \"delegated_credential\" for CH, CR and CT messages. I did not find a reference in the text for CT, only for CH and CR. Editorial comments: (p5) \"result for the peer..\" -- extraneous period. (p9, p15, p16) The \"7 days\" are introduced as the default for a profilable parameter, but later used without further comment.\nReviewer: Elwyn Davies Review result: Ready with Nits I am the assigned Gen-ART reviewer for this draft. The General Area Review Team (Gen-ART) reviews all IETF documents being processed by the IESG for the IETF Chair. Please treat these comments just like any other last call comments. For more information, please see the FAQ at . Document: draft-ietf-tls-subcerts-?? Reviewer: Elwyn Davies Review Date: 2022-04-08 IETF LC End Date: 2022-04-08 IESG Telechat date: Not scheduled for a telechat Summary: Ready with nits. Just a few editorial-level nits. Major issues: None Minor issues: None. Nits/editorial comments: Abstract: The exact form of the abbreviation (D)TLS is not in the set of well-known abbreviations.
I assume it is supposed to mean DTLS or TLS - This ought to be expanded on first use. Abstract: s/mechanism to to/mechanism to/ s1, para 2: CA is used before its expansion in para 3. s1, next to last para: \"this document proposes\" Hopefully when it becomes an RFC it will do more than propose. Suggest \"this document introduces\". s1, next to last para: \"to speak for names\" sounds a bit anthropomorphic to me, but I can't think of a simple alternative word. s1, last para: s/We will refer/This document refers/ [Not an academic paper!] s3.1, 2nd bullet: s/provide are not necessary/provide is not necessary/ s4, definition of expectedcertverify_algorithm: \" Only signature algorithms allowed for use in CertificateVerify message are allowed.\" Does this need a reference to the place where the list of such algorithms is recorded? s4.1.1 and s4.1.2: In s4.1.1: \"the client SHOULD ignore delegated credentials sent as extensions to any other certificate.\" I would have thought this ought to be a MUST. There is an equivalent in s4.1.2. I am not sure what the client/server might do if it doesn't ignore the DC. s4.1.3, para 1: s/same way that is done/same way that it is done/ s4.2, para 1: s/We define/This document defines/ s5/s5.1: RFC conventions prefer not to have sections empty of text: Add something like: \"The following operational consideration should be taken into consideration when using Delegated Certificates:\""} {"_id":"q-en-tls13-spec-ad42bcb6cbfe59958e0b09f5856b1e7e8a7e10730ae8b56d5d17ba62fe9eea24","text":"The 0-RTT key might differ between TLS versions (as demonstrated with the draft -20 changes). Be explicit about storing this version number since section 4.2.9 requires this information too."} {"_id":"q-en-tls13-spec-f8cb8c7145d2274dd6f0bf7518d016f5f84b51c60645349ba1256e91fedf56a1","text":"It has a proper definition in appendix A.4. Note though that RFC 4507 section 7 has allocated this value for the session_ticket/NewSessionTicket handshake type. This point may be moot if RFC 4507 has been integrated with TLS 1.3, in which case RFC 4507 should be added to the list of obsoleted RFCs.\nThe handshake packets in TLS 1.2 and earlier are numbered so that the HandshakeType for each of client and server is strictly increasing when packets arrive in correct order, and HandshakeType/10 indicates the flight number. This allows for using a simpler state machine. The new handshake packets {server,client}keyshare break this convention by using numbers in the flight 2 range (17 & 18), while belonging in flight 1. This suggestion is to renumber them to e.g. clientkeyshare(5) and serverkeyshare(6)"} {"_id":"q-en-tls13-spec-0feb1dcbb5e7a4b2fdaf19836a86d210717ff0408404aa442737959fb604a610","text":"\"A client MUST provide a 'pskkeyexchangemodes' extension if it offers a 'preshared_key' extension.\""} {"_id":"q-en-tls13-spec-4f312a1bf92d88293c9b8119dc6fdb1ed2f006fbd29c58ecc5a46655be3c3a1e","text":"It appears we should have included rules for populating a registry for the pskkeyexchange_modes extension.\nYes, an obvious oversight in our oversite. :) I approve.\n+1\nFWIW, +1\n+1 to a new registry.\nYes, we should create a registry.
But: we need to update the \"this document defines a new registry\" text to match the new number of registries (two), and the space of allowed values is only one byte, so we don't need to talk about \"first byte\""} {"_id":"q-en-tls13-spec-f0a2eef6f6a3ba7192d80421ec09fe991ff4f3289dc0997fa6b4ebb45dff721d","text":"The I-D checklist requires that updates and obsoletes be included in the abstract; thought RFC. I personally feel this is [redacted] stupid, but I do not think that we should risk a process appeal to fight this battle."} {"_id":"q-en-tls13-spec-223efddd27ef373df431d830c666b85c9c250a59f5da335bfa8a98d4aa1d0223","text":"While we're renaming these anyway, exportersecret and resumptionsecret are just as descriptive.\nexportermainsecret and resumptionmainsecret strike me as not needing an adjective in the middle at all. Perhaps exportersecret and resumptionsecret? For the exporter secret, dropping \"main\" avoids the acronym collision with the EMS extension, and there are no other secrets associated with exporters, so there's no use in designating this as the main one. For the resumption secret, there is a potential confusion between resumption[main]secret and the derived PSK associated with the ticket (section {#NSTMessage}), but we don't seem to give the latter a name in the first place so I think that's fine. This also gives us nice short KDF labels when we're able to change those: \"exporter\" and \"resumption\". (Or perhaps even \"export\" and \"resume\" if those don't fit in a hash block.)"} {"_id":"q-en-tls13-spec-95a19637583515c674662c0d499e929e20b345c450321c771dc62d62de2d8537","text":"The reason for the SHOULD is that the current text says: do a key update as described in {{key-update}} prior to reaching these limits. Note that it is not possible to perform a KeyUpdate for early data and therefore implementations SHOULD not exceed the limits when sending early data. We could of course change both.\nDTLS 1.3 introduces integrity limits for the AEAD algorithms, which is good. TLS 1.3 does not have any AEAD integrity limits, probably because of the assumption that only a single AEAD forgery can be attempted. My understanding is that this is not true for 0-RTT data, which works more like DTLS and where an attacker can attempt any number of forgery attempts on a single AEAD key. RFC 8446 has relatively strong anti-replay protection: \"The server MUST ensure that any instance of it (be it a machine, a thread, or any other entity within the relevant serving infrastructure) would accept 0-RTT for the same 0-RTT handshake at most once.\" but this does not stop an attacker from attempting an unlimited number of forgeries. I think consideration / mitigation of this is missing from RFC8446. Some alternative suggestions: Should the 0-RTT requirement above be strengthened to: \"The server MUST ensure that any instance of it (be it a machine, a thread, or any other entity within the relevant serving infrastructure) would process a 0-RTT handshake with the same PSK, URL pair at most once.\" should RFC8446bis continue to ignore AEAD limits for 0-RTT? 0-RTT data already has lower security (weaker replay protection)? In this case I think some considerations should be added describing that the integrity protection of 0-RTT data is weaker than for normal application data. should RFC8446bis introduce DTLS 1.3 type integrity limits for clientearlytraffic_secret? This aligns the integrity limits of 0-RTT data in TLS with the normal application data in DTLS.\nCould you say a bit more about how this would work in practice?
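As a sketch of how a receiver-side mitigation along these lines could work in practice: count failed AEAD deprotections under the early-data key and tear down once a DTLS-1.3-style integrity limit is reached (the 2^36 figure is the RFC 9147 AES-GCM integrity limit; whether to apply it to TLS 0-RTT at all is exactly the open question here):

    class EarlyDataForgeryGuard:
        AEAD_INTEGRITY_LIMIT = 2 ** 36  # RFC 9147 value for AES-GCM

        def __init__(self) -> None:
            self.failed_decryptions = 0

        def on_decryption_failure(self) -> None:
            # Called for each 0-RTT record that fails AEAD deprotection.
            self.failed_decryptions += 1
            if self.failed_decryptions >= self.AEAD_INTEGRITY_LIMIT:
                raise ConnectionError("0-RTT AEAD integrity limit exceeded")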
I did not consult the spec just now, but I had the impression that 0-RTT still had only a single attempt at forgery (at least, per replay cache instance/backend server).\nAh, the linked DTLS issue mentions long-lived external PSKs specifically. I agree that our story for those is less well specified and probably not great. Off the top of my head, I would suggest \"don't use 0-RTT with long-lived PSKs\".\nI think that the original is fine. I don't think that there is any meaningful distinction between 0-RTT handshake and the (PSK, URL) tuple, at least as far as the derived keys go. Change the PSK, PSK identity, or random and you get a new key derivation. I had expected that early data was subject to the same constraints as later epochs. The only caveat being that it is impossible to update early data keys using KeyUpdate. The answer is to finish the handshake of course, so a hard error is fine. We might advise servers to cap maxearlydata_size to help avoid any issues of overrun, though with certain perverse padding arrangements, that might not provide any real guarantee. As above, integrity limits should also apply.\nURL fixes (1) The case for (3) is actually more subtle, as the problem is not clientearlytraffic_secret but rather the trial decryption skipping phase. We might need to apply DTLS 1.3-like limits there."} {"_id":"q-en-tls13-spec-db48cce3a9d880a60914093875d798c306f2a5d810bc7d3e47190a89cdda99a5","text":"MGF1 got unhelpfully spelled out as \"mask generation function 1\" in RFC 8446. I'm guessing this is the result of an acronym expansion editing pass, but the function is simply called \"MGF1\". There is no such thing as \"mask generation function 1\". Correct this back to MGF1. In hopes this doesn't get hit by the same editing pass later, I've tweaked the wording and moved the citation so that \"mask generation function\" is still uttered before MGF1, and it's clearer that both primitives are defined in RFC 8017."} {"_id":"q-en-tls13-spec-f9157afb6e7c93789082a8cba0743e50ee96a3e3308333a51ea251e12cf0e504","text":"Just a few things I noticed that I think make it a little bit easier to follow: name resumption_secret consistently when used; show the full legend for each handshake flow; simplify/clarify the secret sources table a bit and make text cases consistent. The resumption secret naming was also apparently noticed by NAME in issue . (Edited to note: this PR has been pruned down to a smaller list of changes; see commit message)\nOk, to respond to comments I've pushed a new commit. RE: capitalization: Upper or lower is fine, as long as it's consistent. I've switched it to all upper (title case) instead of all lower. RE: legend repetition: If you're just reading one section, e.g. you got to it via the TOC, you might not have seen the legend (recently). I've made it less wordy now and also added a line to indicate \"+\" is for extensions, as I realized that wasn't necessarily clear. (and added a \"+\" where the notation was inconsistent) We do have an extension that was previously a message, now. The basic point is consistency; make each handshake flow make sense on its own. I think repeating the legend is helpful, but if you really don't want to repeat it, I can cut this down to just the little stuff and table. Showing \"N/A\" instead of repeating things in the table is the most helpful part here, I think.
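For reference on how the short labels bikeshedded a few comments up ("exporter", "resumption") actually enter the key schedule, here is a minimal HKDF-Expand-Label sketch (RFC 8446, Section 7.1; SHA-256 only, and restricted to single-block outputs for brevity):

    import hashlib
    import hmac

    def hkdf_expand_label(secret: bytes, label: str, context: bytes,
                          length: int) -> bytes:
        # HkdfLabel = uint16 length | opaque "tls13 " + label | opaque context
        full = ("tls13 " + label).encode()
        info = (length.to_bytes(2, "big") + bytes([len(full)]) + full
                + bytes([len(context)]) + context)
        assert length <= 32  # one HKDF-Expand block with SHA-256
        return hmac.new(secret, info + b"\x01", hashlib.sha256).digest()[:length]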
It states that SS=ES in some circumstances elsewhere, but the table isn't clear that these are the exact same values, especially because the extracted ones are not.\nEdited again and repushed for some more naming consistency.\nRevised & rebased. Of note, I think these changes to the table make it far easier to see what's going on here.\nNAME If you're going to be rewriting this, I'll just set those parts aside. The commit here is now just the little stuff: fix the spacing, add a line stating what SS/ES are in the list of derivations, and fix \"shared_secret\" (there's no such value named that; just drop the underscore).\nNAME commit updated with fixes for your comments above\nThe key derivation process uses the \"master secret\" as the basis for application traffic keys and both the resumption and exporter secrets. There are lots of other secrets though. I don't have a better name right now, so close this if none better can be found. Also s/resumption master secret/resumption secret/ throughout, it's a little inconsistent.\nNow master of more"} {"_id":"q-en-tls13-spec-f59833485b57c11f408e034826dc7644741d95081f93a94aca5a295e14f30a6b","text":"The current wording at the end of the section on resumption and PSK should be changed to indicate that if a server declines resumption then a full handshake, including the server Certificate and CertificateVerify messages, should take place. In its current state, the comment on omission of these messages is not disjoint from the statement regarding the server's rejection of resumption."} {"_id":"q-en-tls13-spec-f5fb7e5ce24d1198f54331a2fe37fa29b9ff08ca1df4e5c30019f5770290185f","text":"LGTM\nHubert Kario writes: And how does the client know that the algorithms came from the server? We should have a \"client MUST wait for the full handshake to finish before recording this information\" or we will have a very nice cipher downgrade. Just having it signed is likely not a good idea, as they may depend on ciphersuites advertised by the client.\nSince the ServerConfiguration and the suites are covered by the signature the server provides, isn't it safe to use the configuration as soon as you have verified the server identity?\nn.b., I have no problem with mandating that clients wait for finished, because it is immediately after the signature, but I want the properties to be clear."} {"_id":"q-en-tls13-spec-02dfe179a4b9929ef3b92ab6ca7bfe1c534cc8229ae00f2933bc51f4096cc74a","text":"NAME did a massive message->alert change a while back and I did yet more when we were rebasing his patch. Looks like there's more up here in Closure Alerts. We're referring to alerts, not the messages that carry them. Switch to \"alerts\" instead to remain consistent with the rest of the document, notably the following section."} {"_id":"q-en-tls13-spec-0d1ecfd253663e2b7c32cc4d9351735475d67e8b167e3441c175afdb867ce63a","text":"Cleaned up references to knownconfiguration extension (merged into earlydata in draft-08)"} {"_id":"q-en-tls13-spec-ad2c22035251401aafa73efa89531db44a8ca2a1347629b85b5bac2dc73063c6","text":"Minor editorial tweaks. I just noticed the extra line that gets left in there when shown up top (two lines between first two shown blocks, instead of one). This fixes that.
Also drops the periods after the comments, as they're not sentences."} {"_id":"q-en-tls13-spec-9925d690b4ea6ead3bb041a6db8a7dbc081093beecbf815e9fad3fe759bcd5c4","text":"The draft was in a sort of halfway state between whether TLSCiphertext has a fragment that is a selection or just embedding the AEAD elements directly. This should normalize it to the embedded view without the extra GenericAEADCipher struct definition. (I don't think this changes the logic of the draft at all)"} {"_id":"q-en-tls13-spec-3a830cb07d7819cdd967eb6311d6298ae1da33a4a39b46c139f2a9beb34f0508","text":"This change clarifies that the signature over ServerDHParams data is a signature over the hash of the client and server hello random data and the ServerDHParams data. I believe this small regression was introduced in TLS1.2 with the result that the TLS1.2 RFC (and current TLS1.3 draft) does not contain sufficient detail to implement TLS + DHE. All TLS1.2 clients and servers that I have tested do include the hello random data (and omitting it would leave implementations open to replay attacks).\nThanks for the PR. Note that the formal PDU definitions here appear to be correct, so there should be enough information to implement things correctly: I agree it wouldn't hurt to have the explanatory text be clearer. However, in TLS 1.2 we stopped explicitly saying that things were hashed before signing (because we are treating signing as a primitive), so we probably don't want to say \"hash\". Maybe just say a signature over the random values and the params?\nAh, this explains my confusion. After reading section 7.4.1.4.1, I formed the opposite impression about signing; that hashing and signing are explicitly different primitive operations and may be composed in a variety of ways. But I see how \"digitally-signed\" seems to be a reserved term that implies both operations.\nI'm not trying to discourage you here. If you were confused, others will probably be as well, so if you have some text that you think would clear this up, it would be welcome.\nThanks! I've updated the PR with an even simpler diff that I think would have made me less confused when I got to it."} {"_id":"q-en-tls13-spec-ab60678e9dfba6b293e78551c17c4190e87e03fd3cd5880bded7497cc96d3f3e","text":"Lack of a blank line after \"draft-14\" but before the bullet point causes it to render all on one line: URL This is just a quick patch to make the spacing consistent because apparently Markdown (or, at least, whatever is interpreting it there) can be picky. ;) This is just a whitespace fix."} {"_id":"q-en-tls13-spec-a2240ec46c85735d8cd15e17a8979efeafaa72d68c0533fca7b5bd04c6354985","text":"NAME NAME PTAL\nLGTM\nlgtm\nDon't forget no_certificate.\nNot relevant: no_certificate_RESERVED(41), /* fatal */ , Martin Thomson EMAIL wrote:\nRight. I just noticed that we send an empty certificate instead."} {"_id":"q-en-tls13-spec-f79a09e09ad644ff353717f930a2734942cc1f35a914e9daf1a347d00445c0d4","text":"As all error alerts are now considered fatal (PR ), we no longer need \"This alert is always fatal\" explicitly stated everywhere in that section."} {"_id":"q-en-tls13-spec-bb0b17608a0013fe1bdfae21a13ad08210946c78a22c2180819a5717961776ad","text":"No context and empty context are now declared to be the same. Add some ALL CAPS WORDS for future specifications to deprecate no context. I used SHOULD to forbid no context because, if some protocol currently uses no context, switching it to empty context is a breaking change and probably not worth it.
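To see the two cases side by side: in Node's keying-material export API (used here purely as an illustration; nothing below comes from the draft itself), "no context" and "empty context" are distinct calls today. A minimal TypeScript sketch, with the socket setup omitted:

```typescript
import * as tls from "node:tls";

// An already-established TLS socket; how it was created is omitted here.
declare const socket: tls.TLSSocket;

// RFC 5705 "no context": the context argument is simply not supplied.
const noContext = socket.exportKeyingMaterial(32, "EXPERIMENTAL my-label");

// Empty context: an explicit zero-length buffer. Under TLS 1.2 exporters
// these two calls produce different output; the change described above
// makes them produce the same output in TLS 1.3.
const emptyContext = socket.exportKeyingMaterial(32, "EXPERIMENTAL my-label", Buffer.alloc(0));
```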
We just want new ones to always provide a context. This affects the rules for using exporters, so RFC 5705 is added to the list of updated documents. Closes issue ."} {"_id":"q-en-tls13-spec-48607ee16cca07331ad3213a4b70c7c0ccb4908311571338b856b2f53335d731","text":"1) Switches uses of ALPN label/value to \"ALPN protocol\" (to clarify this is referring to a single protocol, not the entire extension) 2) Remove plural from external PSK case (if there's no way for the client to select which protocol the 0-RTT data uses, it doesn't make sense for there to be multiple options) 3) Switch reference from RFC7443 to RFC7301 (I assume this is what was intended) I think this makes the intended behavior of the spec a bit more clear."} {"_id":"q-en-tls13-spec-4ebd38559cbf47c0f0c77b8ee44c9d874a9a85f41b834a6afe28e142ecfd4982","text":"It seems section 4.1.3 Server Hello has not been updated to reflect the new version negotiation mechanism. It says \"servers MUST [...] negotiate the highest mutually supported version\".\nThanks, I caught a few of these on my reread yesterday too, but I probably missed some too, so keep them coming!"} {"_id":"q-en-tls13-spec-cd9967d9c2fa0564ee26948f668f619d06c37890f75968d87dec69ffba04fa7a","text":"Just noticed that the MTI suites section didn't link to the suites, so here's a minor PR with some reference fiddling for this section: Add a link in MTI suites to the suites section, now that they're defined in this document. Nitpick ordering of extensions in MTI extensions to have them in the same order as sections (so numbers in links are consistent). Change the plaintext reference to appendices in Security Considerations to actual references."} {"_id":"q-en-tls13-spec-9b5d93c634c595bd554bc8c7b63c6dd777a62175ca5c3a59db8c9705689cc99d","text":"Adding \"itself\" makes the sentence much easier to understand. Actually, two Japanese implementers, who are working on TLS 1.3 independently, took a long time to understand this."} {"_id":"q-en-tls13-spec-fe9f29eb2c085fd634bea607be196fc3b0c387ea56aee87026a0479acdec9312","text":"Ticket is a term that refers to a PSK identity established using a handshake. For out-of-band PSK, we do not use the term."} {"_id":"q-en-tls13-spec-222548c2d3397d936f57abfea11ec2078104f8dc2f5a914d8690fdfc7b607185","text":"draft_eddsa is now . sandj-tls-iana-registry-updates is now ietf-tls-iana-registry-updates."} {"_id":"q-en-tls13-spec-ac45606a08ed48aa4c2cb709ea8cd06d0183b0640b363ab3e9e133c2bd66e243","text":"Need to explicitly say that it's the ExtensionType registry from RFC 4366.\nI literally was making the same change in my editor when the notification came in! (Should it be the \"TLS Extensions ExtensionType registry\"?)"} {"_id":"q-en-tls13-spec-ee73af626d4cec286f6294b7f35b786ab175a61afee38df50e54f404378cca7b","text":"When we banned client auth and PSK, we only meant to do it for the main handshake, not the post-handshake phase. This reverts that change, as well as clarifies the prohibition on PSK plus cert-based auth."} {"_id":"q-en-trickle-ca5ccf38a10bac7f822d24248bcd3a8bd86a8bfe83d908deeca47a6e4ca85235","text":"Changed \"media stream\" to \"media stream and component\" in one place, and made some grammatical changes to the definition of an ICE generation."} {"_id":"q-en-using-github-778bdc61a0b07d47cf2f655e7f705840c6cfe274bc66e4e279b67b6ba9d7d726","text":"A good start. Would you like to open another PR adding yourself as an author?\nLet me know if what I created (experiences + author) needs to be something different. This is a very scary world for me. 
Ultra light description is next. Barbara"} {"_id":"q-en-using-github-71e1019235713fb9b45ac9228ae27287d076e1f32404c878513fab3a2ebfb08f","text":"Two changes, two issues, but really this is the one idea. The previous draft said that features not used could be disabled. This is bad advice because that might be denying editors access to tools that would make their task easier. The new advice says that editors can use features that the working group doesn't rely on. It also says that chairs should consult editors to ensure that policies aren't adversely affecting them.\nMost of the operating modes assume that editors are given considerable latitude in how repositories are managed. This is good up until the point that the Working Group starts to formally rely on things that are in the repository. The document sort of prevaricates about this point, including strong admonitions that suggest rules be formalized for every aspect of usage. Instead, I think that it would be better if the document said - up-front - that editors are to be given control unless the Working Group starts to formally depend on a particular function (like issue opening/closing, labels, pull requests, whatever...). At that point, it is the chair's responsibility to identify what the WG depends on and to clearly articulate the policies around that usage.\nSection 3.1 talks about turning off GitHub features not being used by the WG, but it doesn't say how to do that in the UI. I ask this because I cannot see a way to prevent issues or PRs just by wandering through the GUI.\nThis isn't something that I'd want to publish. The way that features are configured is too tightly coupled to UX decisions that GitHub might remake at any point. In fact, I might prefer not to include the offending text at all. Disabling features can be a useful way to slim down the options presented to users (which can help new users), but it can also be fairly hostile to new contributors. For instance, if the working group disables issues because they aren't using them, it sounds reasonable, but new contributors might then be dissuaded from opening a genuine issue. Working groups might be better served by informally letting contributors know how to best provide their input.\nI'm happy with either the current text plus \"At the time this is written, to turn off y\" or removing the suggestion."} {"_id":"q-en-using-github-dce8a2ed7c37e92d6329ea35055f4d79b62e9fb72807ea63fc3c103e04f5db10","text":"This is the way we've been treating this, so let's acknowledge that.\nAs I noted on the list, I don't think this is correct. I proposed alternate text.\nStill LGTM. :-)"} {"_id":"q-en-vulnerability-scenario-90f24c263e047ae19e4354745fc8b528f630fb3fa4072c04a688c753abafbdd9","text":"We should review the appendices and include the appropriate references to them through the main body of the document."} {"_id":"q-en-webpush-protocol-427f51d785e8147289d3d0a855e76cf079ba29e4982860ca42e81afd7207c60c","text":"I've specified a Topic header field that we can use as a \"collapse key\".\nRegarding max topics, I don't plan on limiting the max topics. I'm using a sharded db with a compound hash+range key for indexing, and plan on storing the topic in the range key for direct access to delete it for later messages with a topic field. There is the side-effect that the UA will get messages out-of-order as a result, i.e. 
all messages with no topic will be delivered in order, then messages with a topic will be delivered in the order of their topic names.\nWe currently impose a limit of four topics (née collapse keys) +NAME I do like the change. GCM has a separate feature called Topics, i.e. subscriptions to a category, which may cause some confusion. Thinking about the name without taking this into account, I still think it makes sense in this context. URL\nNAME yup, for the common use case of sending \"You have N messages in your mailbox\", it makes sense to just tell a developer to add the topic \"Inbox\", and the user will always get the last one sent in. I am curious about how systems having topic limits plan on communicating this to the app-server. Is there some minimum the spec should mandate just so that an app developer isn't surprised later when their app needing 5 topics doesn't work right on a browser that has topic limits?\nWould 429 work for systems that run out of topics (or collapse keys)? That seems right to me.\n429 seems too specific to be used for this scenario: The 429 status code indicates that the user has sent too many requests in a given amount of time (\"rate limiting\"). Could be a 500?\nIs there a length limitation on the Topic? This could also be implementation-specific and signaled by: 431 Request Header Fields Too Large (RFC6585)\nI think that NAME and I concluded that we could accept any size of topic. Up to the limits on header field sizes, that is, which would trigger 431. Maybe, but it is the best match I have. And I think that 4xx is better than 5xx.\nIf you continue with 429, how does an application server discern between 429 Rate Limiting and 429 Too Many Topics? By the presence of Retry-After?\nRetry-After might be present in all cases. However, you raise a point that the definition of 429 leaves open: Is this too many requests globally? By the same client? To the same resource? To the same resource by the same client? Basically, the \"scope\" of applicability for 429 is hard - if not impossible - to determine. The scope is defined in some server-specific terms. The question I'd ask is whether we want to provide some webpush-specific mechanism for identifying the scope of applicability for the rejection.\nWhat about using a Scope header field in the response? In this case it will be 'Topic', maybe for too many messages for a subscription it could be 'Subscription', etc.\nAll these things are possible. The question is how important it is. Also, if we wanted to address the scope problem, I would want to address it more generally and then maybe add any push-specific extensions as necessary. This is a problem with 429 that - if it is worth fixing here - is probably worth fixing generally.\nMaybe this was answered elsewhere, but why not just reuse the push message URI? To replace a previous message, the application would create a new message and include the previous message URI in the creation request, inside a \"Replace\" header. Otherwise, it would work in the same way as this proposal. This would be easy to handle on the Push Server side. On the application server side, this would mean storing the previous message URI. But the application server needs to store the topic of the previous message anyway.\nNAME the idea here was that remembering the push message URI was a burden we didn't want to impose on applications. Those that aren't using acknowledgments will be able to use this version without holding any special state.
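To ground the replacement semantics being discussed, a push request carrying a topic might look roughly like this on the wire (the resource path, token, and body are invented for illustration); a later request with the same Topic value would replace this message if it is still pending:

```
POST /push/JzLQ3raZJfFBR0aqvOMsLrt54w4rJUsV HTTP/1.1
Host: push.example.net
Topic: inbox
Content-Length: 15

{ "unread": 7 }
```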
Thanks for the grammatical review, I'll fix those up.\nNAME an application wanting to be able to update a message will need to keep the topic of this message, so it will need to keep this state. I don't think keeping a topic or the push message URI is a large difference. However, it's true that simple applications could use only 1 topic, which would be predefined (or a few predefined topics).\nNAME an application can hard-code a topic or small set of topics. They can also derive a topic from context. A URL needs to be first learned, then memorized.\nNAME do you have any opinions on this change?\nUpdated based on feedback from NAME Note that I haven't changed the name. Not sure that we have consensus on that yet.\nConversation starts: How does the PS know that the AS can accept more receipts, and how does it know a receipt has been accepted by the AS?\nTwo-Pipe model a la mode de TIP? URL\nNAME started on HTTPbis\nCostin: we discussed my concerns on flow control for push promises, and I think it's reasonable to have them addressed either as part of http2 or as a separate document. I'm closing this issue for webpush since it's no longer actionable - NAME - re-open if you want to track for \"futures\". I'll tag appropriately. +1 - we discussed my concerns on flow control for push promises, and I think it's reasonable to have them addressed either as part of http2 or as a separate document. Other than that I think it's a reasonable and stable base - extensions and features can be added on top of it. Costin Richard Maher wrote: >\n+1 But I'm not sure we can support the mode that Martin described, so I think we may take advantage of \"PS MAY return a new receipt subscription if it is unable to reuse\". Buffering and flow control are complicated - the choice is between dropping receipts or sending them to a different DC. I haven't checked, but I think there is no guarantee on saving receipts if the AS doesn't have enough ingress capacity. One question for HTTP/2 experts: I assumed the receipt promises are subject to the normal 'max concurrent streams' (and this will be the main mechanism to do flow control/balancing from PS to AS). However I see in RFC 7540: ... \"Note: The client never sends a frame with the END_STREAM flag for a server push.\" How does the PS know that the AS can accept more receipts, and how does it know a receipt has been accepted by the AS? Costin Brian Raymor wrote: >"} {"_id":"q-en-webpush-protocol-57588e180ca26566d36e18a3f3df3b80755cf05e90e8e5d5f189774058fdabcf","text":"This is a work in progress, but ready for an initial review. I will add more commits to the branch for dealing with IANA registration of the header field. I also need to do some massaging of Section 7.2 to account for changes there. I'll have a follow-up pull request that deals with learning the server TTL. I think that the simplest thing to do would be to have the server echo the header field, using a lower value if it doesn't support it. Application servers can discover the maximum by sending a large value, like . The alternative is a declarative push service policy, but I want to encourage an adaptive policy with respect to TTL.\nAn alternative to echoing the TTL is just using the header field, as is currently described. I think that might actually be sufficient. That still leaves user agents unable to determine whether they need a full resync though. We need something for .\nLooking at the semantics of and in relation to and , I think that it would be unwise to use them here.
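Sketching the echo idea with invented values: the application server probes with a large TTL, and the push service replies with the retention period it is actually willing to provide:

```
POST /push/JzLQ3raZJfFBR0aqvOMsLrt54w4rJUsV HTTP/1.1
Host: push.example.net
TTL: 2147483647

HTTP/1.1 201 Created
TTL: 86400
Location: https://push.example.net/message/qDh2a
```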
Current plan is to echo .\nIt was less clear that this was needed than was, but the idea is that it would be good for a user agent (or maybe an application server too) to be able to learn the maximum TTL that a push service was willing to maintain.\nDiscussed today: the conclusion was to have subscriptions expire if the server is unwilling to maintain them over a long period of inactivity.\nIt should be fine if we let subscriptions expire if the server is unwilling to maintain them over a long period of inactivity.\nThere is a related failure case in the current draft related to acknowledgements: The push server MUST push a response with a status code of 5XX (TBD) if the user agent fails to acknowledge the receipt of the push message or the push server fails to deliver the message prior to its expiration.\nDiscussed. Discovery of maximum TTL is not required at this time.\nThis is just for the request from the application server to the push service."} {"_id":"q-en-webpush-protocol-c778fad8586884b2cb602db41e115822e4ba41329f31bc64e83f7435f47bed8f","text":"Pretty good. I think that we should at least try to put some MUST-strength language in regarding confidentiality, etc... That makes writing the rest of the security considerations that much easier (you don't have to worry that someone might have ignored your advice and account for it).\nLooks good. Further refinements can be done later :)\nThere is an adequate explanation of the related W3C Push API in the Introduction. Additional references about specific API behavior in Sections 9.3 and 9.4 can be removed.\nBesides removing extraneous API references, informative references should be added for the separate drafts for optional content encoding and authentication.\nMoving to -05. Waiting for a decision on the VAPID call for adoption."} {"_id":"q-en-webpush-vapid-fac156bc960b05507a1c207f1f0da8cb4c422283a281a124a637be3e88b6996e","text":"Found some minor stuff in the draft, and had two questions (below). If I should send those to the list instead, let me know :) Does an application server have to have a unique keypair? E.g. let's say I have multiple application servers sitting behind a load balancer, could I just share the keys used for VAPID between them? In 5.2: Why is a MAY? If I, as a UA, create a restricted subscription my assumption would be that that is actually checked, right? If the Push Service is not planning to check it anyway, wouldn't it make sense to tell the UA directly by responding w/ an error at restricted subscription creation time?\nThe concept of an application server does not define the quantity— it's fine for multiple servers to share a single key. Martin added text allowing additional claims () which further clarifies that it's fine to include more specific information, for example which server created the message, which is useful if the push service has means for dealing with this. (E.g. QoS or analytics.) I don't recall why that's a MAY. We certainly shouldn't forward unauthorized messages for restricted subscriptions to the user. /cc NAME The changes themselves look good, thanks! :)\nThanks for the explanation NAME Should I open an issue regarding the ?"} {"_id":"q-en-webrtc-http-ingest-protocol-c0945eedc2b8539e9c9cabeede23184a19c1c8eb4b2e7da43035ebd0601ae761","text":"I have changed the by as it is the terminology that RFC 8443 uses. Also quoted the SDP attribute values. 
NAME wdyt?\nadded tentative text\nThe text says: \"Unlike [RFC5763] a WHIP client MAY use a setup attribute value of setup:active in the SDP offer, in which case the WHIP endpoint MUST use a setup attribute value of setup:passive in the SDP answer.\" First, check whether it would be useful to also reference RFC 8842. Second, I think it is a BAD idea to go against the \"MUST use setup:actpass\" text in RFC 5763. Since you specify that the answerer MUST use setup:passive, there is no reason why the offer couldn't use setup:actpass, following the standard.\nThe idea is to prevent the WHIP client from having to implement both active and passive setup. The MUST for the server is only in case the client uses the setup attribute; maybe some rewording helps.\nYou can still use the MUST for the server, even if the client uses 'actpass'.\nYes, we could, but would it be worthwhile? If the client sends , the server MUST send . If the client sends , then the client MUST support receiving either an or response from the server, so I don't think we should put any extra requirement on the server. If the media server wants to only implement they are allowed in any case.\nThe document specifies the behavior of the media server, so even if the standard allows both 'active' and 'passive' you can still specify that a WHIP-compliant server MUST send 'passive'. It is better than having the client do something which is not standards-compliant.\nok, I get your point. The idea would be then to ask the client to send on the offer, despite only supporting the part, and force the media server to reply , right? Not sure if I like it or not, will ping the mailing list for consensus.\nYes. Specify that a WHIP-compliant media server MUST reply 'passive'.\nIf a MUST has to be added on servers, I think it should be 'active', not passive. It's an unwritten convention in basically all WebRTC applications that whoever receives the offer will send 'active', and while a convention is not a rule, forcing a change in this pattern would break a lot of implementations out there that may want to support WHIP on the server side.
For the server side we are already reducing the requirements of a JSEP client as explained in the gateway draft: So, I would challenge that reducing the requirements on the WHIP client makes WHIP non-webrtc compliant. The idea behing this change is that there is no need for a whip client talking to a whip server to have to implement both active and passive setup. Negotiating actpass when only one mode is supported seems completelly wrong to me.\nWe've already found out that Tim always sends passive from servers, while Janus always sends active when it receives an offer: this means that whatever you force on the WHIP client will break one of the two servers. I think clients should send actpass as the RFC says.\nMandating clients to implement both active and passive is a no go for me. The goal has always been to reduce the burden on the clients because there was none available. Increasing the complexity of implementing them is not going to help.\nbtw, I think that Tim is referring that his servers respond as passive, while their clients sends active. So his (whip) clients would be able to talk with janus server, which is what we intended.\nNo, they wouldn't, because Janus expects actpass when it receives an offer, and will set the role of active for itself. Two actives won't get anywhere.\nWhen you say 'clients', what exactly are you referring to? The WebRTC library that you are using in the application?\nNo, I am referring to the WHIP clients\nImagine I have a smart camera acting as a WHIP source which can only do Active DTLS (because of a hardware limitation). Let's assume when it tries to talk to Janus it will fail because Janus will only do active. If the camera signals this in it's WHIP offer with setup:active then the O/A fails early with a sensible error message http status:402 If it sends a (mendacious) setup:actpass then the connection will silently fail after ice connects and a 16 second DTLS timeout. So sending setup:active in the offer is the right thing for the camera to do. The other option is to say that such hardware shouldn't be doing WHIP, which seems like a needless limitation on hardware encoder design.\nIncidentally I checked my (open source) Whipi client and it actually sends ! Which is why it works with Janus. (The target hardware (raspberry pi) does not have a secure element, so it is just a code size simplification not a hardware limitation - It still halves the complexity of the DTLS layer by only having to implement dtls server)\nBack to the High level point. The WebRTC spec was designed to be Peer-to-Peer - so it makes sense that both sides need equal capabilities since you never know who will initiate an offer - so mandating actpass in the offer is a good idea. WISH/WHIP is explicitly client-server so asymmetry is to be expected. I would argue that the original wording is wrong however - \"WHIP endpoint MUST use a setup attribute value of setup:passive in the SDP answer.\" Should read \"WHIP endpoint SHOULD use a setup attribute value of setup:passive in the SDP answer. Or if it does not support passive it MUST reply with an http 402 error status and an explanation error message.\"\nI think it shall be MUST. If the server cannot do that for whatever reason it will obviously have to return an error. We can for sure specify which error code to use. Also, I think it is better to say that the server MUST follow the procedures in RFC 4573. 
Regarding the WHIP client, a compromise would be to say that it SHOULD use actpass, according to 5763, unless it only implements the DTLS client role, in which case it can use active. Because we should not forbid a WHIP client from using the standard 5763 behavior if it is able to do so. In addition, there could be a note saying a few words about why a WHIP client might only implement the DTLS client role.\nHere is a new text suggestion (I have not created a PR yet) for Section 4.2. Note that I also modify the BUNDLE text. The SDP BUNDLE mechanism [RFC8843] MUST be used by both WHIP clients and media servers. Each m- line MUST be part of a single BUNDLE group. Hence, when a WHIP client sends an SDP offer, it MUST include a bundle-only attribute in each bundled m- line. In addition, per [RFC8843] the WHIP client and media server will use RTP/RTCP multiplexing for all bundled media. The WHIP client and media server SHOULD include the rtcp-mux-only attribute in each bundled m- line. When a WHIP client sends an SDP offer, it SHOULD use the setup:actpass attribute, as defined in [RFC5763]. However, if the WHIP client only implements the DTLS client role, it MAY use the setup:active attribute. NOTE: [RFC5763] defines that the offerer must use the setup:actpass attribute. However, the WHIP client will always communicate with a media server that is expected to support the DTLS server role, in which case the client might choose to only implement support for the DTLS client role.\nI like that and I think it is a good compromise, creating the PR now\nI have split it into two PRs so we can review/merge them independently: BUNDLE: URL SETUP: URL They include the proposed changes by NAME with a bit of rewording to match the notation of the referenced RFCs, and include that the server SHOULD respond with a 422 error if setup:active is not supported.\nWhen reading this again, I think there still need to be some changes: both the WHIP endpoint and the media server need to support BUNDLE/multiplexing: the WHIP endpoint needs to support the BUNDLE SDP extension, while the media server needs to support the multiplexed media associated with the BUNDLE group."} {"_id":"q-en-webrtc-http-ingest-protocol-9b007690b830600c0559cfef6a4be4d7dda30792abcb38ca4f3c8e4104a6ebf9","text":"
I don't think we should explain how to use webrtc apis in the spec.\nSo WHIP is restricted to the case where the server is on a public IP, with no firewall? That would be fine with me, but if that's the case then IMHO it needs to be stated explicitly in the draft. What happens if the client is on a restrictive network that only lets outgoing TCP through, which is unfortunately still the case in many university networks? Are you assuming that the server supports passive TCP ICE? If so, should that be mentioned in the draft somewhere?\nThe turn server configuration via OPTIONS is not going to be removed. I can add a note like:"}